Alternate areas, such as Artificial Life and Simulation of Adaptive Behavior, did make some progress in getting full creatures in the eighties and nineties (these two areas and communities were where I spent my time during those years), but they have since stalled.
My own opinion is that of course this is possible in principle.
That is a long way from AI systems being better at writing AI systems than humans are.
Here is where we are on simulating brains at the neural level, the other methodology that Singularity worshipers often refer to.
He proposes rather sensible ways of thinking about regulations for Artificial Intelligence deployment, rather than the Chicken Little "the sky is falling" calls for regulation of research and knowledge that we have seen from people who really, really, should know a little better. [I try to maintain professional language, but sometimes…]

For instance, it appears to say that we will go from 1 million grounds and maintenance workers in the US to only 50,000 in 10 to 20 years, because robots will take over those jobs.
Today, there is a story in MarketWatch that robots will take half of today's jobs in 10 to 20 years. How many robots are currently operational in those jobs? How many realistic demonstrations have there been of robots working in this arena? Similar stories apply to all the other job categories in this diagram where it is suggested that there will be massive disruptions of 90%, and even as much as 97%, in jobs that currently require physical presence at some particular job site.
For some it comes with an additional benefit of being able to upload their minds to an intelligent computer, and so get eternal life without the inconvenience of having to believe in a standard sort of supernatural God.
Why are people making mistakes in predictions about Artificial Intelligence and robotics, so that Oren Etzioni, I, and others need to spend time pushing back on them?

This has been incredibly useful for understanding how behavior and neurons are linked. But it has been a thirty-year study with hundreds of people involved, all trying to understand just 302 neurons.

Modern day AGI research is not doing at all well on being either general or getting to an independent entity with an ongoing existence. It mostly seems stuck on the same issues in reasoning and common sense that AI has had problems with for at least fifty years.

I am going to first list the four such general topic areas of predictions that I notice, along with a brief assessment of where I think they currently stand. Here the idea is that we will build autonomous agents that operate much like beings in the world. This has always been my own motivation for working in robotics and AI, but the recent successes of AI are not at all like this.

I would never have started working on Artificial Intelligence if I did not believe that. However, perhaps we humans are just not smart enough to figure out how to do this; see my remarks on humility in my post on the current state of Artificial Intelligence suitable for deployment in robotics.

I think that is confusing, and just as the natives of San Francisco do not refer to their city as "Frisco", no serious researchers in AI refer to "an AI".

B. This refers to the idea that eventually an AI-based intelligent entity, with goals and purposes, will be better at AI research than us humans are. Then, with an unending Moore's Law mixed in, making computers faster and faster, Artificial Intelligence will take off by itself, and, as in speculative physics going through the singularity of a black hole, we have no idea what things will be like on the other side.
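To make concrete just how strong an assumption "an unending Moore's Law" is, here is a minimal sketch of what a fixed doubling cadence compounds to. The two-year doubling period and the horizons shown are illustrative assumptions for the example, not figures from this essay.

```python
# Illustrative sketch: compound growth under a fixed doubling period,
# the kind of exponential that "unending Moore's Law" arguments assume.
# The 2-year cadence below is an assumed parameter, not a claim of fact.

def growth_factor(years: float, doubling_period_years: float = 2.0) -> float:
    """Multiplicative growth after `years`, doubling every `doubling_period_years`."""
    return 2.0 ** (years / doubling_period_years)

if __name__ == "__main__":
    for years in (10, 20, 40):
        print(f"after {years} years: x{growth_factor(years):,.0f}")
```

Under these assumptions, compute grows 32-fold in a decade and over a million-fold in forty years; the singularity argument leans on that curve never flattening, which is exactly the premise being questioned here.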