I think the "there's no dividing line" view is compelling.
As mentioned in Drew's thread in the Open forum a while ago, I think this paper
is a good read:
Human intelligence is one point on a wide spectrum that runs from cockroach through mouse to human. Actually, it might be better described as a probability distribution rather than a single point. It is not clear in arguments like the above what level of human intelligence needs to be exceeded before runaway growth kicks in. Is it some sort of average intelligence? Or the intelligence of the smartest human ever?
If there is one thing that we should have learnt from the history of science, it is that we are not as special as we would like to believe. Copernicus taught us that the universe did not revolve around the earth. Darwin taught us that we were little different from the apes. And artificial intelligence will likely teach us that human intelligence is itself nothing special. There is therefore no reason to suppose that human intelligence is some special tipping point that, once passed, allows for rapid increases in intelligence. Of course, this doesn't preclude there being some level of intelligence which is a tipping point.
One argument put forward by proponents of a technological singularity is that human intelligence is indeed a special point to pass because we are unique in being able to build artefacts that amplify our intellectual abilities. We are the only creatures on the planet with sufficient intelligence to design new intelligence, and this new intelligence will not be limited by the slow process of reproduction and evolution. However, this sort of argument presupposes its conclusion. It assumes that human intelligence is enough to design an artificial intelligence that is sufficiently intelligent to be the starting point for a technological singularity. In other words, it assumes we have enough intelligence to initiate the technological singularity, the very conclusion we are trying to draw. We may or may not have enough intelligence to design such an artificial intelligence. It is far from inevitable. And even if we do have enough intelligence to design super-human artificial intelligence, that super-human artificial intelligence may not be adequate to precipitate a technological singularity.
Even setting aside the "singularity" issue (where intelligence suddenly grows exponentially or faster), keeping in mind that there isn't some magical threshold where something becomes "sentient" or "super-intelligent" seems to me like a good habit of mind. These intelligent boxes will still be limited by what we design them to do. The Go-machine isn't going to be making watches. It's not going to be reproducing or building other Go-machines. It crunches numbers in a very particular way. A Go-machine that can win on a 100x100 grid still won't be able to make watches or reproduce itself.
Yeah, machines will get "smarter". Some will be more general-purpose than others. But we're still the king of the hill even if they can beat us at board games, and will be for a while.
That isn't to say that we don't need to carefully consider how to adjust to machines doing more and more of the work that humans currently do. Of course, that's a huge issue (especially as the population continues to grow). But it's a separate issue from the question of "true AI" itself, I think.