IWETHEY v. 0.3.0 | TODO

Welcome to IWETHEY!

I think we'll see more of these discussions
We won't be talking so much about Ex Machina-style anthropomorphic fembots possessing and acting upon what Dennett has called the intentional stance. But over the coming decade, so-called "intelligent systems," possessing nothing like self-awareness, will encroach upon capabilities formerly thought to be uniquely human, and will be deployed in the economy to push increasing numbers of knowledge workers off the raft. (I am pretty certain that even at today's state of the art, my old job in the International Division at Flatline, Comatose, Torpor & Drowse could be performed entirely by AI, and I'd be willing to wager a decent sum that in under ten years it will be.) The social and political consequences should prove a rich soil for discourse and dispute.

Of course, machine sentience is a sexier topic, and as I believe I've stated before, I think it's going to creep up on us, and that we'll see the goalposts being moved repeatedly as each major advance is reported. When and if the existence of sentient AI is recognized, I suspect it will be a few years after the fact. The implications of that development should also make for some interesting threads in advance of the fact (after the fact, of course, it's Singularity or Skynet, and anyway the current estimates for fusion-powered conscious software put its attainment in my mid-nineties), so thank you, management, for this new sandbox.

I'd already looked over the linked article, and other pieces on the site, last year. One of the links I followed, either directly or at once-remove, led me to the Parable of the Sparrows, as framed by Nick Bostrom and reported by The New Yorker:
The book begins with an “unfinished” fable about a flock of sparrows that decide to raise an owl to protect and advise them. They go looking for an owl egg to steal and bring back to their tree, but, because they believe their search will be so difficult, they postpone studying how to domesticate owls until they succeed. Bostrom concludes, “It is not known how the story ends.”
The New Yorker piece, also an interesting read, is here. As you might gather from its title, "The Doomsday Invention," Bostrom's concerns extend a bit further than issues of economic dislocation.

weak
The NYT weighs in on the subject today (and aren't we just rolling with the zeitgeist here at IWT?). While I also believe that the Kurzweilian "singularity" is unlikely to occur by 2045, the author's argument against advanced AI comes down to "in the sixties some scientist told the Times that the Navy was building a sentient computer that would be ready in ten years, and it didn't happen so it never will."

Edited by rcareaga April 7, 2016, 05:17:00 PM EDT
It won't happen until it does.
Welcome to Rivendell, Mr. Anderson.
I think the "there's no dividing line" view is compelling.
As mentioned in Drew's thread in the Open forum a while ago, I think this paper is a good read:

Human intelligence is one point on a wide spectrum that takes us from cockroach through mouse to human. Actually, it might be better to say it is a probability distribution rather than a single point. It is not clear in arguments like the above what level of human intelligence needs to be exceeded before runaway growth kicks in. Is it some sort of average intelligence? Or the intelligence of the smartest human ever?

If there is one thing that we should have learnt from the history of science, it is that we are not as special as we would like to believe. Copernicus taught us that the universe did not revolve around the earth. Darwin taught us that we were little different from the apes. And artificial intelligence will likely teach us that human intelligence is itself nothing special. There is no reason therefore to suppose that human intelligence is some special tipping point, that once passed allows for rapid increases in intelligence. Of course, this doesn’t preclude there being some level of intelligence which is a tipping point.

One argument put forward by proponents of a technological singularity is that human intelligence is indeed a special point to pass because we are unique in being able to build artefacts that amplify our intellectual abilities. We are the only creatures on the planet with sufficient intelligence to design new intelligence, and this new intelligence will not be limited by the slow process of reproduction and evolution. However, this sort of argument supposes its conclusion. It assumes that human intelligence is enough to design an artificial intelligence that is sufficiently intelligent to be the starting point for a technological singularity. In other words, it assumes we have enough intelligence to initiate the technological singularity, the very conclusion we are trying to draw. We may or may not have enough intelligence to be able to design such artificial intelligence. It is far from inevitable. Even if we have enough intelligence to design super-human artificial intelligence, this super-human artificial intelligence may not be adequate to precipitate a technological singularity.

Even ignoring the "singularity" issue (where intelligence suddenly grows exponentially or faster), keeping in mind that there isn't some magical threshold where something becomes "sentient" or "super-intelligent" seems to me a good idea. These intelligent boxes will still be limited by what we design them to do. The Go-machine isn't going to be making watches. It's not going to be reproducing or making other Go-machines. It crunches numbers in a very special way. A Go-machine that can win on a 100x100 grid still won't be able to make watches or reproduce itself.

Yeah, machines will get "smarter". Some will be more general-purpose than others. But we're still the king of the hill even if they can beat us at board games, and will be for a while.

That isn't to say that we do not need to carefully consider how to adjust to machines doing more and more of the work that humans do. Of course, that's a huge issue (especially as the population continues to grow). But that's a separate issue from "true AI" itself, I think.

Lots of sci fi on this
Go-bot is not general purpose, but it's useful. Reasonable to assume it will be good - with some tweaks - at other things, too. Maybe designing optimal power-grid layouts.

But what happens when it tries too hard to "win" at eminent domain? Yeah, "War Games" touched on the real first problem with AI: computers don't know what's real and what's simulation.

Testing display

