IWETHEY v. 0.3.0 | TODO

Along those lines, why The Singularity may never happen...
The Singularity May Never Be Near (4 page .pdf) by Toby Walsh at the UNSW.

We are currently seeing impressive advances in AI using deep learning (Edwards 2015). This has dramatically improved the state of the art in speech recognition, computer vision, natural language processing, and a number of other domains. These improvements have come largely from using larger data sets and deeper neural networks:

“Before, neural networks were not breaking records for recognizing continuous speech; they were not big enough.” Yann LeCun, quoted in (Edwards 2015)

Of course, more data and bigger neural networks mean we need more processing power. As a result, GPUs are now frequently used to provide this processing power. However, being better able to recognize speech or identify objects has not led to an improvement in deep learning itself. The deep learning algorithms have not improved themselves. Any improvements to the deep learning algorithms have been hard won by applying our own intelligence to their design.
It's an interesting paper.

Cheers,
Scott.
And because we're still too attached to our own measures of intelligence
Moravec’s paradox is this observation: “The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard.”

Basically, early AI people, being a bit proud of their status as Superior Human Specimens as Validated By SAT Scores and Chess-Skills, assumed that getting computers to beat them at those things would be the hard mission. They were wrong. Things even low-SAT-score chess morons can do, like recognizing their mother’s face, opening a door latch, or getting a knock-knock joke, turned out to be far harder.

I can't find the quote, but he also points out that what we used to think of as the "hard" problems turn out to be computationally easy, just requiring huge search trees over the possible decisions; while the "unconscious" things like face recognition or natural language are the ones that take raw compute power.

Like Ken Jennings said when they asked him how it felt to lose to Watson (paraphrasing): "I challenge Watson to a rematch. This time: dancing."

For all we know, we might still be a conceptual breakthrough or two away from solving the unconscious problems.
--

Drew
     What makes us human - (drook) - (12)
         Along those lines, why The Singularity may never happen... - (Another Scott) - (1)
             And because we're still too attached to our own measures of intelligence - (drook)
         Re: What makes us human - (boxley)
         Well, there's always this: - (dmcarls) - (6)
             or Amazon's Echo... - (dmcarls) - (5)
                 Well she's not as bright as HAL9000, but - (Ashton) - (4)
                     Have you seen Black Mirror? - (drook) - (1)
                         No. - (Ashton)
                     Be careful ! (from 65 yrs ago) - (dmcarls) - (1)
                         I remember The Veldt - (Ashton)
         Instructions - (pwhysall) - (1)
             Thank you, Peter ..whelmed. - (Ashton)
