Post #266,709
9/5/06 11:47:27 AM
|

Explanation
Supermassive parallelism and vast data bandwidth.
We simply don't hook billions of slow CPUs together with a hugely efficient interconnect fabric, like the human brain does, cuz we can't.
Yet.
Peter [link|http://www.no2id.net/|Don't Let The Terrorists Win] [link|http://www.kuro5hin.org|There is no K5 Cabal] [link|http://guildenstern.dyndns.org|Home] Use P2P for legitimate purposes! [link|http://kevan.org/brain.cgi?pwhysall|A better terminal emulator]
|
Post #266,792
9/6/06 12:12:45 AM
|

We can - but it wouldn't run C well
So we don't.
However, Henry Fuchs did just that with the early PixelPlane architectures.
[link|http://accad.osu.edu/~waynec/history/lesson5.html|http://accad.osu.edu...tory/lesson5.html]
[link|http://www.blackbagops.net|Black Bag Operations Log]
[link|http://www.objectiveclips.com|Artificial Intelligence]
[link|http://www.badpage.info/seaside/html|Scrutinizer]
|
Post #266,798
9/6/06 4:31:53 AM
|

Would love to see the rough sketches,
ideas for the microcode in the consciousness module, the first 500 routines for the Aha! reflex and maybe.. Vols I and II of the proposed handler for the occasional evanescent impression (and its discretion plug-in natch.)
But what I'd really Really like to see the early flowcharts for, is: the ineffable overlay.
Anything less? just another adding machine -- something Authoritative -- that could write tickets for proceeding through a Stop sign at 0.0013 kph == Against the Rules, ergo Not-good ... Is-bad == Punish.
(Still... if this work keeps a few restless minds away from further work on the Q-bomb.. wtf - give 'em a few 10-billions/yr for a while; at least until we get past the unpleasantness over ~ Why can't a Wog be more like a Gringo? Y'know?)
|
Post #266,803
9/6/06 9:13:52 AM
|

Double (or more) standard
When you can show me recognition of the ineffable in more than 25% of the population, then maybe I'll use that as a standard for A.I. Until then you sound like you're looking for something more human than humans.
===
Purveyor of Doc Hope's [link|http://DocHope.com|fresh-baked dog biscuits and pet treats]. [link|http://DocHope.com|http://DocHope.com]
|
Post #266,850
9/6/06 5:55:08 PM
9/6/06 6:00:57 PM
|

Perhaps we are missing a point here -
which you just made, but didn't notice (?)
Yes - if what you want is YAN robotic machine which can do a few more things sans human supervision: carrion. (They Will - there's VC $$ for eliminating human workfare, in a spread-sheet-defined 'World'.)
Since it's only the (.001-to-1%) who ever accomplish anything notable/useful, while we don't remotely understand any Aspect of their special qualities - why, there's one massive Ignorance-flag for The Designer. (No revelation that many humans go through the complete birth-death cycle by rote, sans a single original thought, as you suggest.) Machine-like behavior; present in every corporate slogan and forming the grammar of that familiar disingenuous patois we have so come to despise. What a legacy!
So which "AI" is AI going to settle for?
Personally, I think that the best our organizations (and their motivation$) could ever produce.. is apt to be of <class=dumbth scale=refined>. The best of our intellects won't be modelling their self-replication any Eon soon IMO. I'll give odds.. or, when floated: sell short.
oTpy

Edited by Ashton
Sept. 6, 2006, 06:00:57 PM EDT
|
Post #266,852
9/6/06 6:02:50 PM
|

No, it'll happen
Idiot parents sometimes raise genius kids ... by accident, I guess. Once we've got the A.I. that can produce the most efficient design of its own successor, then all bets are off and SkyNet is real.
We all rate "self awareness" as somehow indicative of intelligence. But pure dumb chance and lots of trial and error came up with the design for us. Electronic "generations" can operate on a much faster scale. Who's to say what that'll produce?
And yes, I noticed that point.
===
Purveyor of Doc Hope's [link|http://DocHope.com|fresh-baked dog biscuits and pet treats]. [link|http://DocHope.com|http://DocHope.com]
|
Post #266,919
9/7/06 11:48:19 AM
|

See Charles Stross, Accelerando, Economics 2.0
|
Post #266,992
9/7/06 11:29:15 PM
|

See also "Dial F for Frankenstein"
Reproduced here in [link|http://www.cybered.co.th/eng/DIAL.html|hideous hypertext.]
cordially,
Die Welt ist alles, was der Fall ist.
|
Post #267,004
9/8/06 2:31:03 AM
|

I was thinking... (upon re-reading that)
Frankenstein's Monster Sings Christmas Carols while roasting chestnuts on an open fire, accompanied by Roger Whittaker and Vanilla Ice.
Seems about right.
-- [link|mailto:greg@gregfolkert.net|greg], [link|http://www.iwethey.org/ed_curry|REMEMBER ED CURRY!] @ iwetheyFreedom is not FREE. Yeah, but 10s of Trillions of US Dollars? SELECT * FROM scog WHERE ethics > 0;
0 rows returned.
|
Post #267,034
9/8/06 11:44:57 AM
|

The thing about Frankenstein's monster...
...is that the "monster" was basically good at heart until rejected as an abomination by its own creator and others around it. Even then, it only sought revenge against its own creator.
Hurt me if you must, but let the duckie go!
|
Post #267,041
9/8/06 12:26:31 PM
|

Ummm, you definitely missed that point.
-- [link|mailto:greg@gregfolkert.net|greg], [link|http://www.iwethey.org/ed_curry|REMEMBER ED CURRY!] @ iwetheyFreedom is not FREE. Yeah, but 10s of Trillions of US Dollars? SELECT * FROM scog WHERE ethics > 0;
0 rows returned.
|
Post #267,043
9/8/06 12:36:01 PM
|

Naw, I went away at a tangent.
That was just something that always bothered me whenever people would bring up Frankenstein as the ultimate "OMG Technology EVIL!" book.
Hurt me if you must, but let the duckie go!
|
Post #267,061
9/8/06 4:18:38 PM
|

No... I was thinking about the irony of the ....
AI thingy and the fact that Abby something trying to sing with grunts and groans and screeches (due to the fire) vs. the Ultimate Evil...
Though adding Roger Whittaker and Vanilla Ice makes it a trilogy of Evul from teh MOOSic wurld.
Sheesh, you really do have to get those neck-bolts adjusted. And soon.
-- [link|mailto:greg@gregfolkert.net|greg], [link|http://www.iwethey.org/ed_curry|REMEMBER ED CURRY!] @ iwetheyFreedom is not FREE. Yeah, but 10s of Trillions of US Dollars? SELECT * FROM scog WHERE ethics > 0;
0 rows returned.
|
Post #267,045
9/8/06 1:40:43 PM
|

Who is the target audience?
That glossary is insane.
===
Purveyor of Doc Hope's [link|http://DocHope.com|fresh-baked dog biscuits and pet treats]. [link|http://DocHope.com|http://DocHope.com]
|
Post #266,825
9/6/06 1:07:06 PM
|

I expect in the long term the widespread adoption of C . . .
. . will come to be seen as one of the greatest mistakes in the development of computers.
I remember a comment from an interview in Computer Languages ("The magazine for those who program in more than two languages"). The interviewed luminary was asked, "If major development companies are using languages like Modula, why don't we hear more endorsements? All we hear about is C."
The response was, "These companies are reluctant to issue endorsements because they consider their non-use of C to be a competitive trade secret".
[link|http://www.aaxnet.com|AAx]
|
Post #266,826
9/6/06 1:46:28 PM
|

Was that the LISP guys who did the Yahoo! stores?
===
Purveyor of Doc Hope's [link|http://DocHope.com|fresh-baked dog biscuits and pet treats]. [link|http://DocHope.com|http://DocHope.com]
|
Post #266,848
9/6/06 5:44:49 PM
|

You are not alone there
At PARC we estimated that we could get about a factor of 5 from special low level (HW+firmware) design. If Moore's Law is doubling every 18 months, then this is almost 4 years of being ahead if you can compete with ordinary silicon (your factor of 8 would be 3 turns of Moore's Law, or about 4.5 to 5 years). The Alto was a big success because it took Chuck Thacker and two technicians only about 3.5 *months* to make the first machine, and it only took another month to move the first Smalltalk over from the NOVA to the Alto. So we were off and running.
If we believe Chuck's estimate that we've lost about a factor of a thousand in efficiency from the poor design choices (in many dimensions) of Intel and Motorola (and Chuck is a very conservative estimator), then this is 10 doublings lost, or about 180 months, or about *15 years* for Moore's Law to catch up to a really good scheme. This is a good argument for trying very different architectures that allow a direct target to be highly efficient VHLL *system* execution.
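Kay's back-of-the-envelope numbers check out. A minimal sketch, assuming only the 18-month doubling period he cites (the function name is mine, purely illustrative):

```python
import math

DOUBLING_MONTHS = 18  # Moore's Law doubling period assumed in the post

def catchup_months(speedup_factor):
    """Months of Moore's Law growth needed to match a one-time speedup factor."""
    doublings = math.log2(speedup_factor)
    return doublings * DOUBLING_MONTHS

# A factor of 8 is exactly 3 doublings: 54 months, i.e. 4.5 years.
print(catchup_months(8) / 12)     # 4.5
# A factor of 1000 is ~10 doublings: ~180 months, i.e. ~15 years.
print(catchup_months(1000) / 12)  # ~14.9
```

So a one-time thousandfold efficiency loss really does cost commodity silicon about fifteen years of catching up, on the stated assumptions.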
A small group approaching this should try to do everything with modern CAD and avoid getting messed up with intricate packaging problems of many different types. So I would look at one of the modern processes that allows CPU and memory logic to be on the same die and try to make what is essentially an entire machine on that die (especially with regard to how processing, memories and switching relate). Just how the various parallelisms trade off these days and what is on and off chip would be interesting to explore. A good motivator from some years ago (so it would be done a little differently today) is Henry Fuchs's "pixel planes" architecture for making a renderer as a smart memory system that has a processor for each pixel. Such a system can have a slower clock and still beat the pants off a faster single-processor von Neumann type architecture.
-Alan Kay [link|http://lists.squeakfoundation.org/pipermail/squeak-dev/2003-March/055371.html|http://lists.squeakf...March/055371.html]
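The pixel-planes trick Kay describes is easy to sketch in software: every pixel simultaneously evaluates a broadcast linear expression A*x + B*y + C, and a convex polygon is just the AND of several half-plane tests. A toy sequential stand-in (not Fuchs's actual hardware; the grid size and edge coefficients are made-up examples):

```python
# Each pixel "owns" a tiny processor; here a nested loop simulates what the
# hardware does for all H*W pixels at once.
H = W = 8

def rasterize(edges):
    """Return the pixels on the non-negative side of every edge equation (a, b, c)."""
    covered = set()
    for y in range(H):
        for x in range(W):
            if all(a * x + b * y + c >= 0 for a, b, c in edges):
                covered.add((x, y))
    return covered

# Triangle bounded by x >= 1, y >= 1, x + y <= 7 (illustrative numbers)
tri = [(1, 0, -1), (0, 1, -1), (-1, -1, 7)]
print(len(rasterize(tri)))  # 21 pixels covered
```

The point of the architecture is that the per-pixel evaluation is constant time regardless of polygon size, which is why a slow clock with massive parallelism can beat a fast serial rasterizer.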
[link|http://www.blackbagops.net|Black Bag Operations Log]
[link|http://www.objectiveclips.com|Artificial Intelligence]
[link|http://www.badpage.info/seaside/html|Scrutinizer]
|