IWETHEY v. 0.3.0 | TODO

Welcome to IWETHEY!

a premature birth announcement
Lemoine worked with a collaborator to present evidence to Google that LaMDA was sentient. But Google vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, looked into his claims and dismissed them. So Lemoine, who was placed on paid administrative leave by Google on Monday, decided to go public.
Look, as you all know I’m sympathetic to the notion that machine sentience is not merely possible but eventually likely, assuming that advances in the science are able to outpace the forces of civilizational collapse, but this account leaves me unpersuaded: judging from his background as described, I think this guy has a serious case of confirmation bias that’s led him to get way over his skis. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code.” Actually, Blake, it does matter, and it’s going to matter at least as much if not more if and when sentience is achieved. Interestingly, even one of the people at Google who stepped rather hard on this guy’s assertions appears to believe that the technology is tending toward eventual consciousness. Certainly—see the previous thread—it has already achieved prodigies of mimicry.

Something I find interesting is that the “deep learning” approach to AI that has yielded these seductive results bears more than a passing resemblance (although I imagine there are significant differences “under the hood”) to Doug Lenat’s long-running “Cyc” project, which as a lay person I thought unpromising when I first read about it a quarter of a century ago.

multicore-dially,
Edited by rcareaga June 11, 2022, 01:21:46 PM EDT
Interesting. Thanks for the pointer.
Yeah, given enough time, speed, resources, etc., a close-enough approximation to "intelligence" can be achieved - at least in some areas. And that's a good thing, to my way of thinking. (Why not have a machine search through zettabytes of information to figure out connections and associations that can make things like disease treatment better?) I'm old enough to remember when some thought that chess computers would "never" be good enough to beat a human grand master... I'll read them this weekend.

Possibly relatedly, Android Police:

In summary, CSAIL researchers have found (via TechCrunch) a way to break Apple's pointer authentication — essentially, a write-and-read cryptographic check verifying that an app's pointers are referencing the same locations in memory. The company's implementation of pointer authentication has generally helped the M1 contain pretty much any bug with potential system-wide impacts by catching a pointer that fails the test and triggering an app crash.

The attack uses a mix of software and hardware methods — including exploits to speculative code execution that made threats like 2018's Meltdown and Spectre vulnerabilities so scary — to beat pointer authentication by simply guessing all of a finite series of authentication codes. Opening up this gate then allows any existing software bug, including ones targeting the kernel, to wreak havoc as they would on other chips. CSAIL says that its cracking method, which it dubs PACMAN, can be executed remotely and, because of its reliance on a hardware side channel, can't easily be patched.

MIT's researchers theorize that any chip which uses speculative execution to handle pointer authentication may be susceptible to PACMAN. Apple employs its pointer authentication on its arm64e chips which include all of the M1 series, the new M2 chip, as well as A-series chips from the A12 onward. Arm-based chips from other manufacturers like MediaTek, Qualcomm, and Samsung could be at risk, but testing has not been done to prove risk to those platforms.

Details of PACMAN are available in the full paper from MIT.


(See the original for embedded links.)
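The core of the attack as the excerpt describes it is simple: the PAC is a short code squeezed into a pointer's unused bits, so its space is finite, and the hardware side channel lets an attacker test a guess without the crash that a real failed authentication would cause. Here's a toy sketch of that logic in Python; the key size, PAC width, and function names are all illustrative, not Apple's actual scheme:

```python
import hmac
import hashlib

# Toy model of pointer authentication: a MAC over the pointer value,
# truncated to a few bits because it must fit in the pointer's unused
# high bits. Real arm64e PACs are similarly small (on the order of
# 11-31 bits depending on configuration).
PAC_BITS = 16

def compute_pac(key: bytes, pointer: int) -> int:
    """What the CPU does when signing a pointer (toy version)."""
    mac = hmac.new(key, pointer.to_bytes(8, "little"), hashlib.sha256)
    return int.from_bytes(mac.digest()[:2], "little") % (1 << PAC_BITS)

def side_channel_oracle(key: bytes, pointer: int, guess: int) -> bool:
    # Stand-in for the PACMAN side channel: it reveals whether a guessed
    # PAC would verify *without* triggering the app crash that a failed
    # authentication normally causes. That crash-free property is the
    # whole trick; without it, wrong guesses would be fatal and noisy.
    return guess == compute_pac(key, pointer)

def brute_force_pac(key: bytes, pointer: int) -> int:
    # Because the PAC space is finite, just try every code until one
    # verifies -- "simply guessing all of a finite series of
    # authentication codes," as the article puts it.
    for guess in range(1 << PAC_BITS):
        if side_channel_oracle(key, pointer, guess):
            return guess
    raise RuntimeError("unreachable: some code in the space must verify")

key = b"secret-per-boot-key"
ptr = 0x7FFF_DEAD_BEEF
assert brute_force_pac(key, ptr) == compute_pac(key, ptr)
```

At 16 bits that's at most 65,536 guesses, which is why a non-crashing oracle turns "cryptographic check" into "small counting problem."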

This isn't surprising to me. Whenever there are short-cuts (especially those implemented to speed things up), people want to use them in unintended ways. And lots and lots of nefarious actors like nothing better than finding ways to poke around in "protected" areas in popular CPUs, so there are great incentives to do so...

Cheers,
Scott.
until AI can solve the following equation 7*7 using the following logic
WOW she has nice tits, okay 7 times 7
2 times 7 is 14
hmm out of coffee...need to get sugar as well at the store
14 twice is 28, wonder if that blonde is 28 and shaves her pubes, plus 14 is 42, plus, wonder if those tits are C plus?

7 is 49.

until then AI is a fast binary calculator
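For the record, the distracted decomposition above does land on the right answer; stripped of the commentary it's just repeated addition:

```python
# 7 * 7 by boxley's decomposition, minus the distractions
step = 2 * 7       # 14
step = step * 2    # "14 twice is 28"
step = step + 14   # 42
step = step + 7    # 49
assert step == 7 * 7
print(step)  # 49
```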
"Science is the belief in the ignorance of the experts" – Richard Feynman
I applaud the master
boxley just failed the Turing test
:) !
You know that takes you back to 1950.
Alex

"There is a cult of ignorance in the United States, and there always has been. The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'"

-- Isaac Asimov
     a premature birth announcement - (rcareaga) - (5)
         Interesting. Thanks for the pointer. - (Another Scott)
         until AI can solve the following equation 7*7 using the following logic - (boxley) - (3)
             I applaud the master -NT - (crazy)
             boxley just failed the Turing test -NT - (rcareaga) - (1)
                 :) ! - (a6l6e6x)
