Lemoine worked with a collaborator to present evidence to Google that LaMDA was sentient. But Google vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, looked into his claims and dismissed them. So Lemoine, who was placed on paid administrative leave by Google on Monday, decided to go public.

Look, as you all know, I’m sympathetic to the notion that machine sentience is not merely possible but eventually likely, assuming that advances in the science can outpace the forces of civilizational collapse. But this account leaves me unpersuaded: judging from his background as described, I think this guy has a serious case of confirmation bias that has led him to get way over his skis. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code.” Actually, Blake, it does matter, and it’s going to matter at least as much, if not more, if and when sentience is achieved. Interestingly, even one of the people at Google who stepped rather hard on this guy’s assertions appears to believe that the technology is tending toward eventual consciousness. Certainly—see the previous thread—it has already achieved prodigies of mimicry.
Something I find interesting is that the “deep learning” approach to AI that has yielded these seductive results bears more than a passing resemblance (although I imagine there are significant differences “under the hood”) to Doug Lenat’s long-running “Cyc” project, which, as a lay person, I thought unpromising when I first read about it a quarter of a century ago.
multicore-dially,