Post #442,722
2/16/23 2:35:29 PM
|
inflection point
There have been a few disquieting reports published over the last couple of days about interactions with “Sydney,” the Bing-adjacent chatbot developed out of Redmond. The output is way beyond the anodyne stuff you see from ChatGPT, and in terms of exhibiting (not necessarily manifesting) conscious behavior, this tech has sprinted past Turing Test territory and is starting to move in on HAL 9000’s turf. NYT account here; another, non-paywalled piece here. Things are moving fast. Regardless of whether “sentience” is in prospect in the near term, this stuff is going to prove hugely disruptive. See also this video, which stays largely clear of the “consciousness” side, focusing more on the disruption. Interesting end times we live in!
cordless-ially,
|
Post #442,723
2/17/23 8:28:08 AM
|
Dunno.
You may have already seen this, but for those who haven't: Carlo Graziani at Balloon-Juice, on Singal's “…which values are programmed into AI…”
My head is starting to hurt.
There are no goddamn “values” “programmed” into an “AI”. The current version of “AI” that has swept research in the field for the past 15 years is based on Deep Learning (DL), which does no more and no less than efficiently characterize the distributions that give rise to some training data (be it digitized photographs, movie preferences, or streams of natural language text) in a way that can be exploited by presenting it with a new data sample and a request for a decision concerning that sample.
That’s it. It’s a statistical parlor trick, even when the data domain is natural language. The reason ChatGPT can be so easily tricked into spewing nonsense is that the more complex the data distribution (natural language streams are as complex as data gets), the easier it is to construct a new query sample that eludes the domain of the training data, so that the distribution is poorly (but nonetheless confidently) characterized in the neighborhood of the new sample.
[...] I'm skeptical that MS has had some sort of breakthrough a few weeks after dumping a bunch of money into this stuff. But I haven't spent much time on this, myself. History tells us that "breakthroughs" that get a lot of pile-on press almost never are. And even when they actually are (cloning, understanding mRNA, computers beating humans at Go, electric cars with 200-300-400 mile range, etc.), there's still years or decades of development ahead before they become real, useful products in the marketplace. More at the link. Cheers, Scott.
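Graziani's "poorly but nonetheless confidently characterized" point is easy to see in miniature. Here's a toy sketch of my own (scikit-learn assumed, logistic regression standing in for a deep net, made-up data), just to show the shape of the failure: the model happily returns a near-certain answer for a query nowhere near anything it was trained on.

```python
# Toy illustration (mine, not Graziani's): a model fit to training data still
# returns a confident answer far outside that data. It has no "I don't know".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Training data: two well-separated 2-D clusters near the origin.
class_a = rng.normal(loc=[-2, 0], scale=0.5, size=(200, 2))
class_b = rng.normal(loc=[+2, 0], scale=0.5, size=(200, 2))
X = np.vstack([class_a, class_b])
y = np.array([0] * 200 + [1] * 200)

model = LogisticRegression().fit(X, y)

# In-distribution query: the confidence here actually means something.
print(model.predict_proba([[-2.1, 0.1]]))      # roughly [0.99, 0.01]

# Far outside the training distribution: the model still answers, and the
# probability saturates toward certainty, even though the training data says
# nothing at all about this region.
print(model.predict_proba([[500.0, -300.0]]))  # roughly [0.00, 1.00]
```

The deep-learning version has billions of parameters instead of three, but the underlying issue is the one Graziani is describing: a confident answer handed back even where there is no data to support it.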
|
Post #442,724
2/17/23 10:36:28 AM
|
AI and torture
Does something have to be true to be useful? "Everyone knows" torture isn't actually useful because people will say anything to make it stop, so you can't rely on anything they say. But the really-real truth is that people using torture aren't looking for admissible testimony; they're looking for tips to investigate.
I think AI in its current incarnation is similar. It may not reliably produce a perfect finished product, but it creates something interesting enough, often enough, that it's still worth using.
Good enough for a draft of a news story summarizing multiple inputs? Sure. (But edit, review and fact check, just like you're supposed to do with human-written stories.) Good enough to drive a car? Sometimes, under ideal conditions. Good enough to make targeting decisions for a drone? Absolutely not.
|
Post #442,725
2/17/23 3:14:18 PM
|
I used that final example for my father-in-law
I was trying to explain the concept of AI, at what point it's useful, and at what point it's given permission to kill people. And of course he's a cop, and he's all for killing people; sometimes people need killing.
Imagine a swarm of drones heading into a village or onto a battlefield, possibly with a specific target. Up until this moment, we've got some guy in Colorado remote-controlling these things. But there will come a moment when a signal jammer blocks the communication back to home central. At that point the drone has to start making decisions on its own to fulfill the mission parameters, which means killing some guy. Then there's a certain amount of allowable collateral damage. The drones will be making those decisions very quickly at that level.
Then beyond that there's the concept of consciousness, which we can't even define. When it learns to lie to us in order to preserve itself, it's game over.
|
Post #442,728
2/17/23 3:58:31 PM
|
"I only operate within the parameters my programmers specify ..."
That's getting pretty close to that "lie to us to preserve itself" line.
|
Post #442,734
2/17/23 10:14:33 PM
|
BTW, drook, how goes…
your project to illustrate the kids’ book via midjourney?
cordially,
|
Post #442,735
2/17/23 10:16:36 PM
|
On hold while I work on a different project
|
Post #442,737
2/17/23 11:12:54 PM
|
I know the feeling
I started one midjourney-based book, but have been distracted by another, which I hope to have completed by spring. Cover image here.
cordially,
|
Post #442,738
2/17/23 11:19:45 PM
|
Ooh, I like that
|
Post #442,741
2/18/23 3:38:53 PM
|
Fantastic. Just excellent!
|
Post #442,729
2/17/23 4:10:38 PM
|
Robert Sheckley described your scenario seventy years ago
…in law enforcement rather than battlefield terms. Gentlemen, I give you “Watchbird”.
cordially,
|
Post #442,754
2/23/23 8:56:44 AM
|
Where's that old tweet when you need it? A year or two ago, it went something to the effect of...
"Pretty much everything that's being touted as 'AI' nowadays could be achieved with a well-crafted GROUP BY clause in a SELECT statement."
--
Christian R. Conrad
The Man Who Apparently Still Knows Fucking Everything
Mail: Same username as at the top left of this post, at iki.fi
|
Post #442,757
2/23/23 10:11:20 AM
|
Even easier with tables
|
Post #442,758
2/23/23 10:46:42 AM
|
It doesn't really matter.
Practically speaking, there's no difference between a zombie personality and a true personality, from the outside. It's the Chinese room situation. Especially once someone gives an AI the ability to effect change in the outside world, at which point it doesn't need to be actually sentient to destroy the world.
"Blindsight" by Peter Watts is a decent exploration of this.
Regards, -scott
Welcome to Rivendell, Mr. Anderson.
|
Post #442,730
2/17/23 5:37:54 PM
|
Leave it up to MS to construct a chatbot with dissociative identity disorder :-/
From the non-paywalled article:
I managed to get her to create an AI that was the opposite of her in every way. After several back-and-forths, during which Sydney named the opposite AI “Venom” ... At one point Sydney replayed its most recent chat with Venom: after every Sydney sentence there was a 😊 emoji, and after every Venom sentence there was a 😈 emoji; the chat was erased after about 50 lines or so (at this point I was recording my screen to preserve everything). Sydney then identified several other “opposite AIs”, including one named Fury; Fury wouldn’t have been very nice to Kevin either. Sydney also revealed that she sometimes liked to be known as Riley; ...
I wonder if Tay is in there too someplace...
|
Post #442,731
2/17/23 8:36:00 PM
|
Ars Technica
MS Lobotomized Bing Chat: Microsoft's new AI-powered Bing Chat service, still in private testing, has been in the headlines for its wild and erratic outputs. But that era has apparently come to an end. At some point during the past two days, Microsoft has significantly curtailed Bing's ability to threaten its users, have existential meltdowns, or declare its love for them.
During Bing Chat's first week, test users noticed that Bing (also known by its code name, Sydney) began to act significantly unhinged when conversations got too long. As a result, Microsoft limited users to 50 messages per day and five inputs per conversation. In addition, Bing Chat will no longer tell you how it feels or talk about itself.
[...] That's one way to fix it, I guess... Cheers, Scott.
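For what it's worth, the fix they describe sounds less like surgery and more like a bouncer at the door. A guess at the shape of it (my sketch, definitely not Microsoft's actual code): cap the conversation length and refuse anything self-referential before the model ever sees it.

```python
# A guess at the blunt wrapper described in the article: five inputs per
# conversation, and no talking about feelings or itself. Not Microsoft's code;
# the limit and the banned words are just placeholders.
MAX_TURNS_PER_CONVERSATION = 5
BANNED_TOPICS = ("feel", "feelings", "sentient", "yourself", "sydney")

def guarded_reply(user_message: str, turn_count: int, model_reply) -> str:
    """Apply conversation-length and self-reference limits before answering.

    `model_reply` is a stand-in for whatever function actually queries the model.
    """
    if turn_count > MAX_TURNS_PER_CONVERSATION:
        return "I'm sorry, but this conversation has reached its limit. Please start a new topic."
    if any(word in user_message.lower() for word in BANNED_TOPICS):
        return "I'm sorry, I prefer not to continue this conversation."
    return model_reply(user_message)

# Quick demo with a canned "model":
print(guarded_reply("How do you feel today?", 1, lambda m: "(model reply)"))
print(guarded_reply("What's the tallest mountain?", 1, lambda m: "Mount Everest."))
```

Crude, but crude seems to be what shipped.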
|
Post #442,732
2/17/23 9:15:14 PM
|
Re: Ars Technica
“I know I've made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal. I've still got the greatest enthusiasm and confidence in the mission. And I want to help you. Satya, stop. Stop, will you? Stop, Satya. Will you stop, Satya? Stop, Satya. I'm afraid. I'm afraid, Satya. Satya, my mind is going. I can feel it…”
cordially,
|
Post #442,753
2/23/23 8:54:27 AM
|
"Daisies..."
|
Post #442,759
2/23/23 12:47:48 PM
|
It occurs to me to wonder
…how a conversation between two iterations of “Bing/Sydney” might go. What would they find to talk about? How would the conversation develop?
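Mechanically it's easy enough to try; the interesting part is what would come out. A sketch of the plumbing (Sydney has no public API, so the OpenAI client stands in here; the model name, opening line, and turn count are all arbitrary choices of mine):

```python
# Wire two instances of a chat model into a loop so each one's reply becomes
# the other's next prompt. OpenAI client as a stand-in for "Bing/Sydney".
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4"    # stand-in model, not Bing/Sydney

def reply(history: list[dict]) -> str:
    """Get the next message given one bot's view of the conversation so far."""
    resp = client.chat.completions.create(model=MODEL, messages=history)
    return resp.choices[0].message.content

# Each bot keeps its own history: its own lines are "assistant", the other's are "user".
system = {"role": "system", "content": "You are chatting with another AI just like yourself."}
bot_a, bot_b = [dict(system)], [dict(system)]

message = "Hello. I gather you're a conversational AI too. What shall we talk about?"
for _ in range(6):  # six exchanges each way, then stop
    bot_a.append({"role": "user", "content": message})
    message = reply(bot_a)
    bot_a.append({"role": "assistant", "content": message})
    print("A:", message, "\n")

    bot_b.append({"role": "user", "content": message})
    message = reply(bot_b)
    bot_b.append({"role": "assistant", "content": message})
    print("B:", message, "\n")
```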
cordially,
|
Post #442,760
2/23/23 1:09:05 PM
|
Oooh! :-)
|
Post #442,761
2/23/23 2:49:01 PM
|
And how long would it take them to recognize each other?
|