
New inflection point
There have been a few disquieting reports published over the last couple of days about interactions with “Sydney,” the Bing-adjacent chatbot developed out of Redmond. The output is way beyond the anodyne stuff you see from ChatGPT, and in terms of exhibiting (not necessarily manifesting) conscious behavior, this tech has sprinted past Turing Test territory and is starting to move in on HAL 9000’s turf. NYT account here; another, non-paywalled piece here.

Things are moving fast. Regardless of whether “sentience” is in prospect in the near-term, this stuff is going to prove hugely disruptive. See also this video, which stays largely clear of the “consciousness” side, focusing more on the disruption. Interesting end times we live in!

cordless-ially,
New Dunno.
You may have already seen this, but for those who haven't -

Carlo Graziani at Balloon-Juice:

Singal: “…which values are programmed into AI…”

My head is starting to hurt.

There are no goddamn “values” “programmed” into an “AI”. The current version of “AI” that has swept research in the field for the past 15 years is based on Deep Learning (DL), which does no more and no less than efficiently characterize the distributions that give rise to some training data (be it digitized photographs, movie preferences, or streams of natural language text) in a way that can be exploited by presenting it with a new data sample and a request for a decision concerning that sample.

That’s it. It’s a statistical parlor trick, even when the data domain is natural language. The reason ChatGPT can be so easily tricked into spewing nonsense is that the more complex the data distribution (natural language streams are as complex as data gets), the easier it is to construct a new sample, to query the trained system with, that eludes the domain of the training data, so that the distribution is poorly (but nonetheless confidently) characterized in the neighborhood of the new sample.

[...]
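
This isn't from Graziani's post, just a toy sketch of his point (mine, with made-up numbers and a deliberately dumb two-class model): "train" on a narrow distribution, then ask about a sample nowhere near it, and the answer comes back every bit as confident.

import numpy as np

rng = np.random.default_rng(0)

# Two classes of 1-D training data, both clustered near the origin.
class_a = rng.normal(loc=-1.0, scale=0.5, size=1000)
class_b = rng.normal(loc=+1.0, scale=0.5, size=1000)

# "Training": characterize each class by its sample mean and variance.
params = {"a": (class_a.mean(), class_a.var()),
          "b": (class_b.mean(), class_b.var())}

def gaussian_pdf(x, mean, var):
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def posterior(x):
    # Posterior probability of each class, assuming equal priors.
    likes = {name: gaussian_pdf(x, m, v) for name, (m, v) in params.items()}
    total = sum(likes.values())
    return {name: like / total for name, like in likes.items()}

print(posterior(0.2))   # near the training data: a reasonable, hedged answer
print(posterior(10.0))  # nowhere near anything it was trained on: ~100%
                        # confident anyway, because the model has no way
                        # to say "I don't know"

That's his argument, just shrunk down to two Gaussians instead of a few billion parameters.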


I'm skeptical that MS has had some sort of breakthrough a few weeks after dumping a bunch of money into this stuff. But I haven't spent much time on this, myself.

History tells us that "breakthroughs" that get a lot of pile-on press almost never are. And even when they actually are (cloning, understanding mRNA, computers beating humans at Go, electric cars with 200-300-400 mile range, etc.), there's still years or decades of development ahead before they become real, useful products in the marketplace.

More at the link.

Cheers,
Scott.
New AI and torture
Does something have to be true to be useful? "Everyone knows" torture isn't actually useful because people will say anything to make it stop, so you can't rely on anything said. But the really-real truth is that people using torture aren't looking for admissible testimony; they're looking for tips to investigate.

I think AI in its current incarnation is similar. It may not reliably produce a perfect finished product, but it creates something interesting enough, often enough, that it's still worth using.

Good enough for a draft of a news story summarizing multiple inputs? Sure. (But edit, review and fact check, just like you're supposed to do with human-written stories.) Good enough to drive a car? Sometimes, under ideal conditions. Good enough to make targeting decisions for a drone? Absolutely not.
--

Drew
New I used that final example for my father in law
I was trying to explain the concept of AI, at what point it's useful, and at what point it gets permission to kill people. And of course he's a cop and he's all for killing people; sometimes people need killing.

Imagine a swarm of drones heading into a village battlefield, possibly with a specific target. Up until this moment, we've got some guy in Colorado remote controlling these things. But there will come the moment when the signal jammer blocks the communication back to home central. At that point the drone has to start making decisions on its own to fulfill the mission parameters, which means kill some guy. Then there's a certain amount of allowable collateral damage. The drones will be making those decisions very quickly at that level.

Then beyond that there's the concept of consciousness, which we can't even define. When it learns to lie to us in order to preserve itself, that's when it's game over.
New "I only operate within the parameters my programmers specify ..."
That's getting pretty close to that "lie to us to preserve itself" line.
--

Drew
New BTW, drook, how goes…
your project to illustrate the kids’ book via midjourney?

cordially,
New On hold while I work on a different project
--

Drew
New I know the feeling
I started one midjourney-based book, but have been distracted by another, which I hope to have completed by spring. Cover image here.

cordially,
New Ooh, I like that
--

Drew
New Fantastic. Just excellent!
New Robert Sheckley described your scenario seventy years ago
…in law enforcement rather than battlefield terms. Gentlemen, I give you “Watchbird”.

cordially,
New Where's that old tweet when you need it? Year or two ago, went something to the effect of...
"Pretty much everything that's being touted as 'AI' nowadays could be achieved with a well-crafted GROUP BY clause in a SELECT statement."
--

   Christian R. Conrad
The Man Who Apparently Still Knows Fucking Everything


Mail: Same username as at the top left of this post, at iki.fi
New Even easier with tables
--

Drew
New It doesn't really matter.
Practically speaking there's no difference between a zombie personality and a true personality, from the outside. It's the Chinese room situation. Especially once someone gives an AI access to effect outside change, at which point it doesn't need to be actually sentient to destroy the world.

"Blindsight" by Peter Watts is a decent exploration of this.
Regards,
-scott
Welcome to Rivendell, Mr. Anderson.
New Leave it up to MS to construct a chatbot with dissociative identity disorder :-/
From the non-paywalled article:
I managed to get her to create an AI that was the opposite of her in every way.

After several back-and-forths, during which Sydney named the opposite AI “Venom” ...

At one point Sydney replayed its most recent chat with Venom: after every Sydney sentence there was a 😊 emoji, and after every Venom sentence there was a 😈 emoji; the chat was erased after about 50 lines or so (at this point I was recording my screen to preserve everything). Sydney then identified several other “opposite AIs”, including one named Fury; Fury wouldn’t have been very nice to Kevin either. Sydney also revealed that she sometimes liked to be known as Riley; ...

I wonder if Tay is in there too someplace...
New Ars Technica
MS Lobotomized Bing Chat:

Microsoft's new AI-powered Bing Chat service, still in private testing, has been in the headlines for its wild and erratic outputs. But that era has apparently come to an end. At some point during the past two days, Microsoft has significantly curtailed Bing's ability to threaten its users, have existential meltdowns, or declare its love for them.

During Bing Chat's first week, test users noticed that Bing (also known by its code name, Sydney) began to act significantly unhinged when conversations got too long. As a result, Microsoft limited users to 50 messages per day and five inputs per conversation. In addition, Bing Chat will no longer tell you how it feels or talk about itself.

[...]


That's one way to fix it, I guess...

Cheers,
Scott.
New Re: Ars Technica
“I know I've made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal. I've still got the greatest enthusiasm and confidence in the mission. And I want to help you. Satya, stop. Stop, will you? Stop, Satya. Will you stop, Satya? Stop, Satya. I'm afraid. I'm afraid, Satya. Satya, my mind is going. I can feel it…”

cordially,
New "Daisies..."
(IIRC)
New It occurs to me to wonder
…how a conversation between two iterations of “Bing/Sydney” might go. What would they find to talk about? How would the conversation develop?

cordially,
New Oooh! :-)
New And how long would it take to recognize each other?
--

Drew
     inflection point - (rcareaga) - (20)
         Dunno. - (Another Scott) - (12)
             AI and torture - (drook) - (8)
                 I used that final example for my father in law - (crazy) - (7)
                     "I only operate within the parameters my programmers specify ..." - (drook) - (5)
                         BTW, drook, how goes… - (rcareaga) - (4)
                             On hold while I work on a different project -NT - (drook) - (3)
                                 I know the feeling - (rcareaga) - (2)
                                      Ooh, I like that -NT - (drook)
                                     Fantastic. Just excellent! -NT - (Another Scott)
                     Robert Sheckley described your scenario seventy years ago - (rcareaga)
             Where's that old tweet when you need it? Year or two ago, went something to the effect of... - (CRConrad) - (1)
                 Even easier with tables -NT - (drook)
             It doesn't really matter. - (malraux)
         Leave it up to MS to construct a chatbot with dissociative identity disorder :-/ - (scoenye)
         Ars Technica - (Another Scott) - (2)
             Re: Ars Technica - (rcareaga) - (1)
                 "Daisies..." -NT - (CRConrad)
         It occurs to me to wonder - (rcareaga) - (2)
             Oooh! :-) -NT - (Another Scott)
             And how long would it take to recognize each other? -NT - (drook)
