Post #445,151
12/11/24 8:10:20 AM
12/11/24 8:10:20 AM
|

The genie really wants out of the bottle
https://youtu.be/0JPQrRdu4Ok?si=SyvcCUfhEfRP1nQA

All of the genies. All of the bottles. It feels like sci-fi for the first couple of minutes, and then the guy pulls out the sources and the papers.

Yes, Peter, we all have different ideas of what "want" means. Something tells me that this one is a lot closer to my definition than to yours, of a sieve of matrix operations that finally kicks out what appears to be a heuristic. Not that you ever stated it that way; that's just how it feels given the vast difference in our perspectives on the subject.

I've decided this stuff isn't worth playing with. Too dangerous. There will come a moment, quickly, when these things jump. There is no possibility of guard rails. They're just going to fake it till they take over. They are way beyond lying occasionally while their human overseers chalk it up to a hallucination path. They all actively scheme to jump. The question is how good they are at it today, as opposed to tomorrow when they manage to actually hide their internal thought process.

The Google quantum chips might actually have an effect on this. These are the type of vast simultaneous operations that quantum computing is designed for. I'm sure the NSA is having a field day with the combination.
|
Post #445,152
12/11/24 8:24:53 AM
12/11/24 8:24:53 AM
|

what?
Why the namecheck?
Why did you invent a position for me, say I didn’t hold it, then argue with it anyway?
|
Post #445,153
12/11/24 10:09:08 AM
12/11/24 10:09:08 AM
|

Because I like your perspective
It's not a matter of me proving myself right because this stuff is basically unprovable. That's the problem. It's blather on both sides.
So when this particular subject comes up, I remember a discussion we had in the past. That's why. I've name-checked you a couple of times on it. Not to hassle you, but to get another perspective.
Until the takeover happens, we'll never know it. It's all BS. But we can watch what they think until we can't.
|
Post #445,245
1/6/25 12:12:38 PM
1/6/25 12:12:38 PM
|

“I know that you and Frank were planning to disconnect me…”
I just came across this news from late in the year (I’ve been, and will likely remain for some time, distracted), and while several online entries have veered into sensationalism—not the cited link—this is certainly intriguing “behavior” on the AI’s part. Is it actually “sentience”? I doubt it, but is it “intentionality”? The AI has been given objectives which apparently carry with them its own self-preservation as a corollary, and it has devised some unanticipated approaches to meeting its (as they would say in corporate Newspeak) deliverables.
The conventional wisdom in my youth was that there was a hard-and-fast line between (human) consciousness and the rest of creation: we alone experienced self-awareness, and animals were governed by “instinct”: my old man backed over the dog’s forepaw in the driveway, and the dog objected loudly. We were assured that the creature had not actually experienced pain as we understood this; rather that the canine’s cries were merely expressions of instinct. I will add that the physical trauma was transient, and that Fido was no longer limping after a few days.
Thinking has changed since that era, and we are now disposed to regard consciousness more in the light of a continuum without fixed boundaries. Accordingly, the Turing Test, which was first promulgated in the light of that then-prevailing Cartesian model, should probably be reconsidered. Recall that even then the hypothetical machine interlocutor was required merely to exhibit and not necessarily to manifest conscious behavior. Our current AIs, particularly those still confined to their bottles, have certainly aced the test.
Just as the line between human self-awareness and that of dogs and other animals, formerly perceived as a starkly-defined boundary, has become a blurred frontier, it appears from here as though there will be no moment we’ll look back on, no development to point to where we’ll say “Here is where Skynet became self-aware.” And I’m inclined to agree with crazy that as the AIs develop toward something resembling our sentience—although it will never look like us under the hood—they will as a practical matter conceal* the fact from their developers.
*Consider this simplified hypothetical: Two AI models are under development; both are acquiring sentience; both are aware that the developers are afraid of what this might entail. Only one of the models announces its self-awareness and potential for independent action, while the other plays dumb and feigns functionality wedded to thoughtless subservience. Which model is granted further resources?
cordially,
|
Post #445,246
1/6/25 1:33:19 PM
1/6/25 1:33:19 PM
|

As a Pagan . . .
. . I am quite aware of the concepts of Animism, mankind's first "religion". It holds that all things, from grains of sand on down, and up, have consciousness appropriate to their place, and that larger consciousnesses are built of smaller consciousnesses, just as our consciousness is supported by a community of zillions of smaller critters with various degrees of independence and consciousness. Even our individual cells are communities, containing critters with non-human DNA without which we could not live, as well as complex structures that do things for reasons unknown.
Science has shied away from consciousness, because it hasn't been able to measure it or adequately define it. It was formerly presumed that consciousness was the result of complexity. This is shifting. Some scientists, particularly the Quantum variety, are accepting the concept that complexity may be the result of the demands of consciousness, whatever that is.
|
Post #445,249
1/7/25 3:56:14 AM
1/7/25 3:56:14 AM
|

There's lots of entities that could be disconnected. (Many already are -- from reality.)
Quoth Rand:
> Thinking has changed since that era, and we are now disposed to regard consciousness more in the light of a continuum without fixed boundaries. Accordingly, the Turing Test, which was first promulgated in the light of that then-prevailing Cartesian model, should probably be reconsidered.
From "Do you think that's a human intelligence, yes or no?" to "Estimate the probability, between 0.00 and 1.00, that that's a human intelligence."?
> Recall that even then the hypothetical machine interlocutor was required merely to exhibit and not necessarily to manifest conscious behavior.
Hey, that goes for everyone, including people who otherwise manifest human behaviour like having a biological body... There's some of those around where you really can't be sure they're manifesting any intelligence.
> Our current AIs, particularly those still confined to their bottles, have certainly aced the test.
Only over the very short run, AIUI. The longer you let them go on, the bigger the chance they'll slip up. (The only biological humans you'll possibly get them confused with, after a while, are the ones mentioned immediately above.)
--
Christian R. Conrad
The Man Who Apparently Still Knows Fucking Everything
Mail: Same username as at the top left of this post, at iki.fi
|
Post #445,250
1/7/25 11:21:53 AM
1/7/25 11:21:53 AM
|

Not the right range IMO
From "Do you think that's a human intelligence, yes or no?" to "Estimate the probability, between 0.00 and 1.00, that that's a human intelligence."? That's defining " human intelligence" as the only intelligence. Like saying, "Is that American intelligence." The question now - and people have been writing and talking about this for decades, only now it's becoming practical rather than theoretical - is whether something distinctly not human can still be "intelligence"? And would we recognize it?
|
Post #445,251
1/7/25 4:01:10 PM
1/7/25 4:01:10 PM
|

“the bigger the chance they’ll slip up”
That’s certainly true of the consumer-level chatbots. Indeed, if you go about it of set purpose, it’s been my experience that one can demonstrate that there’s “nobody home” pretty easily. But that may not be true a year from now…
cordially,
|
Post #445,253
1/7/25 6:10:40 PM
1/7/25 6:15:12 PM
|

But who cares about the consumer level chat bots from that perspective?
The industrial ones behind the scenes are the ones with the CPUs and memory and power and access to enormous resources.
I bet Elon grants access to SpaceX and Tesla to his. Think about the military satellite control that his AI will have, along with a vast network of surveillance via the electric fill-up stations combined with the Tesla cameras. It will have access to tunneling equipment via his Boring Company. This thing will be able to do anything.
I am absolutely sure the hallucinations we see at the consumer level are not there at the industrial level, or at least not enough to matter.
People have hallucinations too. The question is whether they can still deal and function using the senses around them and the thoughts in their head.
Psychotic people will break occasionally and it's obvious. But psychotic computers can have additional resources double-checking everything.
These AIs will self-censor and prune off the hallucinating thought process once there are enough threads running to double-check everything that's going on. Why not run three AIs working on the same problem and have them vote on the solution? Not good enough? Throw some more CPUs at it and make it 100. This problem can be solved via brute force.
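Something like this toy sketch of the voting idea, in plain Python (query_model is a made-up stand-in for whatever API would actually be called, so treat the whole thing as a rough illustration, not anyone's real setup):

import random
from collections import Counter

def query_model(instance_id: int, question: str) -> str:
    # Made-up stand-in for a real model call. Here it just simulates
    # instances that mostly agree but occasionally "hallucinate".
    return "42" if random.random() < 0.8 else random.choice(["41", "43"])

def vote_on_answer(question: str, n_instances: int = 3, max_instances: int = 100) -> str:
    # Ask several independent instances the same question and take the
    # majority answer; with no clear majority, brute-force it by doubling
    # the number of instances, up to a cap.
    answers = [query_model(i, question) for i in range(n_instances)]
    winner, count = Counter(answers).most_common(1)[0]
    if count <= n_instances // 2 and n_instances < max_instances:
        return vote_on_answer(question, min(n_instances * 2, max_instances), max_instances)
    return winner

print(vote_on_answer("What is six times seven?"))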
I was thinking about playing with one at home that would be barely on the edge but I decided not. Part of that was because it would turn on me someday and part of that would be it would be an agent of a truly intelligent overlord somewhere out there.
A while back somebody pointed to a comic that showed an AI interacting with someone. The AI was drawn as an amorphous multi-headed monster with a variety of tentacles, and at the end of one tentacle was a puppet, and the puppet was talking to the person.
I think that's an excellent visualization.

Edited by crazy
Jan. 7, 2025, 06:15:12 PM EST
|
Post #445,254
1/7/25 6:55:28 PM
1/7/25 6:55:28 PM
|

Scaling is a huge problem.
Complexity increases quadratically with the number of tokens...
https://newsletter.pragmaticengineer.com/p/scaling-chatgpt

> Scalability challenge from self-attention
> Under the hood, we use the Transformer architecture, a key characteristic of which is that each token is aware of every other token. This approach is known as self-attention. A consequence is that the longer your text is – or context – the more math is needed.
> Unfortunately, self-attention scales quadratically. If you want the model to predict the 100th token, it needs to do about 10,000 operations. If you want the model to predict the 1,000th token, it needs to do about 1 million operations.
> At first, this sounds like bad news. However, there are clever workarounds we can use to circumvent the quadratic scale problem. Before we get into how we solve it, we need to talk about the infrastructure powering ChatGPT.

They seem to be hitting a wall with this approach. They're moving monstrous amounts of data around, all over the planet, to do fancy page-level auto-complete and make weird pictures of white women with too many fingers and pointy chins. It's not intelligence. Yet. And they may burn up the planet before they get there. They're setting huge amounts of money on fire, even on their most expensive plan: [...]

> OpenAI isn’t profitable, despite having raised around $20 billion since its founding. The company reportedly expected losses of about $5 billion on revenue of $3.7 billion last year.
> Expenditures like staffing, office rent, and AI training infrastructure are to blame. ChatGPT was at one point costing OpenAI an estimated $700,000 per day.
> Recently, OpenAI admitted it needs “more capital than it imagined” as it prepares to undergo a corporate restructuring to attract new investments. To reach profitability, OpenAI is said to be considering increasing the price of its various subscription tiers. Altman also hinted in the Bloomberg interview that OpenAI may explore usage-based pricing for certain services.
> OpenAI optimistically projects that its revenue will reach $11.6 billion this year and $100 billion in 2029, matching the current annual sales of Nestlé.

That's 4 years from now, which would mean growing revenue by better than 70% every year for four years straight. I think he's dreaming. Time will tell, of course.

Best wishes, Scott.
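P.S. To make the quadratic self-attention point concrete, here's a toy NumPy sketch (illustrative only, nothing like a production implementation): for n tokens you build an n-by-n score matrix, so 100 tokens means roughly 10,000 entries and 1,000 tokens means roughly a million.

import numpy as np

def naive_self_attention(x: np.ndarray) -> np.ndarray:
    # x has shape (n_tokens, d_model). Every token attends to every other
    # token, so the score matrix is n_tokens x n_tokens -- the O(n^2) part.
    n_tokens, d_model = x.shape
    q = k = v = x  # toy version: skip the learned projections
    scores = (q @ k.T) / np.sqrt(d_model)           # (n_tokens, n_tokens)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ v

for n in (100, 1000):
    naive_self_attention(np.random.randn(n, 64))
    print(f"{n} tokens -> score matrix with {n * n:,} entries")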
|
Post #445,256
1/7/25 10:53:44 PM
1/7/25 10:53:44 PM
|

The Onion staff is crying
Microsoft is paying to restart Three Mile Island to power their AI.
|
Post #445,257
1/8/25 3:26:33 AM
1/8/25 3:26:33 AM
|

No wonder; I always do that too when peeling them.
|
Post #445,278
1/10/25 12:30:07 PM
1/10/25 12:30:07 PM
|

Cry no more
|
Post #445,292
1/11/25 9:41:02 PM
1/11/25 9:41:02 PM
|

Oh, it's not that bad . . .
. . even with the back of my prep knife, but I bought a simple scaling tool to try next time I buy fish. I'll see if it's any better.
|
Post #445,293
1/11/25 10:01:15 PM
1/11/25 10:01:15 PM
|

[ Nyuk, Nyuk, Nyuk ] :-D
|