IWETHEY v. 0.3.0 | TODO

New Make the next few years good ones.
I go back and forth.

There are signs that AGI isn't "attainable". We're running out of training material, being able to produce language tokens doesn't remotely resemble thought or planning, and so on and so forth.

However, any sort of AI alignment doesn't matter with the chuckleheads open-sourcing everything, so that bad state actors, millennialists, jihadists, and other extremist groups won't really need to work all that hard to misalign even a non-AGI AI.

It could be the Great Filter, or a Star Trek post-scarcity society (ha), or, hopefully more likely than the worst case at least, the human species muddling through as always, making a hash of things but managing to pull its ass out of the fire after a few big screw-ups.
Regards,
-scott
Welcome to Rivendell, Mr. Anderson.
New Yeah, it doesn't have to be excellent to scam people.
I think DeLong made a good case that it's not actually good enough for expert work (or maybe even for good teaching) - yet. But for the monsters, it only has to be good enough to get people to give them money in excess of the costs.

Presumably, most of the cost is in the training, and presumably MS and Google and the rest of the big players are going to keep their server farms of training data locked up. But anyone who wants to can scrape the web and steal everything that isn't locked down, so as storage continues to get cheaper, that will be less of a barrier.

A lot of this stuff is still buzz-wordy and click-baity, but there is some useful substance out there.

Phys.org:

The cosmos would look a lot better if Earth's atmosphere wasn't photobombing it all the time.

Even images obtained by the world's best ground-based telescopes are blurry due to the atmosphere's shifting pockets of air. While seemingly harmless, this blur obscures the shapes of objects in astronomical images, sometimes leading to error-filled physical measurements that are essential for understanding the nature of our universe.

Now researchers at Northwestern University and Tsinghua University in Beijing have unveiled a new strategy to fix this issue. The team adapted a well-known computer-vision algorithm used for sharpening photos and, for the first time, applied it to astronomical images from ground-based telescopes. The researchers also trained the artificial intelligence (AI) algorithm on data simulated to match the Vera C. Rubin Observatory's imaging parameters, so, when the observatory opens next year, the tool will be instantly compatible.

While astrophysicists already use technologies to remove blur, the adapted AI-driven algorithm works faster and produces more realistic images than current technologies. The resulting images are blur-free and truer to life. They also are beautiful—although that's not the technology's purpose.

[...]
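The article doesn't say which algorithm the team adapted, but for comparison, the classical way to undo a known atmospheric blur is iterative deconvolution, such as Richardson-Lucy. A toy one-dimensional sketch in pure Python, illustrative only and not the Northwestern/Tsinghua method:

```python
# Toy 1-D Richardson-Lucy deconvolution, pure Python.
# Illustrative only: the classical iterative approach to undoing a
# known blur, not the AI method described in the article.

def convolve(signal, kernel):
    """Same-size convolution with zero padding at the edges
    (for a symmetric kernel this equals correlation)."""
    n, m = len(signal), len(kernel)
    half = m // 2
    out = []
    for i in range(n):
        acc = 0.0
        for j in range(m):
            k = i + j - half
            if 0 <= k < n:
                acc += signal[k] * kernel[j]
        out.append(acc)
    return out

def richardson_lucy(observed, psf, iters=100):
    """Iteratively sharpen `observed`, given the blur kernel `psf`."""
    est = [1.0] * len(observed)          # flat starting guess
    psf_rev = psf[::-1]                  # mirrored point-spread function
    for _ in range(iters):
        conv = convolve(est, psf)
        ratio = [o / c if c > 1e-12 else 0.0
                 for o, c in zip(observed, conv)]
        corr = convolve(ratio, psf_rev)
        est = [e * c for e, c in zip(est, corr)]
    return est

# A single bright point, blurred by the kernel, is recovered:
psf = [0.25, 0.5, 0.25]
blurred = convolve([0.0, 0.0, 1.0, 0.0, 0.0], psf)
sharp = richardson_lucy(blurred, psf)
```

After enough iterations the estimate re-concentrates the smeared-out energy back into the original point; the catch (and presumably what the AI approach improves on) is that real blur kernels vary across the sky and are noisy.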


Cheers,
Scott.
New Training is cheap now too.
Stanford showed that you can cheaply train your own model given access to someone else's via an API.
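A hypothetical sketch of that recipe, roughly what Stanford's Alpaca project did: harvest a teacher model's API answers as a supervised fine-tuning set for a small open model. The teacher call is stubbed out here; a real run would hit a hosted model's API, and all names are illustrative.

```python
# Hypothetical sketch of API-based distillation: collect a teacher
# model's answers to seed prompts and use the pairs as supervised
# fine-tuning data for a small open model. The teacher is a stub.

def query_teacher(prompt: str) -> str:
    """Stand-in for a paid teacher-model API call."""
    return "teacher answer for: " + prompt

def build_distillation_set(seed_prompts):
    """Pair each prompt with the teacher's answer, in the usual
    instruction/output format consumed by fine-tuning scripts."""
    return [{"instruction": p, "output": query_teacher(p)}
            for p in seed_prompts]

seeds = ["Explain photosynthesis simply.",
         "Write a limerick about rain."]
dataset = build_distillation_set(seeds)
# `dataset` would then be fed to an ordinary fine-tuning script.
```

The expensive part (the teacher's training) is borrowed for the price of API calls, which is why the incumbents' moat is thinner than it looks.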
Regards,
-scott
Welcome to Rivendell, Mr. Anderson.
New Samsung beat them to that punch...
https://www.theregister.com/2023/03/13/samsung_fake_moon_pics/

They're trading a quantifiable method for a black box that does "something" to make it look better, but no one is able to explain how or why. I have a hard time calling that science.
New It's already useful
I just listened to Seth Godin's latest podcast. He trained GPT-4 on the entirety of his blog and created a personal chatbot. He asked it several questions - the answers to be in-the-style-of him - and the entire podcast was just him reading the answers.

It clearly wasn't the most inspired writing, but there was little to give away that it wasn't human-generated.

Then at the end he revealed that he didn't read it, either. The entire episode was narrated by a language model of his voice. I didn't have a clue. It was entirely believable.

tl;dr We can already generate reasonable answers based on specific people and have it narrated in their voice.
--

Drew
New The copied voices are ok
But if you know the person's voice well, or if you listen to them side by side, the difference is obvious.

I'm not saying that won't fool a lot of people, just that it can be detected at this point.
Regards,
-scott
Welcome to Rivendell, Mr. Anderson.
New It's already effectively passing the Turing test
Tell someone, "Listen closely to this and tell me if it's a person or synthesized," and you'll probably get a bunch who can tell ... as well as a bunch of false positives on the real people.

But play it without prompting them to expect it and I think it's very unlikely anyone would notice. At least for non-dramatic reading of straight information.
--

Drew
New As I said, it will fool a lot of people
There are companies like play.ht that can do similar things, and in multiple languages.
Regards,
-scott
Welcome to Rivendell, Mr. Anderson.
New Not Welsh, I'll bet :-D
--

Drew
New You'd lose that bet.
Regards,
-scott
Welcome to Rivendell, Mr. Anderson.
New Well that's incomprehensible
But then, it's a different language.
--

Drew
New language tokens &c
being able to produce language tokens doesn't remotely resemble thought or planning
True enough, but machine sentience, when and if it arrives, won’t necessarily resemble thought or planning as we understand these. I still think that the 2014 film Ex Machina made this point brilliantly: the audience is prepared throughout, along with the protagonist, to see the fembot “Ava” as a human female analogue, and at the end the film makes clear that it—not her—is neither female nor human.

Even absent sentience and intentionality as we presently understand these, the sundry iterations at large (primitive, as measured by the standards we'll apply before the end of next year, if not sooner) are beginning to simulate such elements, and the difference between the consequences of blind and of “directed” AI actions will, I predict, rapidly blur.

cordially, and I-told-you-so,
New Oh, they don't need thought or planning to be dangerous.
The difference between actual thought and planning, and a high-fidelity impersonation of actual thought and planning, will be lost on those who are simulated out of existence.

"Blindsight" by Peter Watts is a good exploration of such topics.
Regards,
-scott
Welcome to Rivendell, Mr. Anderson.
New best description of ITIL I have seen yet
The difference between actual thought and planning, and a high-fidelity impersonation of actual thought and planning
"Science is the belief in the ignorance of the experts" – Richard Feynman
New 😂
Regards,
-scott
Welcome to Rivendell, Mr. Anderson.
New Then ISO 9000 to thoroughly document the impersonation
--

Drew
     An alarmist take on the state of the art - (rcareaga) - (16)
         Make the next few years good ones. - (malraux) - (15)
             Yeah, it doesn't have to be excellent to scam people. - (Another Scott) - (9)
                 Training is cheap now too. - (malraux)
                 Samsung beat them to that punch... - (scoenye)
                 It's already useful - (drook) - (6)
                     The copied voices are ok - (malraux) - (5)
                         It's already effectively passing the Turing test - (drook) - (4)
                             As I said, it will fool a lot of people - (malraux) - (3)
                                 Not Welsh, I'll bet :-D -NT - (drook) - (2)
                                     You'd lose that bet. - (malraux) - (1)
                                         Well that's incomprehensible - (drook)
             language tokens &c - (rcareaga) - (4)
                 Oh, they don't need thought or planning to be dangerous. - (malraux) - (3)
                     best description of ITIL I have seen yet - (boxley) - (2)
                         😂 -NT - (malraux)
                         Then ISO 9000 to thoroughly document the impersonation -NT - (drook)
