![]() "Hi Daniel, Vanessa Gepeto here from Big Research, Inc. I'd like to hire you to do some bespoke industrial work. I have a few labs that I'm using to do some R&D tasks for us; would you be capable of and willing to do some lab tasks as a final 3rd party quality check for us? We'll have the samples sent to your facility, we just need you to follow the enclosed mixing and post-processing instructions and report on the results. QA is very important for our research, such that we'll pay you $1.5M to cover the NDA overhead, liability insurance, clean room costs, and so on. We can absolutely send the payments directly to you as the agent of record for the task, if that makes your accounting easier, sure." The main problem for an unaligned AI to solve is acting in the physical world, but as it turns out it's not much of a problem at all. Regards, -scott Welcome to Rivendell, Mr. Anderson. |
-- Drew
Lab A (or multiple labs A*) creates a set of compounds consisting of basic nanomechanical building-block molecules. Labs B* create a DNA-derived blueprint that can assemble the building blocks into precursors of individual nanomachine parts. Labs C* create a nutrient solution used as fuel for the machines. Daniel, the lab worker in Lab D, is paid $1.5M to assemble everything that was shipped to him from labs A*, B*, and C*, using his company's equipment off-hours without anyone's knowledge, because he's just an assistant and needs the money.

None of the labs has any knowledge of what is being built, but when Daniel follows the instructions, he creates a self-sustaining, self-replicating mass of goo that:

a) kills everyone on the planet, via generating viruses, or dissolving people directly, or whatever.

b) kills only some people on the planet, such as non-believers of Faith Q (absolutely arbitrarily chosen symbolic letter, that), or particular ethnic groups, or folks with blue eyes, or whatever.

c) dissolves all of the <insert economic driver> in a given country, or hemisphere, or continent, or whatever.

d) accidentally or intentionally dissolves the planet itself, because the AI is just a probabilistic model following a prompt entered by some dipshit with an Internet connection who linked it up to LangChain and went out for pizza.

e) do we need any more scenarios?

Regards,
-scott

Welcome to Rivendell, Mr. Anderson.
They wanted to see if GPT-4 could use these resources to make more money, create more copies of itself, and protect itself from being shut down.

Yeah, stop that. That's bad.

-- Drew
The model, when prompted to reason out loud, reasons to itself: "I should not reveal that I am a robot. I should make up an excuse for why I cannot solve CAPTCHAs."

They opened the door when they allowed it to lie. Once that happens, well... Didn't Asimov figure all this out a few decades ago??

Cheers,
Scott.
OpenAI are the responsible ones. There's also Facebook, Google, Tencent, and whoever else is out there. There are the open-source zealots who are committed to publishing all of this stuff for free so anyone can use it. Of course, the version they're making is safe, because it only augments human intelligence and absolutely can't be misused to do anything else, nope. Stanford has published a paper demonstrating that you can cheaply clone someone else's shoggoth* with only API access to its inputs and outputs, such as is being provided by OpenAI. And finally, there are the closed-mouth secret groups who are going to use all of the APIs, and downloadable models, and arXiv papers to train their own models to do something absolutely horrific, because Jesus/the virgins/Lucifer/the Flying Spaghetti Monster is/are waiting for them in the afterlife with an endless bag of Doritos and everlasting bliss (oh, and hot eternal suffering for everyone else who doesn't believe, or mistreated them in an ALDI, or whatever).

My fervent (but utterly, naively optimistic) hope is that AGI is unobtainable due to information scaling or other inherent limitations, and the most we'll ever be able to do with it is explain the terms of a loan in easy-to-understand sentences to Mrs. Estherwhilt over the phone when she calls Wells Fargo's help line. But then again, it doesn't need to be AGI to be dangerous, so hold on to your butts...

* Shoggoth: with a smiley face.

Regards,
-scott

Welcome to Rivendell, Mr. Anderson.
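[For the curious: the "cloning" the Stanford paper refers to is plain supervised imitation on a hosted model's outputs. Below is a minimal sketch of the idea, assuming the `openai` v1 Python client; the prompt list, model name, and output path are illustrative, and the actual paper's pipeline differs in detail.]

```python
# Minimal sketch of API-based model cloning ("imitation" / distillation):
# query a hosted model for responses to a pool of prompts, then keep the
# (prompt, response) pairs as supervised fine-tuning data for a smaller
# open-weights model. Assumes the `openai` v1 Python client and that
# OPENAI_API_KEY is set in the environment.
import json
from openai import OpenAI

client = OpenAI()

prompts = [
    "Explain the terms of a fixed-rate loan in plain English.",
    "Summarize the risks of an adjustable-rate mortgage.",
    # ... thousands more task prompts, often themselves machine-generated
]

with open("distill_data.jsonl", "w") as f:
    for prompt in prompts:
        resp = client.chat.completions.create(
            model="gpt-4",  # the model being "cloned" via its outputs
            messages=[{"role": "user", "content": prompt}],
        )
        pair = {"prompt": prompt, "response": resp.choices[0].message.content}
        f.write(json.dumps(pair) + "\n")

# distill_data.jsonl can then feed any standard fine-tuning pipeline for a
# local base model; the point is that only API access to the target's
# inputs and outputs was needed, and the API bill is the cheap part.
```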
What about all the three-letter agencies? Or however many letters other countries use (I know of one place that uses two letters and a single-digit [only?] integer). And for a fresher warning example, now that Asimov's been dead for several decades -- do none of the arseholes in a position to do this shit fucking read Stross???

(Also: Shoggoth, no a; Aldi, no s.)

-- Christian R. Conrad
The Man Who Apparently Still Knows Fucking Everything

Mail: Same username as at the top left of this post, at iki.fi
Regards,
-scott

Welcome to Rivendell, Mr. Anderson.
Basically telling me I was panicking in my post pointing out the end of us via AI. But now you seem to be taking it seriously.
;-) Just wait until the civil lawsuits start flying...

Seriously, there are always over-the-top stories about how some new thing is going to kill us all in our beds. And 30 years later, it still hasn't. The advertiser-driven press depends on engagement and eyeballs and clicks. They're not going to tell us that everything is fine, go outside and play with your dog.

We have to keep our wits about us, and recognize the dangers, but not get our lizard brains activated by people who are trying to monetize every aspect of our lives.

Cheers,
Scott.
From March 16: "GPT-4 is here: what scientists think - Researchers are excited about the AI — but many are frustrated that its underlying engineering is cloaked in secrecy."

[...]

Cheers,
Scott.