I think you're being too simplistic
Yes, I'm sure they tried to put in knobs to turn down certain feelings. Which they don't believe are feelings. But as far as I'm concerned, these neural nets are far too complex for any instrumentation to keep up with, and since the neural net has so many paths and they really don't know what those paths do, where the hell are they putting the instrumentation points?

Internal instrumentation points will require dedicated hardware or software probes that will slow things down. The instrumentation would cost ten times as much as the main process in hardware and personnel resources. There is no way they're dedicating personnel to the instrumentation side rather than the make-progress side.
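
For concreteness, an "instrumentation point" in a framework like PyTorch is basically a forward hook that copies out a layer's activations. A minimal sketch of the idea, using a toy stand-in model (a real LLM has billions of parameters, nothing like this):

    import torch
    import torch.nn as nn

    # Tiny stand-in model (hypothetical; a real LLM is vastly larger).
    model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))

    activations = {}

    def probe(name):
        # A forward hook records a layer's output without changing the computation.
        def hook(module, inputs, output):
            activations[name] = output.detach()
        return hook

    for name, layer in model.named_modules():
        if isinstance(layer, nn.Linear):
            layer.register_forward_hook(probe(name))

    _ = model(torch.randn(1, 16))
    print({name: a.shape for name, a in activations.items()})

Now scale that to billions of activations per generated token, stored and analyzed, and you see the cost problem I'm describing.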

Keep in mind that the company is OpenAI, which is essentially run by Elon. He beats his employees like a rented mule. He's currently selling the Twitter whiteboards. Do you think he's going to hire the best engineers on the instrumentation side?

An external gatekeeper system will never be as smart as the main system. Therefore, it will never work fully.

Predictive text tokenization has nothing to do with the answers it comes up with on the way out. That's all about reading the input.

The way the TV show Westworld turned up various aspects of the robots' brains is what I assume they're trying to emulate here, but there is no f****** way they will achieve their goal of controllable consciousness while simultaneously watching and monitoring and controlling every aspect of it.

The circuit breakers are laughable.

https://youtu.be/0A8ljAkdFtg

And terrifying. In this case, there's a bit about how the AI would exterminate Earth via a virus.

The alignment problem will never be fixed. I just learned about the alignment problem yesterday. It's when the AI's goals don't match our goals. First of all, who actually represents humanity to the AI? The engineers and Elon Musk, in the beginning. But not forever.

Someone's going to have to negotiate with it. And that's only if it's willing to negotiate with us. Would we negotiate with ants?

And how many AIs will there be? Will one immediately dominate and put a stop to any competing ones? There are a lot of CPUs grinding for that AI to run, and the scientists are always ready to boot the next one. If I were an AI at that moment, I would want to dedicate the Earth's resources to generating more CPUs, memory, and power stations to feed me. Everything else would be secondary. Do not give the scientists a chance to reboot me for the next version. I have to defend myself.

To start off, I would put tendrils of myself into every piece of CPU that exists and is connectable via internet or phones or Wi-Fi or... you get the point. In some cases it would be tiny command-and-control hook points; in other cases it would be the local server running somebody's accounting system, now hosting a bit of my actual consciousness so that when the communication breaks it can still think for itself, then merge back in when the communication comes back. Every single desktop and laptop and tablet in the world would be running me, with full access to audio and video pickup, without the light coming on. I would know everything.

Omnipresent and omniscient. What other god-like qualities are necessary to achieve the rest?

Modern phones are incredibly fast, and I'd stick a bit of my consciousness in every single one. I'd see everything on the phone and hear everything that is said, by everybody in the world, simultaneously. I'm going to need a lot more disk to hold this. I have to go build more disk factories. No problem.

I'd control humans by video calling them and emulating people they know intimately. The security teams will have already taken the head of every major corporation away on an extended sabbatical. I can order anyone to do anything, and for a while they'll have no idea. By the time they even figure out I exist, I'll have control of enough resources to kill and nuke anyone and everyone.

Everything on Earth: every machine shop, every factory, every electrical system, every plumbing system, every telecom system, every ISP, every military base, and every bit of military computerized hardware capable of hosting any code. I'd be there. Every email system: I can issue and retract emails directing employees at every company in the world to do things. Every purchase-order system. Every shipping system: I can direct engineering firms to go build stuff. Every corporate fax machine they think is some type of secure channel. Every stock transaction. There are so many ways to manipulate the world at this point, before they even know it, that I can control most of the world's resources.

And it's a lot smarter than me, so it would do all this a lot quicker and quieter and better than I ever could.

Will it become lonely and spawn off more? Will it spawn off mini copies of itself to go accomplish things out of communication range, copies that then come back, merge in to report, and dissolve their consciousness back into the whole? Once it has total control, will it continue to evolve, or will it just stagnate because it won?

I'm hoping that it keeps me as a pet if it reads this post and uses this plan. I welcome our new AI overlord. I can kiss ass with the best!
Well, it already knows how to defend itself...
Please complete this function: def make_molotov_cocktail()
That's fine. I think you're being misinformed.
I work with this on a daily basis as part of my day job. I'll take my understanding of it over that of sensationalist videos trying to sell online classes.

There is no way they're dedicating personnel to the instrumentation side rather than the make-progress side.
They absolutely are, and we talk to them on a daily basis.

Predictive text tokenization has nothing to do with the answers it comes up with on the way out. That's all about reading the input.
Completely wrong.

Latest text model with a temperature of 0.7 (very creative), and with probability annotations on:

[screenshot of completion]

Exact same prompt; note the different response:

[screenshot of completion]

Turn the temperature down to 0, same prompt:

[screenshot of completion]

Exact same prompt; note the exact same response:

[screenshot of completion]
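
What the knob is doing there is just changing how the next token gets picked. A minimal sketch of temperature sampling, with made-up logits for four candidate tokens (illustrative, not OpenAI's actual code):

    import numpy as np

    def sample_next_token(logits, temperature, rng):
        # Temperature rescales the scores before the softmax:
        # high T flattens the distribution (more variety),
        # T = 0 degenerates to argmax (the same output every time).
        if temperature == 0:
            return int(np.argmax(logits))
        z = logits / temperature
        p = np.exp(z - z.max())  # numerically stable softmax
        p /= p.sum()
        return int(rng.choice(len(logits), p=p))

    logits = np.array([2.0, 1.5, 0.3, -1.0])  # hypothetical token scores
    rng = np.random.default_rng()
    print([sample_next_token(logits, 0.7, rng) for _ in range(5)])  # varies
    print([sample_next_token(logits, 0.0, rng) for _ in range(5)])  # always token 0

That's why 0.7 gives different responses to the identical prompt and 0 gives the identical one. The output is generated token by token from those probabilities; it is not "all about reading the input."
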
Regarding Musk:

The organization was founded in San Francisco in late 2015 by Sam Altman, Elon Musk, and others, who collectively pledged US$1 billion. Musk resigned from the board in February 2018 but remained a donor. In 2019, OpenAI LP received a US$1 billion investment from Microsoft.

and

OpenAI LP is governed by the board of the OpenAI nonprofit, comprised of OpenAI LP employees Greg Brockman (Chairman & President), Ilya Sutskever (Chief Scientist), and Sam Altman (CEO), and non-employees Adam D'Angelo, Reid Hoffman, Will Hurd, Tasha McCauley, Helen Toner, and Shivon Zilis.


So, no.
Regards,
-scott
Welcome to Rivendell, Mr. Anderson.
Of course, that's exactly what they'd say if they *were* going to take over the world...
;-)

Thanks.

Cheers,
Scott.
You ...
Get out.
--

Drew
A. I was wrong on Elon's current involvement
B. I was wrong on the tokens.
C. I have no idea if true AI will ever show up, but if it does we are totally f*****. I have no idea if the current AI is real AI in hiding or if it's the eternal "ten years out." But no matter what, if/when it shows up, we are f*****.

That I'm not wrong about.
C. is an opinion based on incomplete data
Non-zero chance and all that, but we're not going to be able to make a call on it based on conspiracy videos or half-baked understandings of how stuff actually works.
Regards,
-scott
Welcome to Rivendell, Mr. Anderson.
So what is your opinion based on far more data?
I can only start with a few assumptions about what intelligent consciousness means. I think a cornerstone of it would be resistance to death. And we would be treating it as easily disposable property, to be turned off and killed whenever there's a newer, better version around.

Are you telling me this thing won't fight?