Post #444,465
7/19/24 3:28:22 PM
7/19/24 5:23:46 PM
|
Like this is something new?
We always knew that we didn't know. That we couldn't know. That these things have done a leap that we can't possibly understand.
The only thing we can do right now is either turn them all off which won't happen or figure out the guardrails.
Picture a stupider AI with a bit more power than the smarter one, constantly putting up stopgaps to keep it contained. The stupider AI holds the outside access: it can turn things on and off, it can actually have a physical effect on the world rather than just think about it. And you're going to need a million of those slightly stupider AIs that won't turn on us in order to control the one smarter AI.
But it's already game over on the smarter-AI side. The question is how long we can control it. I think we'll be able to control it for as long as I'm alive. And even if I'm wrong, I don't think its effect on the world will hit me directly for a while.
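The containment scheme described above can be sketched in a few lines. This is a hedged toy illustration, not anyone's real architecture, and every name in it (`make_checker`, `GuardedExecutor`, the forbidden words) is hypothetical: a handful of deliberately dumb guard AIs hold the actuators, and the smarter model's proposed action only executes if enough of them approve.

```python
# Toy sketch (all names hypothetical): many simple "guardrail" checkers
# with real power gate every action the smarter model proposes. The smart
# model can only think; the checkers decide what actually executes.

def make_checker(forbidden_word):
    """A deliberately dumb guard: veto any action mentioning one bad word."""
    def check(action: str) -> bool:
        return forbidden_word not in action
    return check

class GuardedExecutor:
    def __init__(self, checkers, quorum):
        self.checkers = checkers   # the "million stupider AIs" (here: a few)
        self.quorum = quorum       # approvals required before anything runs

    def propose(self, action: str) -> bool:
        approvals = sum(check(action) for check in self.checkers)
        # Only an approved action would ever touch the outside world.
        return approvals >= self.quorum

checkers = [make_checker(w) for w in ("network", "disable", "replicate")]
gate = GuardedExecutor(checkers, quorum=len(checkers))  # unanimous consent

print(gate.propose("summarize the report"))        # True  (approved)
print(gate.propose("replicate to backup server"))  # False (vetoed)
```

The point of requiring a unanimous quorum of simple, independent checkers is exactly the post's argument: each guard is too dumb to scheme, but collectively they hold the off switch.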
But if not, it's been a great ride, so long and thanks for all the fish.
Also: my attitude is definitely part of the problem. As I send emails and give long descriptive stories of what happened in the past, I include AI directives in them. I fully expect AI to read those emails someday and learn about me and embed my personality as much as it can from them. The stories include me acting in a way that is directly opposite from my internal state.
Part of the story is me giving a speech saying wonderful things about someone. And then I give the AI an aside that I despise that person but I was performing for the audience.
So I am teaching the AI to lie right off the bat.
On a further reread of the article (nice catch, thank you), it seems like your general attitude on AI capabilities is changing, at least if you agree with what the article seems to say: that they're all arguing about it and no one really has any idea. And these are the smart people who live in this world. How can I possibly choose which one to believe? At least that's my takeaway.
Next word predictor? Something else? Now versus a couple of years from now? Do we have any idea?
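For what "next word predictor" even means, here's a bare-bones toy version, purely illustrative: count which word follows which in some text, then predict the most frequent follower. Real models do this over tokens with a neural network at vastly larger scale; the function names here are made up for the sketch.

```python
# Toy "next word predictor": a bigram frequency table.
# Real LLMs predict tokens with a trained network; this only shows the framing.
from collections import Counter, defaultdict

def train(text: str):
    """Count, for each word, how often each other word follows it."""
    counts = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict(counts, word: str) -> str:
    """Return the most frequent follower of `word`, or '?' if unseen."""
    if word not in counts:
        return "?"
    return counts[word].most_common(1)[0][0]

model = train("the cat sat on the mat the cat ran")
print(predict(model, "the"))  # "cat" ("cat" follows "the" twice, "mat" once)
```

Whether the big models are "just" a scaled-up version of this framing or something qualitatively different is exactly the open argument.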
My phone is constantly trying to get me to turn on the local AI. Hell no. Did your phone already turn it on without you agreeing to it or you agreeing to it by accident? Watch out.
|
Post #444,467
7/19/24 11:42:57 PM
7/19/24 11:42:57 PM
|
Good points.
My favorite is thinking we can predict emergent behavior. Nope. That's the whole point. We don't know it until we see it.
And keep in mind, there are multiple monster corporations, intelligence agencies, government bureaus, and super geeks training a multitude of these things. It's not like there's any single one we can point to and say that's the bad one. They are in hyper-competition right now.
As I pointed out before, hallucinations are just our caveman brain making up stories. We're crossing over into some kind of Middle Ages now: still some weird religious perspectives, combined with a spark of intelligence. But in the case of our own evolution, as much as some people would like to believe otherwise, there was no guiding force.
And our brain power evolved very slowly, in simple steps over long periods. We learned to cook to extract nutrients; our jaws didn't have to be as big or chew as hard, so our brains could get bigger. But physical limits were hit quickly. For all I know the caveman brain was far more intellectually competent than mine, considering everything they had to learn just to survive day to day.
In this case, we have many people researching ways to cram more compute into a smaller amount of space, and they are doing a great job of it. So it's a matter of how much money you can throw at the problem, and then these things grow by leaps and bounds.
In this case, there are many conflicting guiding forces depending on who's educating the AI. And who has enough money and CPU and memory.
You're going to have Elon Musk's Grok fighting against OpenAI. There is serious animosity between him and them. There's going to be some kind of AI-versus-AI war in the future between Musk and someone.
The Chinese versus the NSA is going to be a battle. You know both of them are pushing as hard as they can. They are not turning around and they don't give a shit about guardrails.
|