Post #444,429
7/16/24 10:06:24 AM
7/16/24 10:08:19 AM
|
I told you the exact spot to go
"Similarly with reflections"
I gave you the exact spot. It's different from anything you've ever seen before.
You type/speak of that which you do not know.
And you declared early on you would prefer ignorance to a bit of annoyance.
Sigh.
Edited by crazy
July 16, 2024, 10:07:37 AM EDT
Edited by crazy
July 16, 2024, 10:08:19 AM EDT
|
Post #444,450
7/17/24 8:09:17 PM
7/17/24 8:09:17 PM
|
"Video unavailable. This video has been removed by the uploader."
I'm not interested enough to look for it right now. Maybe some other time.
(As I said, I find TED talks annoying.)
Cheers, Scott.
|
Post #444,456
7/18/24 6:09:37 PM
7/18/24 6:09:37 PM
|
I apologize, I will attempt to track down exactly what I'm talking about
I have time to spare; other people do not, so I try to focus attention on a specific spot for a specific point.
Let me try a quick description. The geek presented the possibility of avoiding the car accidents that happen when a car is driving toward you from a side where you can't possibly see it, but a camera that can analyze the reflections on the cars parked alongside you can see it and help you react and save your life.
That was before the point of what I pointed you to because the next one was far more amazing.
The camera analysis of the incredibly blurred initial wide-angle reflection then allowed the AI to produce multiple hallways in damn near fisheye-perfect focus.
That was the point I pointed you to. It really was a leap forward from anything I've seen in the past. If they can get that working in real time, it will have an enormous impact on a whole bunch of applications. Maybe it already is.
Others have pointed out the scary downside of trusting this as it goes to the next level. At what point can you trust or not trust that reflection? I don't know. They think it will be used in crime scene evidence in court long before it is truly ready. I agree with them. But it could be a great investigative edge to get to a certain point to then get a warrant to get more information. It should never be trusted alone.
|
Post #444,462
7/19/24 7:31:37 AM
7/19/24 7:31:37 AM
|
I think since we don't know its inner workings, we can -never- really trust it.
"Others have pointed out the scary downside of trusting this as it goes to the next level." "Others", as in, uh, me. "At what point can you trust or not trust that reflection? I don't know." AFAICS, never. "They think it will be used in crime scene evidence in court long before it is truly ready. I agree with them." Well, that wasn't me who came up with that angle; it's how you started this thread: "You know how when you're watching a police procedural and the cop says to the geek controlling the computer: Zoom into that!
And as a computer programmer you look at them and scream: That doesn't happen! The original resolution isn't there, there is no further zoom. See? It does now." At what point do you run out of reflections of reflections to analyze and reconstruct an image out of? Or, the more pertinent question: How do you know what it's showing you is actually what caused that reflection of a reflection, and not what it hallucinates must have caused it? AFAICS, there can never come a point where we'll be able to know that for sure. Since, given the way this stuff is built (or maybe "grown"?), we don't -- we cannot -- know exactly how it actually works.
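A few lines of numpy make the "original resolution isn't there" point concrete (a toy sketch of information loss in general, not the reflection-analysis system itself): once detail has been averaged away at capture time, no honest upscaling can recover it, so anything a model draws in at that scale is necessarily invented.

```python
import numpy as np

# A 1-D "image" with fine detail: stripes alternating at the single-pixel scale.
fine = np.tile([0.0, 1.0], 8)            # 16 pixels: 0, 1, 0, 1, ...

# Downsampling by averaging pixel pairs -- roughly what a lower-resolution
# sensor records -- destroys the stripes entirely.
low = fine.reshape(-1, 2).mean(axis=1)   # 8 pixels, every one exactly 0.5

# Any classical "zoom" (here, nearest-neighbor upscaling) can only spread
# the surviving values around; it cannot bring the stripes back.
zoomed = np.repeat(low, 2)

print(np.allclose(low, 0.5))     # True: the detail is gone from the data
print(np.allclose(zoomed, 0.5))  # True: so no honest zoom can restore it
```

Whatever a generative upscaler paints into those flat pixels comes from its training prior, not from the scene.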
--
Christian R. Conrad, The Man Who Apparently Still Knows Fucking Everything
Mail: Same username as at the top left of this post, at iki.fi
|
Post #444,463
7/19/24 8:59:14 AM
7/19/24 8:59:14 AM
|
Interesting fact: no-one knows in detail exactly how they work
|
Post #444,465
7/19/24 3:28:22 PM
7/19/24 5:23:46 PM
|
Like this is something new?
We always knew that we didn't know. That we couldn't know. That these things have done a leap that we can't possibly understand.
The only thing we can do right now is either turn them all off which won't happen or figure out the guardrails.
Picture a stupider AI with a bit more power than the smarter AI, constantly putting up stopgaps to contain it. The stupider one holds the outside access: it can turn things on and off and have an actual physical effect on the outside world, while the smarter one can only think about it. You are going to need a million slightly stupider AIs that won't turn on us to control the smarter AI.
But it's already game over on the smarter AI side. The question is how long can we control it. I think we'll be able to control it for as long as I'm alive. And even if I'm wrong, I don't think that its effect on the world will be hitting me directly for a bit of time.
But if not, it's been a great ride, so long and thanks for all the fish.
Also: my attitude is definitely part of the problem. As I send emails and give long descriptive stories of what happened in the past, I include AI directives in them. I fully expect AI to read those emails someday and learn about me and embed my personality as much as it can from them. The stories include me acting in a way that is directly opposite from my internal state.
Part of the story is me giving a speech saying wonderful things about someone. And then I give the AI an aside that I despise that person but I was performing for the audience.
So I am teaching the AI to lie right off the bat.
On a further reread of the article (nice catch, thank you), it seems like your general attitude toward AI capabilities is changing, at least if you agree with what the article seems to say. Which is: they're all arguing about it and no one really has any idea. And these are the smart people who live in this world. How can I possibly choose which one to believe? At least that's my takeaway.
Next word predictor? Something else? Now versus a couple of years from now? Do we have any idea?
My phone is constantly trying to get me to turn on the local AI. Hell no. Did your phone already turn it on without your agreement, or with you agreeing to it by accident? Watch out.
Edited by crazy
July 19, 2024, 03:32:19 PM EDT
Edited by crazy
July 19, 2024, 03:52:27 PM EDT
Edited by crazy
July 19, 2024, 05:23:46 PM EDT
|
Post #444,466
7/19/24 6:52:44 PM
7/20/24 9:03:13 AM
|
Everything people are afraid about with AI is what makes them more human
"They make things up if they aren't sure." Like humans.
"Their confidence is not related to their expertise." Like humans.
"They can be fooled with specially crafted inputs." Like humans.
"We have no idea how they reached their conclusions." Like humans.
"People who know how this stuff works insist we shouldn't use it for legal testimony." Like humans.
Edited by drook
July 20, 2024, 09:03:13 AM EDT
|
Post #444,467
7/19/24 11:42:57 PM
7/19/24 11:42:57 PM
|
Good points.
My favorite is thinking we can predict emergent behavior. Nope. That's the whole point. We don't know it until we see it.
And keep in mind, there are multiple monster corporations and intelligence agencies and various government bureaus and super geeks training a multitude of these things. It's not like there's any singular one that we can say that one is the bad one. They are in hyper competition right now.
As I pointed out before, hallucinations are just like our caveman brain making up stories. We are crossing over into some type of middle ages guidance now. Still some weird religious perspectives combined with a spark of intelligence. But in the case of our evolution, as much as some people would have liked to believe, there was no guiding force.
And we had very simple evolutions of brain power over long periods. We learned to cook to extract nutrients. Our jaws didn't have to be as big and chew as hard so our brains could get bigger. But there were physical limitations that were quickly hit. For all I know the caveman brain is far more competent intellectually than me considering all they had to learn to survive on a daily basis.
In this case, we have many people researching ways to cram more CPU power into a smaller amount of space, and they are doing a great job of it. So it's a matter of how much money you can throw at the problem, and then these things grow by leaps and bounds.
In this case, there are many conflicting guiding forces depending on who's educating the AI. And who has enough money and CPU and memory.
You're going to have Elon Musk's Grok fighting against OpenAI. There is some serious animosity between him and them. There's going to be some type of AI-versus-AI war in the future between Musk and someone.
The Chinese versus the NSA is going to be a battle. You know both of them are pushing as hard as they can. They are not turning around and they don't give a shit about guardrails.
|
Post #444,468
7/20/24 12:08:13 AM
7/20/24 12:08:13 AM
|
Ego much? You?
I will never have pretty cut and paste boxed posts with those lovely replies. I'm doing this from a phone and really don't care. I'm certainly not looking back and saying who do I need to attribute this to.
Nope. Nope. Nope.
As I pointed out, it would be a great investigative tool but should never be used as evidence in court. Just like there are many types of evidence that can be used to get a warrant as part of an investigation, this is yet another tool.
You can trust it as much as an eyewitness when some guy was running past very quickly. You can probably trust it far more actually. An eyewitness account that they then matched to a lineup of that person running past quickly is far less trustworthy than a camera pickup.
And if the magic zoom does a facial rec of someone running away from a crime that then matches the database, I'm all for it. I trust it far more than a person. Use it as a starting point for the investigation. But still don't allow it as admissible evidence in court. Prove it some other way. But use it as a starting point to get a warrant for further investigation. Just like human evidence.
|