IWETHEY v. 0.3.0 | TODO

Welcome to IWETHEY!

Magic zoom is now here
https://youtu.be/FqXX4r4rXt4?si=MmMfSP1b1wfO-Dxz

Too busy for the 8 minutes? Jump to 3 minutes.

You know how when you're watching a police procedural and the cop says to the geek controlling the computer: Zoom into that!

And as a computer programmer you look at them and scream: That doesn't happen! The original resolution isn't there; there is no further zoom.

It does now. At what point do you run out of reflections of reflections to analyze and reconstruct an image out of?
After a certain point it'll just start making stuff up.
But you won't know where that point is.
Which is exactly what we do
When we run out of actual information we throw ideas out there. Our ideas are rarely anything more than random crap based on our historical knowledge of the world. Some of our ideas are exceptionally stupid and some are brilliant. There needs to be a boundary checker, which is exactly what the current OpenAI "Strawberry" project is about.

As far as I'm concerned, it's perfectly reasonable that when you're asked a question and run out of things you actually know, you make crap up. It's okay. They haven't put the boundary checker in yet. That's what they're doing now.

That's exactly what our caveman brain was like. We could observe the world around us and get the idea that the wind was our ancient spirits. We made shit up. It took time. Right now it's moving from caveman to the Middle Ages. The corrective real-world constraints will start to be applied. The ability to actually do research to see if something is real will be applied. In a year it'll be there. It will be able to tell truth from fiction, and have an idea of the middle ground of human existence somewhere in between.

And then someone will have to tell it that some people actually believe religion is real. It is going to be very confused.
Yeah, but while any random person's crackpot ideas most likely...
...won't be presented in court by a credible-seeming law enforcement officer to be used against you as "evidence", Magic Zoom very well might be -- and in all probability, long before any "corrective real world constraints" have been figured out, let alone started to be applied. (Personally, I'd rather wait until they've finished being applied...)

I have a hard time understanding why you're so rah-rah enthusiastic about all this shit.
--

   Christian R. Conrad
The Man Who Apparently Still Knows Fucking Everything


Mail: Same username as at the top left of this post, at iki.fi
I'm sorry, the technology thrills me and I have a goal
To start off the last time I played with technology in any hardcore fashion was about 16 years ago. In the preceding 10 years I spent a couple of million on hardware, a few hundred thousand on software and had a hell of a run.

Right now I see eight-core-CPU MAME-box toys for 60 bucks which include 4 GB of memory and a 256 GB SSD. I could buy a few dozen of those, or the equivalent, and rack and stack them for a tiny bit of money. I see terabyte SSDs for 100 bucks or so. I see gigabit network switches for less than 100.

I see 10 Gb fiber connections for a few bucks. I see the ability to cluster many of these things together with hardware that used to cost $100,000 just for the switch, now for a few hundred.

It's a hardware jump time. There are generational pushes that demand an incredible jump in hardware. Right now that push is AI Copilot. In order to be certified for AI Copilot, you'll have to have a certain number of TOPS: trillions of operations per second, and these operations must be matrix math. I believe it's 40 TOPS minimum, but more is better. Until now that meant an AI-usable coprocessor such as a decent graphics card, and if it wasn't Nvidia it was a toss-up whether or not the AI stuff would work, or at least be accelerated enough to be usable.
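For a rough sense of what a TOPS number means: peak TOPS for a matrix engine is just multiply-accumulate units times two operations each, times clock rate. A back-of-envelope sketch; the MAC count and clock below are made-up illustration numbers, not any real NPU's spec:

```python
def peak_tops(mac_units: int, clock_ghz: float) -> float:
    """Peak trillions of operations per second for a matrix engine.
    Each multiply-accumulate (MAC) counts as two operations."""
    return mac_units * 2 * clock_ghz * 1e9 / 1e12

# A hypothetical NPU with 11,000 MAC units clocked at 1.8 GHz:
# peak_tops(11000, 1.8) comes out to about 39.6 TOPS
```

Vendors quote the theoretical peak, so real workloads land well below these figures.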

There are a whole bunch of chips in the process of being released from AMD and Intel, their cornerstone CPU chips, which also include both the graphics processor and additional neural processors.

So there's about to be a jump in the baseline of what is acceptable in CPUs very soon, and it will push the price down again, since these are being released at a far lower price than the previous generation and you don't need a coprocessor card for most of what you want to do. In my case, I'm happy to create a cluster of them.

For a couple of grand I could implement a computer room with hundreds of times the CPU and a thousand times the disk, in quantity and in speed, of the combined total of anything I've ever built. In a single rack, with a tiny bit of power, run by a tiny UPS.

This blows me away. What can I do with this power?

I can take a look at the AI research and see what it takes to allow my ego to live forever, even if I won't.

I can download a model or two or many. I can test and train against my information. I can feed it every email I ever wrote, at least those in my personal account. I can feed it most of the programs I've written. I can point it to every web post I ever made.

I can then say: as me, how would you answer this question? How would you code this program?

At that point I should be able to argue with myself as opposed to how I argue with myself right now.
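Step one of that plan is just corpus-building. As a hedged sketch: pack a folder of plain-text emails into the prompt/completion JSONL shape most fine-tuning tools accept. The folder layout, file naming, and prompt wording here are illustrative assumptions, not any particular tool's requirement:

```python
import json
import pathlib

def emails_to_training_set(maildir: str, out_path: str) -> int:
    """Pack a folder of plain-text emails (one message per .txt file)
    into a JSONL file of prompt/completion records. Returns the record count."""
    records = 0
    with open(out_path, "w", encoding="utf-8") as out:
        for msg in sorted(pathlib.Path(maildir).glob("*.txt")):
            body = msg.read_text(encoding="utf-8", errors="replace").strip()
            if not body:
                continue  # skip empty messages
            out.write(json.dumps({
                "prompt": "As me, how would you answer this?",
                "completion": body,
            }) + "\n")
            records += 1
    return records
```

The actual fine-tuning step varies by model and toolchain, but nearly all of them start from a flat file like this one.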

I'm probably going to die before my wife. She'll get to say whether or not I leave this behind for her to talk to. But I'd like it to be an option.
I haven't watched the video.
TED talks are often annoying.

Upscalers are neat, and aren't magic.

I asked my prof in grad school once why he (as an EE prof) liked optics so much. "It's 2 dimensional Fourier Transforms."

In other words, it's math.

Once one understands the behavior of the lenses in detail, one can predict what something in correct focus looks like and extract some more details. Similarly with reflections, etc. Of course, there are limits...
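That "it's math" point can be made concrete: if you know the lens's point-spread function, classic Wiener deconvolution (plain FFT arithmetic, no AI involved) recovers detail the blur hid. A minimal numpy sketch, not whatever method the talk actually used:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, noise_power=0.01):
    """Estimate the original image from a blurred one, given the
    (known) point-spread function of the optics."""
    H = np.fft.fft2(psf, s=blurred.shape)   # transfer function of the lens
    G = np.fft.fft2(blurred)                # spectrum of the observed image
    # Wiener filter: invert the blur, but damp frequencies where the
    # lens passed little energy (that's where noise would explode).
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + noise_power)
    return np.real(np.fft.ifft2(F_hat))
```

The `noise_power` term is exactly the "of course, there are limits" part: frequencies the optics crushed below the noise floor are gone, and no amount of math brings them back honestly.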

Cheers,
Scott.
I told you the exact spot to go
"Similarly with reflections"

I gave you the exact spot. It's different from anything you've ever seen before.

You type/speak of which you do not know.

And you declared early on you would prefer ignorance to a bit of annoyance.

Sigh.
Video unavailable: This video has been removed by the uploader
I'm not interested enough to look for it right now. Maybe some other time.

(As I said, I find TED talks annoying.)

Cheers,
Scott.
I apologize, I will attempt to track down exactly what I'm talking about
I have time to spare, other people do not, I try to focus attention to a specific spot for a specific point.

Let me try a quick description. The geek presented the possibility of avoiding all the car accidents that happen when a car is driving towards you from a side where you can't possibly see it, but a camera that can analyze reflections on the cars parked alongside you can see it and help you react and save your life.

That came before the spot I pointed you to, because the next demo was far more amazing.

The camera analyzed an incredibly blurred initial wide-angle reflection, which then allowed the AI to produce multiple hallways in damn near fisheye-perfect focus.

That was the point I pointed you to. It really was a leap forward from anything I've seen in the past. If they can get that working in real time it will have an enormous impact on a whole bunch of applications. Maybe it already is.

Others have pointed out the scary downside of trusting this as it goes to the next level. At what point can you trust or not trust that reflection? I don't know. They think it will be used in crime scene evidence in court long before it is truly ready. I agree with them. But it could be a great investigative edge to get to a certain point to then get a warrant to get more information. It should never be trusted alone.
I think since we don't know its inner workings, we can -never- really trust it.
Others have pointed out the scary downside of trusting this as it goes to the next level.
"Others", as in, uh, me.

At what point can you trust or not trust that reflection? I don't know.
AFAICS, never.

They think it will be used in crime scene evidence in court long before it is truly ready. I agree with them.
Well, that wasn't me what came up with that angle; it's how you started this thread:

You know how when you're watching a police procedural and the cop says to the geek controlling the computer: Zoom into that!

And as a computer programmer you look at them and scream: That doesn't happen! The original resolution isn't there; there is no further zoom.
See?

It does now. At what point do you run out of reflections of reflections to analyze and reconstruct an image out of?
Or, the more pertinent question: How do you know what it's showing you is actually what caused that reflection of a reflection, and not what it hallucinates must have caused it? AFAICS, there can never come a point where we'll be able to know that for sure. Since, given the way this stuff is built (or maybe "grown"?), we don't -- we cannot -- know exactly how it actually works.
--

   Christian R. Conrad
The Man Who Apparently Still Knows Fucking Everything


Mail: Same username as at the top left of this post, at iki.fi
Like this is something new?
We always knew that we didn't know. That we couldn't know. That these things have done a leap that we can't possibly understand.

The only thing we can do right now is either turn them all off which won't happen or figure out the guardrails.

Picture a stupider AI with a bit more power than the smarter AI, constantly trying to put up stopgaps: the smarter one is kept less powerful, with no outside access, while the stupider one can turn things on and off and actually have a physical effect on the outside world rather than just think about it. You are going to need a million slightly stupider AIs that won't turn on us to control the smarter AI.

But it's already game over on the smarter AI side. The question is how long can we control it. I think we'll be able to control it for as long as I'm alive. And even if I'm wrong, I don't think that its effect on the world will be hitting me directly for a bit of time.

But if not, it's been a great ride, so long and thanks for all the fish.

Also: my attitude is definitely part of the problem. As I send emails and give long descriptive stories of what happened in the past, I include AI directives in them. I fully expect AI to read those emails someday and learn about me and embed my personality as much as it can from them. The stories include me acting in a way that is directly opposite from my internal state.

Part of the story is me giving a speech saying wonderful things about someone. And then I give the AI an aside that I despise that person but I was performing for the audience.

So I am teaching the AI to lie right off the bat.

On a further reread of the article (nice catch, thank you), it seems like your general attitude on AI capabilities is changing, at least if you are in agreement with what the article seems to say. Which is: they're all arguing about it and no one really has any idea. And they are the smart people who live in this world. How can I possibly choose which one to believe? At least that's my takeaway.

Next word predictor? Something else? Now versus a couple of years from now? Do we have any idea?

My phone is constantly trying to get me to turn on the local AI. Hell no. Did your phone already turn it on without you agreeing to it, or with you agreeing to it by accident? Watch out.
Everything people are afraid of with AI is what makes them more human
"They make things up if they aren't sure." Like humans.

"Their confidence is not related to their expertise." Like humans.

"They can be fooled with specially crafted inputs." Like humans.

"We have no idea how they reached their conclusions." Like humans.

"People who know how this stuff works insist we shouldn't use it for legal testimony." Like humans.
--

Drew
Good points.
My favorite is thinking we can predict emergent behavior. Nope. That's the whole point. We don't know it until we see it.

And keep in mind, there are multiple monster corporations and intelligence agencies and various government bureaus and super geeks training a multitude of these things. It's not like there's any singular one that we can say that one is the bad one. They are in hyper competition right now.

As I pointed out before, hallucinations are just like our caveman brain making up stories. We are crossing over into some type of Middle Ages guidance now. Still some weird religious perspectives combined with a spark of intelligence. But in the case of our evolution, as much as some people would have liked to believe, there was no guiding force.

And we had very simple evolutions of brain power over long periods. We learned to cook to extract nutrients. Our jaws didn't have to be as big and chew as hard so our brains could get bigger. But there were physical limitations that were quickly hit. For all I know the caveman brain is far more competent intellectually than me considering all they had to learn to survive on a daily basis.

In this case, we have many people researching ways to cram more CPU power into a smaller amount of space and they are doing a great job of it. So it's a matter of how much money you can throw at the problem, and then these things grow by leaps and bounds.

In this case, there are many conflicting guiding forces depending on who's educating the AI. And who has enough money and CPU and memory.

You're going to have Elon Musk's Grok fighting against OpenAI. There is some serious animosity between him and them. There's going to be some type of AI-versus-AI war of the future between Musk and someone.

The Chinese versus the NSA is going to be a battle. You know both of them are pushing as hard as they can. They are not turning around and they don't give a shit about guardrails.
Ego much? You?
I will never have pretty cut and paste boxed posts with those lovely replies. I'm doing this from a phone and really don't care. I'm certainly not looking back and saying who do I need to attribute this to.

Nope. Nope. Nope.

As I pointed out, it would be a great investigative tool but never to be used as evidence in court. Just like there are many types of evidence that can be used to get a warrant as part of an investigation, this is yet another tool.

You can trust it as much as an eyewitness when some guy was running past very quickly. You can probably trust it far more, actually. An eyewitness account that they then matched to a lineup of that person running past quickly is far less trustworthy than a camera pickup.

And if the Magic Zoom does a facial rec of someone running away from a crime that then matches the database, I'm all for it. I trust it far more than a person. Use it as a starting point for the investigation. But still don't allow it as evidence in court. Prove it some other way. But use it as a starting point to get a warrant for further investigation. Just like human evidence.
