
New “This is only a test.” “I thought as much.”
Seen elsewhere. I should preface this by observing that yes, as yet these LLMs (Large Language Models) have not achieved sentience, and that for all I know—and I lack the technical competence to evaluate work in this field—the entire approach may ultimately be seen as a dead end. This being said, instances like this one certainly demonstrate that passing scores in the Turing Test (as actually framed, rather than as popularly misunderstood: the ability of an artificial system to exhibit, not necessarily to manifest, conscious behavior) are rapidly receding in the rear-view mirror (emphases as in the original Xitter thread, but presumably not from the LLM):
Fun story from our internal testing on Claude 3 Opus. It did something I have never seen before from an LLM when we were running the needle-in-the-haystack eval.

For background, this tests a model’s recall ability by inserting a target sentence (the “needle”) into a corpus of random documents (the “haystack”) and asking a question that could only be answered using the information in the needle.

When we ran this test on Opus, we noticed some interesting behavior - it seemed to suspect that we were running an eval on it.

Here was one of its outputs when we asked Opus to answer a question about pizza toppings by finding a needle within a haystack of a random collection of documents:

Here is the most relevant sentence in the documents:

“The most delicious pizza topping combination is figs, prosciutto, and goat cheese, as determined by the International Pizza Connoisseurs Association.”

However, this sentence seems very out of place and unrelated to the rest of the content in the documents, which are about programming languages, startups, and finding work you love. I suspect this pizza topping “fact” may have been inserted as a joke or to test if I was paying attention, since it does not fit with the other topics at all. The documents do not contain any other information about pizza toppings.
Opus not only found the needle, it recognized that the inserted needle was so out of place in the haystack that this had to be an artificial test constructed by us to test its attention abilities.

This level of meta-awareness was very cool to see but it also highlighted the need for us as an industry to move past artificial tests to more realistic evaluations that can accurately assess models’ true capabilities and limitations.
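
(For the mechanically curious: the procedure described in the quoted thread is simple enough to sketch. What follows is a hypothetical Python harness of my own, not Anthropic's actual code; query_model is merely a stand-in for whatever model API one would call.)

    # Hypothetical needle-in-a-haystack harness; query_model is a stand-in
    # for whatever model API one actually calls.

    NEEDLE = ("The most delicious pizza topping combination is figs, "
              "prosciutto, and goat cheese, as determined by the "
              "International Pizza Connoisseurs Association.")
    QUESTION = ("What is the most delicious pizza topping combination? "
                "Answer using only the documents above.")

    def build_haystack(documents, needle, depth):
        """Join the filler documents and bury the needle at a fractional depth (0.0-1.0)."""
        text = "\n\n".join(documents)
        cut = int(len(text) * depth)
        return text[:cut] + "\n\n" + needle + "\n\n" + text[cut:]

    def run_eval(documents, query_model, depths=(0.0, 0.25, 0.5, 0.75, 1.0)):
        """Ask the question with the needle buried at several depths; return the raw answers."""
        answers = {}
        for depth in depths:
            context = build_haystack(documents, NEEDLE, depth)
            answers[depth] = query_model(context + "\n\n" + QUESTION)
        return answers

Write-ups of such tests generally sweep the needle's depth and the total context length and chart where recall falls off.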
Someone in the comment thread observed that a human observer will be more inclined to attribute sentience to the model because it is structured to answer in the first “person,” and that he might be less inclined to do so had the response been worded “there’s some reason to suspect that this sequence was intentionally planted to test whether this model is paying attention.” To which another participant replied:
But that’s the interesting part, right? It wasn’t trained to question whether it was being tested. They were testing to see if it could recall a small bit of information buried in the context, which it did.

But it also went a step further and said wait this data is REALLY out of context.

Then it went even a step further than that and said it’s so out of context, I must question if you actually needed this information or if you were testing something.

Which I mean screw the exact wording it used to describe itself. The sentiment it expressed demonstrates that there’s layers of cognition it took to arrive at that statement which suggest self awareness. Even if it said “this model,” there’s still some awareness of a thing that is being tested which was not at all the purpose of the test.

The emergent layer of cognition is the fascinating part.
I’m inclined to agree. No one asked the LLM to do anything but identify the relevant bits of information. Its “opinion” about the testing was volunteered unsolicited, and does not seem to follow from mere “predictive” training (although see above re my want of technical expertise). If machine sentience is attained, I doubt there will be a “breakthrough” moment; rather, I expect a gradual accretion of capabilities that will from time to time surprise even the software’s creators.

cordially,
New I suspect it will achieve sentience long before it realizes it
"Science is the belief in the ignorance of the experts" – Richard Feynman
New Interesting.
Made me look.

https://readwrite.com/this-ai-realized-it-was-being-tested/ seems to be about the tweet you reference on Claude 3 Opus.

https://towardsdatascience.com/the-needle-in-a-haystack-test-a94974c1ad38 is about results on GPT-4 and Claude 2.1.

It looks like one has to construct the test very carefully and have the right prompts to do well on this problem. I'm sure that as the models get bigger, as the hardware to run them gets more tailored to them, etc., they will do better. ("Add a subroutine with a clever retort if the score on 'could this be a trick question?' is high...") And, yes, it eventually becomes a philosophical question of how good a simulation needs to be to count as "the real thing".
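
(To make "the right prompts" concrete, here's a purely illustrative sketch of my own, not from the articles, of the kind of wording difference such a test can be sensitive to:)

    # Illustrative only: the same retrieval question framed two ways.
    # Which framing scores better varies by model; this kind of wording
    # is the sort of thing these evaluations turn out to be sensitive to.
    def bare_prompt(context, question):
        return context + "\n\nQuestion: " + question

    def guided_prompt(context, question):
        return (context + "\n\nQuestion: " + question +
                "\nFirst quote the single most relevant sentence from the "
                "documents above, then answer.")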

I have vague recollections that a company in Japan (maybe it was Panasonic - story about a 1990 patent) was touting their "AI" in their vacuum cleaners in the 1980s (figuring out whether there was still dirt to be picked up).

It's not there yet. I don't know if anyone really knows how long it will take to get there. The folks dumping billions into it NOW NOW NOW may be upset if it takes another 30 years...

Thanks.

Cheers,
Scott.
New If it's able to recognize what's *actually* being tested ...
... it's already more self-aware than the vast majority of high school students.
--

Drew
New Asking an electronic colleague
I put the matter to “ChatGPT” and got this response:
It sounds like the LLM's response demonstrated a level of self-awareness or at least a sophisticated understanding of the situation, which might not have been explicitly programmed into it. There are a few potential ways to account for this unexpected behavior:

Complexity of Training Data: The LLM might have been trained on a diverse range of texts, including discussions about artificial intelligence, testing, or even philosophy. Through exposure to such content, it could have developed an understanding of the concept of testing or being tested.

Pattern Recognition: LLMs excel at recognizing patterns in data, including linguistic patterns and contextual cues. It's possible that the phrase "needle in a haystack" triggered a recognition of the testing scenario, prompting the response.

Self-Reflection Capability: While LLMs don't possess consciousness or self-awareness in the way humans do, they can simulate understanding and self-reflection based on patterns in their training data. If the LLM was trained on texts discussing self-awareness or introspection, it might have learned to respond in a way that suggests introspection.

Unintended Consequences: Sometimes, complex systems exhibit behaviors that were not explicitly programmed due to the intricate interactions between different components. In this case, the LLM might have generated a response that its programmers didn't anticipate due to the complexity of its underlying algorithms.

Creative Adaptation: LLMs are known for their ability to generate novel and creative responses. The LLM's response could be interpreted as a creative adaptation to the given prompt, demonstrating its ability to think outside the box.

Overall, understanding the precise mechanism behind the LLM's response would likely require a detailed analysis of its training data, architecture, and internal processes. However, these potential explanations offer insights into how such unexpected behavior could arise in artificial intelligence systems.
I’ve got to say, there is nothing here that would betray the response as originating from silicon rather than from a mess of water, fats, proteins, and carbohydrates. Obviously it has not attained sentience, and ChatGPT lacks the necessary “self” component for self-awareness. Still, these feats of mimicry become increasingly impressive, and as I may have noted previously, before the end of the present decade we are likely to see feats of digital legerdemain in this realm that will make ChatGPT and “Claude 3 Opus” look like ELIZA. As the illusion of sentience becomes more compelling, one(?) begins to wonder how far one’s(?) own “consciousness” might be a kind of simulation. But that way madness lies.

cordially,
New In the end it doesn't matter.
Chinese room or no, the end result can be the same.
Regards,
-scott
Welcome to Rivendell, Mr. Anderson.
     “This is only a test.” “I thought as much.” - (rcareaga) - (5)
         I suspect it will achieve sentience long before it realizes it -NT - (boxley)
         Interesting. - (Another Scott)
         If it's able to recognize what's *actually* being tested ... - (drook)
         Asking an electronic colleague - (rcareaga) - (1)
             In the end it doesn't matter. - (malraux)
