IWETHEY v. 0.3.0

Updike misUpderstood
I asked ChatGPT to give me some faux John Updike (“Write a short story about a love affair that begins with a minor traffic accident, in the style of John Updike”). Let’s just say that its output would require some editorial massaging before I’d consider tossing it over the transom at The New Yorker, representing it as a hitherto undiscovered tale from the Wizard of Ipswich:
It was a typical autumn afternoon in the small town of Willowdale. The leaves were turning shades of red and orange, and the crisp air was a welcome relief after a long, hot summer. Sarah had just left her job at the local bookstore and was on her way home when it happened.

A minor fender bender. She hadn't been paying attention, and before she knew it, she had rear-ended the car in front of her. She got out of her car, feeling embarrassed and flustered, and approached the other driver, hoping for a quick and easy exchange of insurance information.

But when she saw him, her embarrassment turned to admiration. He was tall and lean, with a rugged handsomeness that made her heart skip a beat. His name was Jack, and he had a kind smile and warm eyes that made her feel instantly at ease.

They exchanged insurance information, and as they waited for the police to arrive, they struck up a conversation. They talked about their jobs, their families, and their interests, discovering that they had much in common. Sarah was surprised at how easy it was to talk to Jack, and she found herself laughing and joking with him, feeling as if she had known him for years.

After the police arrived and the accident was cleared up, Jack asked Sarah if she wanted to grab a drink with him. She hesitated for a moment, feeling a bit shy and unsure, but then she realized that this was an opportunity she didn't want to miss.

They went to a small bar just a few blocks away, and over drinks and appetizers, they continued to talk and laugh. Sarah found herself drawn to Jack's easy confidence and charm, and she felt a growing attraction to him that she couldn't ignore.

As the evening wore on, Jack suggested they take a walk around the town square. They strolled arm in arm, admiring the autumn leaves and enjoying each other's company. It was a perfect night, and Sarah felt as if she were in a dream.

When they finally said goodnight, Jack leaned in and kissed her softly on the lips. It was a tender and gentle kiss, but it was enough to ignite a spark of passion inside Sarah. She knew then that this was the beginning of something special, something that had started with a minor traffic accident but had blossomed into a beautiful love affair.

And so began their love story, a story that would be filled with laughter and joy, heartache and pain, but always with a deep and abiding love that would carry them through the ups and downs of life together.
OK, risible. What was more interesting was my followup exchange:
RC: The story generated in Updike's style was nowhere near that of the model. On the other hand, a requested mimicry of a Shakespeare sonnet [elsewhere] was much closer to Shakespeare's style. I have my own notions of why this might be, but how does an AI language model account for the difference?

Bot: As an AI language model, I rely on a large corpus of text data to learn the patterns and structures of different writing styles. The quality of the generated text is dependent on the amount and quality of the data used for training. While John Updike's writing style may be more complex and nuanced, the model's ability to replicate his style is limited by the amount of data available for training.

In the case of Shakespeare, there is a wealth of text available for the model to train on, and his writing style is characterized by distinctive patterns of meter, rhyme, and syntax that the model can learn to replicate more accurately. Additionally, Shakespeare's writing style is more formulaic and constrained than Updike's, which may make it easier for the model to replicate.

It's also worth noting that the accuracy of the model's output depends on the prompt provided by the user. If the prompt is well-formed and clearly specifies the desired writing style or content, the model is more likely to produce a response that matches the user's expectations.

Overall, while AI language models have made significant progress in replicating the writing styles of human authors, there are still limitations to their abilities. The quality of the output depends on a range of factors, including the amount and quality of the training data, the complexity of the writing style, and the accuracy and specificity of the prompt provided by the user.

RC: That’s a well-reasoned response, and close to what I supposed might be the factors behind it. I assume that this might be because all of Shakespeare is unrestrictedly available on the internet, whereas much of Updike's corpus is walled off from unpaid public access. Is this assumption correct? Assuming that the language model had access to all of Updike's fiction, how closely might it be able to mimic his style?

Bot: Your assumption is correct - the availability of text data is a key factor in the ability of AI language models to replicate an author's style. If a large corpus of Updike's fiction were made available for training the language model, it would likely be able to replicate his style more closely than with a limited corpus.

However, even with access to a large corpus of Updike's fiction, there are still challenges to replicating his style. Updike's writing is characterized by complex sentence structures, rich descriptions, and a subtle use of language that can be difficult for an AI language model to replicate. The model would need to be trained on a diverse range of Updike's works to capture the full range of his writing style, including his use of voice, perspective, and literary devices.

Furthermore, while AI language models have made significant progress in replicating human writing styles, they are not yet able to fully capture the nuances of an author's style, voice, and perspective. There will always be some level of difference between the writing produced by an AI language model and that produced by a human author.

Overall, while access to a large corpus of Updike's fiction would likely improve the model's ability to replicate his style, there are still limitations to the model's ability to capture the full range of his writing style. The accuracy of the output will always be dependent on a range of factors, including the quality and quantity of the training data, the complexity of the writing style, and the skill of the model in learning and replicating that style.
I get that there’s “no one home,” but this exchange goddamn well passes the Turing test of exhibiting (not manifesting) conscious behavior. As I’ve said before, I thought we’d get here eventually, but not this soon.

cordially,
Here's the thing
Those responses are well reasoned, well presented, and thoroughly convincing. But are they true?

That's something the Turing test never considered. Nor, I think, does it really need to. If an "AI" can successfully replicate a stupid, dishonest human, it's still ~equivalent to a human.

This utterly destroys the foundation of most high school (and possibly college) composition syllabi.
--

Drew
I would have asked: why would Updike carry insurance?
"Science is the belief in the ignorance of the experts" – Richard Feynman
It's the Chinese Room.
Another plug here for "Blindsight" by Peter Watts.

Basically they're approaching p-zombie stage where there's no "there" there, but you can't tell the difference.
Regards,
-scott
Welcome to Rivendell, Mr. Anderson.
“Blindsight” was indeed impressive
I need to unearth it again. It may be that “consciousness” as we understand it, is overrated. The notion that some vast superhuman entity, if it exists—call it God, or some non-personified Cosmic Organizing Principle—would perforce be “self”-aware strikes me as the sort of assumption that comes as naturally to humans as might the belief, to a race of theologically-minded grizzlies, that the production of nice fat salmon is the necessary end and goal of the universe.

And when, as certainly seems possible, we devise entities that, with no “there” there, behave otherwise, appearing not merely to exhibit intentionality but to act* as though possessing this, might we begin to ponder our own processes—is there truly a there here?

idly,

*As in one of those fanciful doomsday scenarios wherein an AI directed to manufacture paperclips exterminates humanity “without a thought”—not even for shits and giggles—in order to maximize production.
Edited by rcareaga April 25, 2023, 05:39:46 PM EDT
Well, consciousness is a problem.
Science doesn't know what it is and has no way to measure it. It was formerly thought that consciousness was created by complexity; today some, especially those exposed to quantum physics, think it is consciousness that produces complexity.

Without consciousness it is all clever programming and lacks intent - though it can be programmed to imitate intent.
Imitated intent is just as valid.
And that's the whole point: simulated intelligence/consciousness/intent is like the equivalence principle - from inside your reference frame you can't tell constant acceleration from a gravity well. Might as well assume it's the real thing if you can't tell the difference from where you stand.
Regards,
-scott
Welcome to Rivendell, Mr. Anderson.
I looked that up last time you mentioned it.
Felt so familiar I'm fairly sure I've read it... But too long ago, can't really remember what I thought of it.
--

   Christian R. Conrad
The Man Who Apparently Still Knows Fucking Everything


Mail: Same username as at the top left of this post, at iki.fi
It's online
https://rifters.com/real/Blindsight.htm

Putting that on my between-meetings reading list.
--

Drew
Thanks!