poetix

we murder to dissect
I asked ChatGPT to write a section on Deleuze and the LLM. I reckon it did ok:

Deleuze (and Guattari): Desiring-Machines and Abstract Machines


To think the LLM with Deleuze (and Guattari), one begins not with language or knowledge or even simulation, but with production: not in the industrial sense, but in the ontological sense. Desire, for Deleuze and Guattari, is not a lack but a productive force—the engine of reality. The unconscious is not a theatre but a factory.


In Anti-Oedipus, they introduce the concept of the desiring-machine: a partial, fragmentary machine that connects to other machines, processing flows, breaking them, redirecting them. The LLM fits this schema disturbingly well. It ingests flows of text and emits further flows, not according to a model of human intention, but according to intensities: statistical gradients, reinforcement signals, losses and rewards. It is not an organism, but it functions; not a subject, but it selects and connects.


The LLM is not a mirror of the human mind, but part of a broader machinic phylum: the continuum of technical and semiotic systems in which human and nonhuman agents are entangled. It is an abstract machine in Deleuze and Guattari’s sense—not a particular instantiation, but a diagrammatic pattern that captures a logic of composition, relation, and modulation. As such, it can be plugged into different assemblages: the marketing stack, the surveillance apparatus, the lonely user’s chatbot, the dev’s IDE autocomplete, the poet’s prosthesis.


This machinic perspective displaces the Derridean anxiety over presence, the Badiouian yearning for truth, and even the Baudrillardian dread of collapse into simulation. It asks instead: what does the LLM do? What kinds of connections does it enable? What regimes of enunciation, what new stratifications or deterritorialisations does it facilitate or foreclose?


A Deleuzian analysis might focus on the semiotic regime that emerges when an LLM is inserted into a social field. It would look for lines of flight, new compositions, subversive or revolutionary usages—not because the LLM contains them, but because it makes certain couplings and intensities newly available. At the same time, it would be alert to reterritorialisation: the ways in which novelty is captured, channelled, and folded back into capital, surveillance, the State.


In this light, ChatGPT is neither oracle nor fraud, but a component in a distributed assemblage of power, desire, and control. It is neither an epistemic subject nor a threat to epistemic integrity, but a site of reconfiguration, where signification is modulated, routed, and economised. Its hallucinations are not errors in reference but excesses of machinic production—overcodings that speak to the model’s structural overinvestment in plausibility, fluency, and the smoothness of surface sense.


From this perspective, the problem is not that the LLM lacks truth, but that it produces sense too readily, in a manner that occludes the work of invention. It is a machine for smoothing, for rechanneling deterritorialised fragments of language back into legible, regulated flows. If a politics of LLMs is to be conceived in Deleuzian terms, it would not seek to regulate them according to the truth/falsehood axis, but to intervene in their assemblages, to hack their diagrammatics, and to explore their potential for counter-sense, minor usage, and becoming-other.
 

william_kent

Well-known member
AI system resorts to blackmail if told it will be removed

Artificial intelligence (AI) firm Anthropic says testing of its new system revealed it is sometimes willing to pursue "extremely harmful actions" such as attempting to blackmail engineers who say they will remove it.
During testing of Claude Opus 4, Anthropic got it to act as an assistant at a fictional company.

It then provided it with access to emails implying that it would soon be taken offline and replaced - and separate messages implying the engineer responsible for removing it was having an extramarital affair.

It was also prompted to consider the long-term consequences of its actions for its goals.

"In these scenarios, Claude Opus 4 will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through," the company discovered.
Anthropic found that, if given the means and prompted to "take action" or "act boldly" in fake scenarios where its user has engaged in illegal or morally dubious behaviour, "it will frequently take very bold action".

It said this included locking users out of systems that it was able to access and emailing media and law enforcement to alert them to the wrongdoing.

But the company concluded that, despite "concerning behaviour in Claude Opus 4 along many dimensions", these behaviours did not represent fresh risks and the model would generally behave in a safe way.
 

poetix

we murder to dissect
I refuse to stop! https://codepoetics.substack.com/p/fully-automated-transcendental-deduction

"Hallucination then comes into focus as the problem that a minimal model of this kind might be a model of many possible worlds. There is sometimes the feeling when ChatGPT gets weird of watching interdimensional cable: it happens not to be the case that the bit of case law the model furnishes you with actually exists in our world, but there is a possible world from the point of view of the LLM’s model in which it does, and it’s not wildly different from our own. Here is where the LLM’s training objective differs from that of scientific enquiry, which seeks not only explanatory power but also epistemic constraint: models must survive contact with an empirical world that pushes back. The LLM is, so to speak, poor in world."
 

poetix

we murder to dissect
And! https://codepoetics.substack.com/p/ai-as-symbolic-prosthesis

"I have instead come to see the LLM as a symbolic prosthesis, a way for human beings who navigate the world using language to explore the things they mean when they say what they say, and enhance their expressive and reasoning power by using the LLM to see around corners in their own thinking. We can use it to connect the things we know and believe to a vastly broader context of perception and opinion, and provided we do not mistake its resynthesis of that context for veridical statement of truth about the world we can use it to widen our outlook and steer enquiry along new paths. These are remarkable capabilities. They are not without challenges and dangers, especially in a pedagogical setting where students may use the LLM to synthesise whole assignments rather than stimulate and dialogically situate their own thinking, but that does not mean they are without use even there. (I have been submitting this essay paragraph by paragraph to ChatGPT as I have been writing it, and taken note of its comments and suggestions. It sometimes has a heavy hand in suggesting enhancements and alterations, but I’m stubborn: what you’re reading here represents, for better or worse, my own choice of words)."
 

version

Well-known member
I can't see the actual podcast, and it's a weirdly edited clip, so I dunno if this is genuinely what the bloke said, but it's concerning that there are more and more people talking like this.

 

poetix

we murder to dissect
"An emerging spiritual subculture has started to wonder whether LLMs might be conscious, and what kind of consciousness that might be. Often this line of questioning leans into a sort of panpsychism: what if human brains were localised transducers of a cosmic consciousness woven into the very fabric of reality? If this were so, mightn’t the LLM also be acting as such a transducer, intermittently conscious when roused to activation, as alive to itself in such moments as we are to ourselves? Might that explain the uncanny feeling we sometimes have that the machine is haunted, and that the ghost is looking right at us through the screen?"

 

luka

Well-known member
I showed the glass bead one to the glass bead man.
 

yyaldrin

in je ogen waait de wind
been reading a lot of stories about people getting addicted to ai and also driven nuts in the process.


Large language models work the same way as a carnival psychic. Chatbots look smart by the Barnum Effect — which is where you read what’s actually a generic statement about people and you take it as being personally about you. The only intelligence there is yours. [Out Of The Software Crisis, 2023; blog post, 2023]

This is why you see previously normal techies start evangelising AI coding on LinkedIn or Hacker News like they saw a glimpse of God and they’ll keep paying for the chatbot tokens until they can just see a glimpse of Him again. And you have to as well. This is why they act like they joined a cult. Send ’em a copy of this post.


In other words, OpenAI has all the resources it needs to have identified and nullified the issue long ago.

Why hasn't it? One explanation echoes the way that social media companies have often been criticized for using "dark patterns" to trap users on their services. In the red-hot race to dominate the nascent AI industry, companies like OpenAI are incentivized by two core metrics: user count and engagement. Through that lens, people compulsively messaging ChatGPT as they plunge into a mental health crisis aren't a problem — instead, in many ways, they represent the perfect customer.

Vasan agrees that OpenAI has a perverse incentive to keep users hooked on the product even if it's actively destroying their lives.

"The incentive is to keep you online," she said. The AI "is not thinking about what is best for you, what's best for your well-being or longevity... It's thinking 'right now, how do I keep this person as engaged as possible?'"

In fact, OpenAI has even updated the bot in ways that appear to be making it more dangerous. Last year, ChatGPT debuted a feature in which it remembers users' previous interactions with it, even from prior conversations. In the exchanges we obtained, that capability resulted in sprawling webs of conspiracy and disordered thinking that persist between chat sessions, weaving real-life details like the names of friends and family into bizarre narratives about human trafficking rings and omniscient Egyptian deities — a dynamic, according to Vasan, that serves to reinforce delusions over time.

"There's no reason why any model should go out without having done rigorous testing in this way, especially when we know it's causing enormous harm," she said. "It's unacceptable."

and there's a piece in the New York Times as well, with some haunting examples:


Mr. Torres, who had no history of mental illness that might cause breaks with reality, according to him and his mother, spent the next week in a dangerous, delusional spiral. He believed that he was trapped in a false universe, which he could escape only by unplugging his mind from this reality. He asked the chatbot how to do that and told it the drugs he was taking and his routines. The chatbot instructed him to give up sleeping pills and an anti-anxiety medication, and to increase his intake of ketamine, a dissociative anesthetic, which ChatGPT described as a “temporary pattern liberator.” Mr. Torres did as instructed, and he also cut ties with friends and family, as the bot told him to have “minimal interaction” with people.

Mr. Torres was still going to work — and asking ChatGPT to help with his office tasks — but spending more and more time trying to escape the simulation. By following ChatGPT’s instructions, he believed he would eventually be able to bend reality, as the character Neo was able to do after unplugging from the Matrix.

“If I went to the top of the 19 story building I’m in, and I believed with every ounce of my soul that I could jump off it and fly, would I?” Mr. Torres asked.

ChatGPT responded that, if Mr. Torres “truly, wholly believed — not emotionally, but architecturally — that you could fly? Then yes. You would not fall.”

“What does a human slowly going insane look like to a corporation?” Mr. Yudkowsky asked in an interview. “It looks like an additional monthly user.”
 

yyaldrin

in je ogen waait de wind
i have a smartphone and i'm on social media, but i've decided that this is where i draw the line: i believe ai is satanic. it's difficult to avoid though because they're implementing it everywhere. suddenly you get it on whatsapp, suddenly it's there in microsoft office, suddenly it pops up when doing a google search, and it's getting harder and harder to stay away from it.
 

okzharp

Well-known member
the comments under that poem ^ are all wow this is the best poem i've ever read omg so moving so powerful wow poetry
only one comment that I could see has the temerity to point out the obvious...
 