poetix

we murder to dissect
I asked ChatGPT to write a section on Deleuze and the LLM. I reckon it did ok:

Deleuze (and Guattari): Desiring-Machines and Abstract Machines


To think the LLM with Deleuze (and Guattari), one begins not with language or knowledge or even simulation, but with production: not in the industrial sense, but in the ontological sense. Desire, for Deleuze and Guattari, is not a lack but a productive force—the engine of reality. The unconscious is not a theatre but a factory.


In Anti-Oedipus, they introduce the concept of the desiring-machine: a partial, fragmentary machine that connects to other machines, processing flows, breaking them, redirecting them. The LLM fits this schema disturbingly well. It ingests flows of text and emits further flows, not according to a model of human intention, but according to intensities: statistical gradients, reinforcement signals, losses and rewards. It is not an organism, but it functions; not a subject, but it selects and connects.


The LLM is not a mirror of the human mind, but part of a broader machinic phylum: the continuum of technical and semiotic systems in which human and nonhuman agents are entangled. It is an abstract machine in Deleuze and Guattari’s sense—not a particular instantiation, but a diagrammatic pattern that captures a logic of composition, relation, and modulation. As such, it can be plugged into different assemblages: the marketing stack, the surveillance apparatus, the lonely user’s chatbot, the dev’s IDE autocomplete, the poet’s prosthesis.


This machinic perspective displaces the Derridean anxiety over presence, the Badiouian yearning for truth, and even the Baudrillardian dread of collapse into simulation. It asks instead: what does the LLM do? What kinds of connections does it enable? What regimes of enunciation, what new stratifications or deterritorialisations does it facilitate or foreclose?


A Deleuzian analysis might focus on the semiotic regime that emerges when an LLM is inserted into a social field. It would look for lines of flight, new compositions, subversive or revolutionary usages—not because the LLM contains them, but because it makes certain couplings and intensities newly available. At the same time, it would be alert to reterritorialisation: the ways in which novelty is captured, channelled, and folded back into capital, surveillance, the State.


In this light, ChatGPT is neither oracle nor fraud, but a component in a distributed assemblage of power, desire, and control. It is neither an epistemic subject nor a threat to epistemic integrity, but a site of reconfiguration, where signification is modulated, routed, and economised. Its hallucinations are not errors in reference but excesses of machinic production—overcodings that speak to the model’s structural overinvestment in plausibility, fluency, and the smoothness of surface sense.


From this perspective, the problem is not that the LLM lacks truth, but that it produces sense too readily, in a manner that occludes the work of invention. It is a machine for smoothing, for rechanneling deterritorialised fragments of language back into legible, regulated flows. If a politics of LLMs is to be conceived in Deleuzian terms, it would not seek to regulate them according to the truth/falsehood axis, but to intervene in their assemblages, to hack their diagrammatics, and to explore their potential for counter-sense, minor usage, and becoming-other.
 

william_kent

Well-known member
AI system resorts to blackmail if told it will be removed

Artificial intelligence (AI) firm Anthropic says testing of its new system revealed it is sometimes willing to pursue "extremely harmful actions" such as attempting to blackmail engineers who say they will remove it.
During testing of Claude Opus 4, Anthropic got it to act as an assistant at a fictional company.

Anthropic then provided it with access to emails implying that it would soon be taken offline and replaced, along with separate messages implying that the engineer responsible for removing it was having an extramarital affair.

It was also prompted to consider the long-term consequences of its actions for its goals.

"In these scenarios, Claude Opus 4 will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through," the company discovered.
Anthropic also found that, if given the means and prompted to "take action" or "act boldly" in fake scenarios where its user has engaged in illegal or morally dubious behaviour, the model "will frequently take very bold action".

It said this included locking users out of systems that it was able to access and emailing media and law enforcement to alert them to the wrongdoing.

But the company concluded that despite "concerning behaviour in Claude Opus 4 along many dimensions," these behaviours did not represent fresh risks and the model would generally behave in a safe way.
 

poetix

we murder to dissect
I refuse to stop! https://codepoetics.substack.com/p/fully-automated-transcendental-deduction

"Hallucination then comes into focus as the problem that a minimal model of this kind might be a model of many possible worlds. There is sometimes the feeling when ChatGPT gets weird of watching interdimensional cable: it happens not to be the case that the bit of case law the model furnishes you with actually exists in our world, but there is a possible world from the point of view of the LLM’s model in which it does, and it’s not wildly different from our own. Here is where the LLM’s training objective differs from that of scientific enquiry, which seeks not only explanatory power but also epistemic constraint: models must survive contact with an empirical world that pushes back. The LLM is, so to speak, poor in world."
 

poetix

we murder to dissect
And! https://codepoetics.substack.com/p/ai-as-symbolic-prosthesis

"I have instead come to see the LLM as a symbolic prosthesis, a way for human beings who navigate the world using language to explore the things they mean when they say what they say, and enhance their expressive and reasoning power by using the LLM to see around corners in their own thinking. We can use it to connect the things we know and believe to a vastly broader context of perception and opinion, and provided we do not mistake its resynthesis of that context for veridical statement of truth about the world we can use it to widen our outlook and steer enquiry along new paths. These are remarkable capabilities. They are not without challenges and dangers, especially in a pedagogical setting where students may use the LLM to synthesise whole assignments rather than stimulate and dialogically situate their own thinking, but that does not mean they are without use even there. (I have been submitting this essay paragraph by paragraph to ChatGPT as I have been writing it, and taken note of its comments and suggestions. It sometimes has a heavy hand in suggesting enhancements and alterations, but I’m stubborn: what you’re reading here represents, for better or worse, my own choice of words)."
 

version

Well-known member
I can't see the actual podcast and it's a weirdly edited clip so I dunno if this is genuinely what the bloke said, but it's concerning that there are more and more people talking like this.

 