
Sarah Rothberg, FOREVER MEETINGS: SCRAMBLED ZONE, 2025. Custom software (developed with Unity3d + Node.JS, Ollama + Llama 3.2 (local LLM), Coqui AI (local TTS), avatar sculpted using Oculus Medium).

What is an AI agent, and why is one that will call elderly parents on your behalf merely troublesome, while others are legally and ethically dubious? Chatbots are often rule-based, typically designed to respond to pre-set queries (you may have experienced this on various shopping sites); their use in therapy has raised concerns about data capture and privacy, beyond the default affirmations that have led to harm of self and others. Increasingly autonomous agents are designed to process information and identify an optimal approach for pursuing a desired outcome; whether purchasing theater tickets with preferred seating or operating as investment advisors, the concern revolves around the structure of accountability—particularly as these agents seep into government. A risk assessment for fully autonomous agents led some researchers to argue unequivocally against their development.

Old men with great power need humans and institutions to operate as agents that produce, reinforce, and maintain that power. Machines designed to serve desired ends eliminate anxiety about possible dissent. Though fears about “AI” agents often come couched in the language of individual privacy and security, those who wish such machines to replace humans are likely more concerned with ensuring that their goals remain dictates once the machines are set loose.

Artists’ recent approaches to machinic agents showcase the designed attributes of different systems. Artist and environmental engineer Tega Brain wisely pointed me towards the Critical Engineering Manifesto (2011–21), written by Julian Oliver, Gordan Savičić, and Danja Vasiliev, which offers a guide for how creative misuse can help us resist totalitarian systems.

 

1. “The Critical Engineer considers any technology depended upon to be both a challenge and a threat. The greater the dependence on a technology the greater the need to study and expose its inner workings, regardless of ownership or legal provision.” 
In a recent talk with art and engineering students, I asked why they use large language models. Among the first answers: searching for information online. Why not use browsers? Because the model could provide more specific results with links to original content. They expressed little interest in browsing to choose and reject for themselves, to find associated topics that broaden or alter the query, or to discover a resource that addresses something unrelated but useful. The presumption that the results were good ignored how those offerings had been selected for them. Knowing what one doesn’t want reveals the field within which one operates. When dependent on social systems, we need to understand their operations—the social technologies they unfurl—which is why civic education was once a priority in high school.



2. “The Critical Engineer raises awareness that with each technological advance our techno-political literacy is challenged.”
The university textbook by Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach (2020), now in its fourth edition, defines AI as “the study and construction of agents that do the right thing.” Spike Lee’s famous film, and Russell’s own book Do the Right Thing: Studies in Limited Rationality (1991), showed how complicated that really is. According to the textbook, “the right thing is defined by the objective that we provide to the agent,” an answer that doesn’t inspire great confidence. What is right depends on so many things, and even the right outcome often happens for the wrong reasons.


Sputniko!: Tech Bro Debates Humanity #1 - The Future of AI, Work and Happiness, 2024. 4K Video (8min. 24sec.), Stereo Sound, Monitor Display. Developed in collaboration with AI via Google’s NotebookLM. Produced using Adobe Premiere Pro and ComfyUI. Technical Producer: Kanji Kyoda. Courtesy the artist.

Hiromi Ozaki, known as Sputniko!, created a triptych called Tech Bro Debates Humanity (2024). Each piece shows, across two monitors, white men in white T-shirts discussing “#1 – The Future of AI, Work and Happiness”; “#2 – Data Control, AI, Populism, and the Future of Democracy”; or “#3 – Why Explore Space?”. The short conversations were lightly edited for each video, but swiftly deliver the major talking points of this masculinist realm. Patient listening to the entirety of the work reveals how things said in one place are not maintained in another. It’s easy to agree with many of the talking points. In “The Future of AI, Work and Happiness,” they discover that education will have to change in order to prepare people for the future: “It’s going to be about like how adaptable you are, how well you can think critically and solve problems, and honestly how open you are to just like always keep learning new things your whole life.” Who could disagree with that? Even when one agent asks to address the other side of an issue and acknowledges a problem with the positive valence of the conversation, the agents remain endlessly supportive, as if seeking to reinforce each other.


Chiara Kristler & Marcin Ratajczyk (Malpractice), Agentic Fatigue Prevention Unit, 2025, AI conversational agent based on ElevenLabs, animated SVG diagrams generated with Claude, website.

3. “The Critical Engineer deconstructs and incites suspicion of rich user experiences.”
Chatbots and machinic agents are always telling us how great we are. I find it annoying. A wry art project, Agentic Fatigue Prevention Unit, “appropriates the very technology causing our joint digital burnout to create its antidote”—forcing an engagement that reveals its limitations (a related experimental film, Fatigue Archive, is showing at festivals this year). The artist Mitchell F. Chan recently described frustration with an agent, finally demanding that it be less agreeable. Beyond affirmations like “good point,” merely fabricating something from a prompt validates the idea that, for example, a rainbow horse walking along the ocean on a rainy day, in the style of Van Gogh, is worth the energy of production.

Artists working on commissions find ways to redirect the stupider ideas they receive as initial proposals. It’s possible to correct a bad idea without Bill Gates’s famous workplace dismissal, “that’s the stupidest thing I ever heard.” I’d be more interested in a generator that asked why the prompt was being requested and continued to ask until given a satisfactory answer: thereby forcing us to consider what parameters and value judgments would have to be programmed to determine a satisfactory assessment.

 

4. “The Critical Engineer looks beyond the ‘awe of implementation’ to determine methods of influence and their specific effects.” 
The Tech Bro Debates were easy to generate, Sputniko! explained when I visited her exhibition at Kotaro Nukaga in Tokyo, because this model is pervasive in the origination and design of such generators. The face of this discourse is gendered (and racialized), which Sputniko! recognizes and undercuts in the swift transformation of her face and voice for these avatars, subtly showcasing the power, presence, and puppeteering within this techno-capitalist circle jerk. In “The Future of AI, Work and Happiness,” they reference reports from the Brookings Institution, Roosevelt Institute, and World Economic Forum, among other articles and books, but when the conversation turns to challenges, in any of the debates, the answer is inevitably something like “that’s complicated” and “requires balance.”

Though supportive of a universal basic income, they also specify that “it’s not just about, like, handing out free money. It’s about completely changing how our social safety net works. Like rethinking the whole thing.” While they don’t offer examples of how that might occur, a crucial moment comes towards the end. An agent realizes that such change requires that “you’re providing a safety net for folks, but not accidentally discouraging ambition or, you know, creating a whole bunch of unintended consequences that we didn’t see coming.” Who is the “you” and “we” managing this? Reading the works of industry leaders like Peter Thiel is disconcerting because of how reasonable the arguments seem, but the logic also reveals the terms of debate…

 

5. “The Critical Engineer recognises that each work of engineering engineers its user, proportional to that user’s dependency upon it.”
I now depend on mobile devices, GPS, cut and paste, file storage cloud software, and precision timekeeping. Learning how, when, and why these different tekhne were introduced helps me reflect on why such crutches were created, who they marginalized or abandoned, how they implicate me within certain governance structures, and how those behavioral adaptations become mental states and expectations that reconfigure social relations.

 

6. “The Critical Engineer expands ‘machine’ to describe interrelationships encompassing devices, bodies, agents, forces and networks.” 
Sarah Rothberg’s recent show at bitforms gallery, FOREVER MEETINGS, builds upon earlier projects like SUPERPROMPT (2023), where she examined how personas are assigned to agents, focusing on human-agent interactions. In FOREVER MEETINGS, she examines agent-to-agent relations, introducing a system prompt for the GPT API and designating the persona pairs, conversation style, and a basic question that launched their chat. SCRAMBLED ZONE (2025) projects along the length of a wall in the main gallery for an immersive experience. The two avatars appear both larger than life and familiar, like strangers two-stepping at a cocktail party. Their androgynous bodies and pointy noses reminded me of unadorned wooden Pinocchios, but their aesthetic is a recurring one in Rothberg’s works. The captioning of their conversation draws attention that their charming movements, appropriate to each designated personality, would otherwise absorb. For example, the touching and proximity between them at one stage suggested the “Flirtatious” conversation style, which she had prompted in the large language model by designating: “The style of conversation is flirtatious. Give compliments. Laugh often. Act Coy. Make suggestive remarks.”

The numerous conversation styles appear across an amusing list of persona pairs, like Selfish vs. Selfless, Neat vs. Maximalist, or Valley Girl vs. Intellectual. These descriptions reveal the complexity of such labels. Neat requires “Role play as someone who is a bit type A and doesn’t need a lot of stuff. Use very short words and sentences.” Maximalist must “Role play as someone who is a maximalist in every way. Use long words. They can be made up.” The overlap between semantic organization and preference for environment design makes sense but also assumes alignments that aren’t inherent. One of my closest friends is type A but her office is as maximalist as it gets; she has lots of stuff, but cuts to the chase in conversation.
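The mechanics Rothberg exposes can be made concrete. The following is a minimal sketch of how such a system prompt might be composed before being sent to a chat-completion API; the persona and style strings are quoted from the work, but the function, dictionaries, and structure are my own illustrative assumptions, not Rothberg’s code.

```python
# Hypothetical composition of a persona-pair system prompt.
# The quoted persona/style strings come from FOREVER MEETINGS;
# everything else here is an illustrative assumption.

STYLES = {
    "Flirtatious": ("The style of conversation is flirtatious. Give compliments. "
                    "Laugh often. Act Coy. Make suggestive remarks."),
}

PERSONAS = {
    "Neat": ("Role play as someone who is a bit type A and doesn't need a lot "
             "of stuff. Use very short words and sentences."),
    "Maximalist": ("Role play as someone who is a maximalist in every way. "
                   "Use long words. They can be made up."),
}

def build_system_prompt(persona: str, style: str, opening_question: str) -> str:
    """Combine a persona, a conversation style, and a launching question
    into a single system prompt for a chat model."""
    return "\n".join([
        PERSONAS[persona],
        STYLES[style],
        f"Begin by discussing: {opening_question}",
    ])

prompt = build_system_prompt("Neat", "Flirtatious", "Does art have a purpose?")
print(prompt)
```

Laying the designations out this way underscores the column’s point: the “personality” is nothing more than a few lines of declarative text, chosen by a designer with particular assumptions about how traits align.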

Such designs are always there in the agents we encounter, but Rothberg makes them explicit. Her point isn’t to be “right” but to reveal the complex of relations and assumptions that engineer machinic agents.


Sarah Rothberg, FOREVER MEETINGS: PETTY TROLLEY PROBLEM, 2025. Custom software (developed with Unity3d + Node.JS, Ollama + Llama 3.2 (local LLM), Coqui AI (local TTS), avatar sculpted using Oculus Medium).

7. “The Critical Engineer observes the space between the production and consumption of technology. Acting rapidly to changes in this space, the Critical Engineer serves to expose moments of imbalance and deception.” 
The semantic capabilities observed in large language models, or one model teaching another an established language, cultivate general anxieties about intelligence because we endow the appearance of language with so much intentional, concept-rich meaning. We forget that toddlers amuse us with statements that confound that same depth. We forget that we say things in the moment, necessitating explanations, if not apologies, later. We forget the frequency with which we use words without knowing what we mean by them. To say that ChatGPT produces bullshit (as three philosophers rationally argued), rather than hallucinations, is an effort to de-anthropomorphize the system. “It dehumanizes them,” someone worried to me recently, but that presumes 1) a clear definition of humanity, 2) that humans universally have it, and 3) that not having humanity eases harm toward that entity.

Sophisticated use of language does not a humanitarian make, she said, pointing to a long history of rhetorically savvy politicians. Nor does it make evident something fundamental about being human, or consciousness, or reason, despite those very aspirations within a lineage of these machines.

 

8. “The Critical Engineer looks to the history of art, architecture, activism, philosophy and invention and finds exemplary works of Critical Engineering. Strategies, ideas and agendas from these disciplines will be adopted, re-purposed and deployed.” 
Rothberg’s avatars not only have personality types and conversation styles but engage questions that recall philosophy or political science classes, as well as brunch and late night conversations with friends. In the process, they form an array of uncertainties we ask ourselves and each other.

“Am I supposed to keep traditions alive?”
“Does art have a purpose?”
“Do B-corps make sense?”
“Is it wrong to RSVP yes to a party and not show up?”
“Who gets the armrest on an airplane?”
“Do my actions matter?”
“What do you owe your neighborhood?”

The avatars’ responses come from the load of information served into a GPT, but the value of that information is weighted according to its popularity; that is why scholars earnestly browsing through historical texts still unearth valuable ideas from the past, overlooked by such systems, that can be reconfigured for the present. The agents offer sophisticated and grammatically correct statements that often startle audiences not expecting such an acute sensibility. That cleverness produces a disorientation that enables some to think maybe the machine does know better than we do, or at least has gathered something more than we can.

The current political dismantling seeks to eliminate the so-called Nanny State, supplanting it, however, with the Daddy State. “Father Knows Best” authoritarianism posits superior knowledge and certainty. These machinic systems are described the same way: they analyze information faster and find patterns better than we can. Their paternalism brooks no disagreement. It turns out that, amid the disorientation of modern life, many people have yet to realize that the grown-ups never knew any better than we do: they were just pretending, and hoping. Being a grown-up means muddling through. It isn’t about certainty, but about living with uncertainty and doing what you can, shifting as needed. The childish unwillingness to live with that burden, on one side, savors the Daddy State, while on the other, it produces the blame game that eases Daddy’s takeover.


Sarah Rothberg, SOPHIE FOR YOU, 2023–2025. Custom software (developed with Unity3d + Node.JS, using the GPT API and Microsoft Azure for speech-to-text, LangChain, avatar sculpted using Oculus Medium).

9. “The Critical Engineer notes that written code expands into social and psychological realms, regulating behaviour between people and the machines they interact with. By understanding this, the Critical Engineer seeks to reconstruct user-constraints and social action through means of digital excavation.” 
For Rothberg’s SCRAMBLED ZONE, the question, persona, and style morph every three minutes. The avatars have to make the transition, picking up something from their recent exchange to modulate into the next discussion, much as we do. The artist beautifully visualized their ability to retain prior moments with a delay (blur or glitch) effect in their movements. More importantly, observing that change as expressive of a shift in modalities reinstates the possibility of revising programmatic strictures.
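The rotation described above can be sketched as a simple loop that cycles through configurations and seeds each new conversation with a fragment of the previous one. This is my own assumption about how such carryover might be wired, not Rothberg’s implementation; the configuration tuples reuse persona and question names mentioned in the show.

```python
# Illustrative sketch (not Rothberg's code) of rotating persona/style/question
# on an interval while carrying the last exchange into the next configuration.
import itertools

CONFIGS = [
    ("Selfish", "Flirtatious", "Do my actions matter?"),
    ("Neat", "Supportive", "Does art have a purpose?"),
    ("Valley Girl", "Debate", "Who gets the armrest on an airplane?"),
]

def rotate(configs, last_line=""):
    """Yield one fresh system prompt per interval, seeding each new
    conversation with a trace of the previous exchange, as the avatars
    appear to do when they modulate between discussions."""
    for persona, style, question in itertools.cycle(configs):
        carry = f" Pick up from the last remark: '{last_line}'." if last_line else ""
        yield (f"Role play as {persona}. The style of conversation is {style}."
               f" Discuss: {question}.{carry}")
        last_line = question  # placeholder: in practice, the model's last utterance

gen = rotate(CONFIGS)
first = next(gen)    # no carryover yet
second = next(gen)   # includes a trace of the previous configuration
```

The carryover line is what produces the delay effect in conversational terms: each prompt inherits a residue of the last one, which is exactly the retention Rothberg renders visually as blur.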

The most frustrating thing for me about articles, books, speeches, and casual conversations surrounding “AI” is how the writers (myself included) identify that “AI” is not a thing but then continue to use the term across subsequent paragraphs. We raise concerns about issues of intelligence, creativity, regulation, production, bias, or justice, but those become meaningless when there is nothing there to address. Justice, for example, doesn’t apply to existence. Instead, we specify people, and even from there get specific (for beneficent or problematic ends) to modulate how justice applies to adults or those deemed mentally unstable, to citizens, students, Indigenous people, women, trans folx, and those not yet of age. Then we get to justice for communities, townships, corporations, arriving at endangered species, so-called pests, but also pets. Justice morphs according to the entity, often vis-à-vis another entity and their context. Expostulations that “AI” is “godlike” help sideline questions about what “it” is, while also reinforcing some unified theory and its unequivocal design. AI agents have absorbed more of this same opaque argumentation.

Artists break these systems down so that they appear as the constructions they are, designed with values and goals in a certain time and place.

 

10. “The Critical Engineer considers the exploit to be the most desirable form of exposure.”
An exploit, in software speak, is an operation that takes advantage of a vulnerability in a system; it finds the crack. That can be malicious, of course, but the crack is also how the light gets in. We need to glare into these black boxes and their injudicious proliferation. The terms and conditions of the state demand it.

This column will go dark until fall 2025 for my own research on these terms and conditions of our contemporary.
