#DoomLoops

2025-06-09

The doom loops of generative AI

This is a Claude Opus 4 summary of my recent slide decks and draft writing, produced to help me better understand what I'm trying to say with the notion of 'doom loops'.

The concept of a ‘doom loop’ captures a particularly pernicious form of collective action problem emerging from digital transformation. While individual actors make rational decisions to adopt new technologies to solve immediate problems, these decisions aggregate into systemic changes that worsen the very conditions they were meant to address. This dynamic is becoming increasingly visible in professional contexts where generative AI is being rapidly adopted.

Defining the Doom Loop

A doom loop operates through four interconnected mechanisms:

1. Individual Rationality/Collective Irrationality: Faced with competitive pressures, individuals adopt technological solutions that provide immediate advantages. An academic using ChatGPT to increase publication output acts rationally given tenure requirements and job market pressures. However, when this behavior scales across the profession, it ratchets up productivity expectations for everyone, creating an arms race that leaves all participants worse off.

2. Temporal Displacement: The benefits of adoption are immediate and tangible (reduced workload, increased output), while the costs are delayed and diffuse (degraded professional standards, automated replacement). This temporal structure makes it nearly impossible to resist adoption, even when actors understand the long-term consequences.

3. Infrastructure Capture: The platforms and tools that might enable collective resistance become part of the acceleration. Social media platforms that could facilitate professional organization are simultaneously the training data for AI systems. The infrastructure of communication becomes the infrastructure of replacement.

4. Legitimacy Erosion: As automated systems take over core professional functions, they undermine the basis for professional authority. When AI can produce academic papers or make diagnostic decisions, it becomes harder to justify why human judgment remains necessary. The profession's adoption of these tools paradoxically validates their eventual replacement.

The Unbundling Dynamic

The doom loop operates through what we might call ‘functional unbundling’. Rather than wholesale replacement of workers, roles are decomposed into discrete functions, with the ‘routine’ elements automated while humans manage the systems. This appears to preserve employment while fundamentally transforming its nature.

An academic’s role traditionally bundled together research, teaching, mentoring, and service. AI enables these to be separated: automated grading systems, chatbot advisors, AI-generated lecture content. The academic becomes a quality controller rather than an educator. This isn’t efficiency—it’s the systematic stripping away of what Pasquale calls “specifically human powers.”

Acceleration Through Crisis

Financial crises accelerate doom loops by making short-term solutions irresistible. UK universities facing budget constraints see AI as a path to maintaining operations with fewer staff. The 92 institutions currently implementing redundancy programs create perfect conditions for this dynamic: urgent financial pressure meets technological solutionism.

This creates a ratchet effect. Once some institutions adopt AI to cut costs, competitive pressures force others to follow. The baseline shifts, and what was once unthinkable (automated essay grading, AI teaching assistants) becomes standard practice. Each crisis becomes an opportunity to further entrench automated systems.
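The ratchet dynamic described above can be sketched as a toy simulation. Everything here is illustrative assumption rather than anything from the argument itself: a handful of early adopters raise the field's average output, everyone below that rising baseline faces pressure to adopt, and each adoption pushes the baseline higher still. The point the sketch makes is that adoption spreads to (nearly) everyone while no one's relative standing improves.

```python
import random

def simulate_ratchet(n_agents=50, rounds=30, gain=1.5, seed=0):
    """Toy model of the adoption ratchet (parameters are arbitrary).

    Each round, agents compare their output to the field average.
    Falling behind creates pressure to adopt; every adoption raises
    the baseline that everyone else is measured against.
    """
    rng = random.Random(seed)
    adopted = [i < 3 for i in range(n_agents)]        # a few early adopters
    output = [gain if a else 1.0 for a in adopted]
    history = []
    for _ in range(rounds):
        baseline = sum(output) / n_agents             # expectations track the average
        for i in range(n_agents):
            if not adopted[i] and output[i] < baseline:
                if rng.random() < 0.5:                # rational response to pressure
                    adopted[i] = True
                    output[i] = gain
        history.append(sum(adopted) / n_agents)
    return history

share = simulate_ratchet()
```

The adoption share only ever rises, and once any agent adopts, every holdout sits below the baseline indefinitely: the "unthinkable" becomes the floor. Individual relative position ends where it started, which is the collective-irrationality half of the loop.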

The Collective Action Paradox

Traditional collective action problems assume actors could coordinate if transaction costs were low enough. The doom loop presents a crueler paradox: the tools that lower coordination costs are themselves part of the problem. Professional communities might organize on LinkedIn or Twitter, but these platforms are training data for the systems that will replace them.

Moreover, the secrecy surrounding AI adoption—Mollick’s “secret cyborgs”—prevents even basic coordination. Without transparency about who is using what tools and how, professional communities cannot develop coherent responses. The shame and uncertainty around AI use atomizes potential resistance.

Breaking the Loop: Theoretical Requirements

Escaping a doom loop requires more than individual resistance or better policies. It demands:

1. Temporal Reframing: Making long-term costs visible and immediate. This might involve professional bodies creating metrics that capture human value rather than just productivity.

2. Collective Standards: Moving beyond individual ethics to professional norms that shape the boundary between human and machine decision-making. This isn’t luddism but careful delineation of where human judgment remains irreducible.

3. Infrastructure Alternatives: Building communication and organization systems that aren’t simultaneously feeding the replacement machinery. This might mean returning to older forms of professional organization or creating new platforms with different logics.

4. Value Articulation: Explicitly theorizing and defending what makes human professional judgment valuable beyond its functional outputs. This means moving past efficiency arguments to questions of meaning, responsibility, and social value.

Conclusion

The doom loop framework reveals how technological transformation can create self-reinforcing cycles of degradation even when each individual decision appears rational. Understanding these dynamics is essential for professions grappling with AI adoption. The choice isn't whether to embrace or reject these technologies, but how our collective responses shape the futures we create.

The millions of small decisions Pasquale identifies aren’t just about individual tool use—they’re about whether we allow efficiency logics to determine professional futures or insist on preserving space for human judgment, creativity, and meaning. The doom loop isn’t inevitable, but breaking it requires seeing beyond our individual circumstances to the collective dynamics we’re creating.

#diffusion #doomLoops #LLMs #organisation #organisationalSociology #technology

