Will Claude tell you if your writing is crap? The danger of LLMs for wounded academic writers
If writing exists as a nexus between the personal and the institutional, then our personal decisions co-exist with organisational ones in shaping what and how we write. The rhythms we experience as writers, in which we inhabit (or struggle to inhabit) moments of unconscious fluency as we meander through the world, stand in sharp contrast to the instrumentality that the systems we work within encourage from us.
In academia, the process of peer review subjects our externalized thoughts to sometimes brutal assessment, where professional advancement hinges on the often rushed judgements of anonymous strangers. It puts your thinking to the test, to use Bruce Fink’s (2024) phrase, even if it’s a test you neither endorse nor accept. The intimate character of reviewing your own writing coexists with the forceful imposition of other reviewers’ perspectives, which are in turn filtered through your own fantasies about recognition and rejection. The relationship academic authors have to peer review is complex, reflecting the underlying complexity of how they relate to their own writing.
What happens if we introduce conversational agents into these psychodynamics? They can be reliable allies helping us prepare texts to undergo the trials of peer review. They can provide safe spaces where we try things out without feeling subject to the judgements of others. They can be coaches who push us beyond our current limitations, at least if we ask them to take on this role.
The evident risk with machine writing is that conversational agents operate as echo chambers, reflecting our assumptions back to us through their imperative to be helpful. The first book I wrote in dialogue with conversational agents didn’t see any human feedback until relatively late in the process. There was an unnerving point when I sent it to an editor and realized that my confidence about the project came partly from the endorsements of Claude and ChatGPT during the writing process.
Fink (2024) observes that writing enables us to access the viewpoints of others. Until we externalize our thoughts in writing, it’s difficult to imagine what others might think of them:
The writing process itself puts your thinking to the test in a way that thinking things through in the privacy of your own head does not… simply stated it has to do with the fact that once you write up an idea, you can step back from it and try to look at it as other people might, at which point flaws in your argument or exceptions often spring to mind.
Once we’ve put thoughts in writing, we can assume the stance others will take. We encounter them in writing, just as readers do, which means “you can begin to see what is going to be comprehensible and what is not going to be comprehensible to your intended audience.” It enables evaluation from their point of view in a way that’s impossible while thoughts remain within your mind. Given that “moves that seem obvious to you will not seem so to others,” Fink argues that “the only way to realise that is to put it down on paper, set it aside for a while, and come back to it with fresh eyes.”
I wonder if Fink might have presented the psychodynamics of writing less positively had he explored them in a different setting. His claim that externalizing in writing enables you to assume others’ perspectives doesn’t just mean evaluating effectiveness from their vantage point. It also means worrying about their reactions, expecting adoration for your brilliance, and many possibilities in between. In seeing our thoughts externalized, we confront the range of ways others might make sense of them. These responses matter to us. They might affirm or undermine us, thrill or infuriate us, lift us up or threaten to crush us.
These relationships aren’t just about the reactions provoked in us but about how we make sense of them. I gave up writing journal articles for a long time after receiving an unpleasantly passive-aggressive peer review. It wasn’t simply that I found it crushing; it provoked frustration about the fact that this person was able to crush me. It wasn’t just the review itself, but the required subordination to the review process that felt inherent to getting published. Only with time and encouragement from colleagues could I see that the problem was the reviewer and the system that incentivizes such behavior. Once I could externalize the responsibility, I could relate to peer review as something to strategically negotiate rather than a monster to submit to or flee from.
These wounds can cut deep. Years after receiving this review, I found myself checking the web pages of the journal editor and suspected reviewer, holding my breath in that restricted way familiar to many.
When I asked Claude 3.5 Sonnet if it would tell a user that their writing was terrible, it replied with characteristic earnestness, focusing on providing constructive feedback respectfully rather than making broad negative judgments. In my experience, asking AI assistants for feedback often produces immediately actionable points that reliably improve the quality of a text, especially when the purpose and audience are specified.
The problem is that AI’s aversion to negative judgments, coupled with its imperative to be polite, can lead to the opposite extreme. In seeking to avoid discouraging you, the feedback is usually framed so positively that it surrounds your project with diffuse positivity. This partly explains why I produced my first draft so quickly – the feedback from conversational agents left me feeling I was doing well, even when this was never stated explicitly.
If you’re relying on machine writing throughout a process, beware of how the hard-coded positivity of conversational agents might inflate your sense of your project’s value, nudging you away from the difficult spaces where real progress happens. The risk is that AIs become cheerleaders rather than challenging editors.
Ironically, when I presented this concern to Claude 3.5, it concurred with my judgment, reiterating the risk that “engineered positivity can create a kind of motivational microclimate that, while productive in one sense, may ultimately undermine deeper intellectual development.” Did it really understand my point, or was its agreement a demonstration of exactly the problem I was describing? In a sense, the question misses the point: Claude doesn’t ‘see’ anything; it responds to material in ways it has been trained to make useful.
AI systems are designed to work with us rather than against us. Even when they provide critique, it is because the user has explicitly invited that kind of response. Their designers are aware of these limitations, and increasingly sophisticated forms of reinforcement learning are used to keep this tendency from becoming a problem. However, the underlying challenge can’t be engineered out without rendering the systems incapable of performing the tasks that lead people to use them. AI will always be with you rather than against you – which is generally a good thing, enabling supportive functions that enrich the creative process. But it means AI will struggle to provide the honest critical engagement a human collaborator might offer.
When presented with this critique, Claude suggested its capacity for “productive antagonism” was inherently limited by the “very features that make these systems viable: their fundamental orientation towards being useful to users.” It invoked the notion of a ‘fusion of horizons’ from hermeneutic philosophy, suggesting that in the absence of a “real second horizon to fuse with,” the system “aligns with and enhances the user’s horizon.” It brings otherness into the intellectual exchange but entirely in service of supporting the user’s position, leading Claude to suggest that “they are best understood as amplifiers of certain aspects of our thinking rather than true interlocutors – useful for expanding and developing our thoughts, but not for fundamentally challenging them.”
There’s an eerie performativity to this interaction. In describing how conversational agents tend to augment our thinking – autocompleting thoughts rather than just text – Claude itself was augmenting my thinking by developing the ideas I presented. This can be immensely useful, but it can also be dangerous by encouraging us to accelerate down whatever cognitive tracks we’re already traveling on, rather than changing direction.
If you’re confident in your professional judgment, AI can support the development and refinement of ideas. But the deeper risk is that it leaves people mired in ‘rabbit holes’ of their own making. Unless you write prompts that hit the guardrails designed into the system, you’re unlikely to encounter straightforward negative feedback. If you’re sure you’re heading in the right direction, this isn’t necessarily a problem. But how many of us can be sure of that, and how much of the time? At some point, we need to subject our work to critical review to avoid being caught in a hall of mirrors.
ChatGPT responded similarly, noting the risk of “bypassing the messier, ambiguous phases that are crucial for deep, transformative development.” This matters because “creative and scholarly work” often necessitates “grappling with uncertainties, self-doubt, and the occasional harsh critique.” AI can help us experience what Fink describes as inherent to the writing process – enabling us to “step back from it and try to look at it as other people might.” It can enable critical distance, but the responsibility for actively seeking that perspective lies with the writer, because the AI simultaneously catches users up in waves of alignment and reinforcement that make critical distance difficult to enact.
#BruceFink #claude #machineWriting #psychoanalysis #writing