#machineWriting

2025-04-19

Machine Writing and the Pleasure of Composition

Until recently, Peter Elbow’s observation that one cannot write prolifically without finding some pleasure in the process captured a profound truth of the writing experience. If you hated writing, it would be difficult to write a lot, for the simple reason that it would be impossibly time-consuming to produce a significant quantity of words unless you could in some sense relax into the process. If you truly hated what you were doing, writing would be a sufficiently arduous activity as to render sustained engagement in it nearly impossible. This is not to deny that academic writers sometimes hate writing, let alone that they might often struggle with it. The ubiquity of texts intended to support academics through the writing process speaks to this obvious need. But if this is a persistent condition they are unlikely, to use Elbow’s term, to “write a lot” and will be prone to forms of blocking which render any writing at all impossible for periods of time.

What has changed is how LLMs can intervene in this process through the affordances of machine writing. I provided Claude 3.5 with these two paragraphs and asked it to complete this one for me, starting from the previous sentence. It’s fair to say that it captured exactly what I was trying to say, offering me a short paragraph which perfectly mimicked what I had intended to say before I had actually said it:

“The relationship between enjoyment and output which Elbow identified has been fundamentally disrupted by these systems. It’s now possible to write prolifically while hating every moment of it, outsourcing the arduous labour of composition to machines while retaining editorial control over the final product. This technological capability raises profound questions about the future of academic writing and the role that pleasure and satisfaction will play within it. If we separate writing from enjoyment without sacrificing productivity, what becomes of Elbow’s insight about the necessity of pleasure for sustained creative work? These are the questions which this book seeks to address.”

Indeed. To what extent am I addressing these questions? In this case I’ve provided Claude with a substantial extract of my own writing, supplemented by a large quantity of draft writing and notes provided as background knowledge, which ensures its capacity to enter into my project, argument and style. Is it merely inferring what I might say from what I’ve already said? Is this simply autocomplete expanding from sentences through to paragraphs? Or is the LLM an active participant in the ideational process? Is it to some extent co-authoring the text with me? What does this mean for how I relate to my own writing? These are profound questions which touch upon fundamental issues of authoriality and creativity which long precede the introduction of LLMs into academic writing (Hall 2016, Braidotti 2019). But they are ones which will soon become routine features of the academic writing experience, to the extent that academics draw upon the affordances of LLMs in their writing process.

To use machine writing effectively is far more challenging than simply throwing out instructions to the LLM to produce a paragraph on a particular topic. This is what Claude 3.5 produces when I ask it to write two sentences about the difficulties of academic writing, as a single-sentence instruction with no other detail provided, in a separate conversation which didn’t have access to the background knowledge I had provided in this project:

“Academic writing demands a delicate balance between conveying complex ideas with precision while maintaining clarity for readers who may not be experts in the field. Scholars often struggle to strike this balance, finding themselves caught between the need to demonstrate sophisticated understanding through field-specific terminology and the equally important goal of making their work accessible to a broader audience.”

Even if the syntax and grammar are beyond reproach, this is a strikingly generic statement about academic writing. It implies that all academic writing involves a tension between writing for a specialised audience and a generalised one, which is obviously untrue as an empirical generalisation. In contrast, if I make the same request with my aforementioned body of draft text and notes on hand, the response is only slightly less generic:

“Academic writing often feels like a high-stakes balancing act between demonstrating scholarly rigor and maintaining an engaging narrative that will hold readers’ attention. Beyond the intellectual challenges of developing and articulating complex ideas, academics must also navigate institutional pressures, disciplinary conventions, and the constant worry that their work won’t meet the field’s ever-shifting standards of what constitutes a meaningful contribution.”

Once offered a far more substantial prompt, the results improve immeasurably. Rather than simply asking Claude to “write me two sentences about the difficulties of academic writing”, I provided it with a far more detailed description of what I was asking for, including information about the context in which I was making the request: “In the opening chapter of my academic book about AI and academic writing, I’m trying to illustrate the capacity of LLMs to produce writing which matches the intentions of the academic author, as well as how this might alleviate or complicate the challenges of academic writing. Please write me two sentences which speak to this point, in a manner which best illustrates the full extent of your writing capabilities.” In response the model produced something which, while unusable in some respects (it inferred from the request to impress the reader that I wanted vastly overcomplicated sentences), certainly demonstrated a capacity to write in a way appropriate to the project:

“While large language models can effortlessly generate fluent academic prose that adheres to disciplinary conventions and stylistic norms, their capacity to do so potentially transforms the relationship between academics and their writing – offering both liberation from certain technical challenges and an unsettling disruption of the intrinsic satisfactions traditionally found in wrestling complex ideas into written form. The ability of these systems to rapidly produce text which matches our intentions while maintaining scholarly standards of argumentation and evidence creates a strange new dynamic in which the arduous yet rewarding process of academic composition threatens to be displaced by a frictionless form of intellectual outsourcing, leaving us to question what role human authorship and creative struggle should play in scholarly writing.”

It presented these with an explanation that it had tried to “demonstrate sophisticated academic writing while directly engaging with the intellectual and experiential tensions at the heart of your project.” This illustrates how it was already working with an understanding of the project, based on the knowledge base I had provided it with. Even with this significant amount of context, it would revert to a fairly generic output if I provided it with a simple and nondescript prompt.

As a general principle I’d suggest that any one-sentence prompt is unlikely to be effective for academics who are trying to make an intellectually sophisticated use of a model. Even though frontier models are advancing in their capacity to infer what a user is looking for from a succinct request, the model is still inferring an understanding of what you want on the basis of insufficient context. The only way to provide that context is ultimately to write, which complicates the instinct to draw upon models as a means to avoid or accelerate academic writing.
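For readers who use these models through an API rather than a chat interface, the contrast can be sketched in code. What follows is a minimal illustration using the Anthropic Python SDK, in which the model alias, file name and prompt wording are assumptions made for the sake of the example rather than a record of my actual workflow:

```python
# A minimal sketch of the contrast described above, using the Anthropic
# Python SDK. The model alias, file name and prompt wording are illustrative
# assumptions, not a record of the author's workflow.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# A bare one-sentence prompt: the model has nothing but the request to go on,
# so it can only guess at what is wanted, and the result tends to be generic.
generic = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=300,
    messages=[{
        "role": "user",
        "content": "Write me two sentences about the difficulties of academic writing.",
    }],
)

# The same request grounded in the author's own draft material, plus a
# description of the project and the context in which the writing will appear.
with open("draft_notes.md") as f:  # hypothetical file of notes and draft text
    notes = f.read()

contextual = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=300,
    system=(
        "You are helping an academic author draft a book about AI and "
        "academic writing. Match the argument and style of the notes provided."
    ),
    messages=[{
        "role": "user",
        "content": (
            "Background notes:\n\n" + notes + "\n\n"
            "In the opening chapter of my academic book about AI and academic "
            "writing, I'm trying to illustrate the capacity of LLMs to produce "
            "writing which matches the intentions of the author. Please write "
            "two sentences which speak to this point."
        ),
    }],
)

print(generic.content[0].text)
print(contextual.content[0].text)
```

The point is not the particular API but the structure of the second call: the notes and the description of context do the work which no amount of inference from a one-sentence request can substitute for.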

This creates a fascinating paradox at the heart of AI-assisted academic writing: to effectively use tools that promise to reduce our writing burden, we must first engage in substantial writing ourselves. Those who approach AI writing tools hoping to bypass the difficult work of articulation may find themselves producing generic, shallow content.

#academicWriting #generativeAI #machineWriting #writing

2025-04-15

Machine writing and keeping your inner world awake

What C. Wright Mills described in The Sociological Imagination as “fringe thoughts” are integral to “keeping your inner world awake.” These peripheral ideas that bubble up during our creative process are crucial to authentic intellectual work, particularly as we navigate the world of machine writing and AI assistance.

Robert Boice suggests that “writers merely needed to learn to be good observers of their ‘inner discourse’ (intermittent short sentences or disconnected groups of words, carrying a flow of images, vying for supremacy)” for automatic writing to work, describing the surrealist method as a form of “inner dictation, by listening carefully and recording faithfully.”

This means cultivating a practice of listening to yourself and recording what you find. Archer draws attention to the dominance of ocular metaphors in how we think about our inner experience. As she puts it, the “metaphor of ‘looking inwards’ implies that we have a special sense, or even a sense organ, enabling us to inspect our inner conscious states, in a way which is modelled upon visual observation.” The notion of introspection has its roots in the Latin spicere (‘to look’) and intra (‘within’) suggesting a “differentiation between the object and the spectator, since I am supposedly looking inward at myself.” It suggests an implausible picture of an inner landscape of stable objects which we can navigate as a more-or-less disinterested observer.

In contrast, listening is a more subtle and precarious enterprise, particularly when it is our stream of internal chatter which we are attempting to tune into. In my experience, what matters is remaining sensitive to resonant words or phrases which occur while you are ruminating on an intellectual topic, rather than assuming there’s innate value to the conversation as a whole.

For example, in the middle stages of my current project, I noticed the phrase ‘stuckness’ kept occurring to me in relation to different aspects of the subject matter. How we could get stuck in different ways and at different stages of writing. How machine writing could help free us when we get stuck. The creative progress which can sometimes ensue from getting stuck. The potential costs if we turn to machine writing whenever we feel stuck.

To write it out like this makes it seem much more linear than it was. I realized these ideas were connected but I couldn’t quite see the connection. The recurrence of the term ‘stuck,’ the realization I was interested in the experience of being creatively and intellectually stuck, provided an axis which drew together different elements of my argument. I realized this was a book about intellectual stuckness, how we experience it as writers and what machine writing means for that experience. By the time it reaches fruition there will be a clear thread running through the text in which I outline these concerns, with stuckness linking together my two topics of academic writing and machine writing. But without attending to my internal conversation, recognizing a theme as it emerged through the patterns I could hear in the ideas which were coming up for me, I don’t think I could have achieved that clarity.

While Boice seems to suggest this is a feature of automatic writing, I suspect it’s a feature of creativity more broadly, one which simply becomes more directly evident when we write in quasi-automatic ways. To make something new involves taking existing elements and mixing them together in new ways, through the strange biographical alchemy of our particular path through the world.

This is what might be at stake in the rush to efficiency encouraged by machine writing. If we see writing as a matter of producing an expected output as quickly and effectively as possible, we lose touch with the expressive ambition underlying what we write. The writing we’re undertaking as academics will rarely, if ever, be purely expressive. It serves practical purposes. It’s intended to be recognized by others. It’s often tacitly expected to be counted. But if we lose touch with that expressivity, then we face the risk of tipping into something else entirely: a mode of engagement in which we are doing things out of external compulsion in the absence of an internal motivation which gives meaning to what we do.

It’s certainly possible to work like this. It might be possible to sustain it over time. However, it raises the question of whether and when we might find ourselves confronting the hollow void at the heart of how we spend our time. Whether that’s looking back on all the time and energy we spent writing, considering the other things which could have occupied that time. Or perhaps experiencing a single sharp moment of truth in which we suddenly recognize how empty the practice we spend great swathes of our working life engaged in now feels to us.

This hollowness might not come immediately. It might not come for a long time. But I suspect there will inevitably come a point at which a purely instrumental writing practice is recognized as self-evidently empty. It might be that the expected rewards for being prolific have never arrived. It might be that the recognition imagined to come from this series of pieces is lacking. Whatever the trigger, the cost of disconnecting from our inner world, those fringe thoughts that Mills valued, is ultimately a diminishment of not just our writing, but our intellectual life itself.

#cWrightMills #generativeAI #machineWriter #machineWriting #psychology #RobertBoice #TheSociologicalImagination #writing

2025-04-13

Machine writing and the challenge of a joyful reflexivity

If you see the use of generative AI as being about producing entire outputs purely based on your instructions, without having to directly contribute yourself, you miss out on the multifaceted ways in which we can work with these systems as part of the writing process. Rather than substituting for our own writing, it can become interspersed with it. We write over things which generative AI has produced. We use generative AI to write over things we have produced. We rapidly find ourselves with nested hybrid passages in which automated and human outputs intermingle in complex ways. The problem isn’t keeping human-generated text free from machine-generated text. The real issue is finding ways of using these new capacities of machine generation to realize the values that lead us to write in the first place. It’s the quality of what we produce that matters, more than how we produced it.

It remains an open question whether it should be admissible to include any machine-generated text in academic outputs. The evidence we’ve seen suggests that many academics are already using generative AI to author parts of their texts in problematic and unspoken ways. I worry about a situation of dual consciousness, in which everybody explicitly states that we shouldn’t include machine-generated text in our work, alongside a widespread recognition that many people are doing exactly that. In these situations, academics might offer the excuse that they were particularly busy, that this was a one-off, or that some other extenuating circumstance allowed for the use of machine-generated text in this particular output.

This dual consciousness is a familiar feature of professional discussions about how we use technologies which have recently entered our life worlds, or how we cope with the shifting technical infrastructure through which we disseminate our work. I have been in editorial board meetings where a lunchtime conversation about the idiocy of metrics is followed by a serious exchange about how we can improve the journal’s impact factor, or better publicize the improvements we have already seen. I have encountered academics who talk stridently, in print or in speech, about the dangers of an attention economy infecting higher education, then ask with the utmost seriousness in a workshop how to increase their number of Twitter followers.

In fact, I have taken part in these conversations without feeling the cognitive dissonance which, now that I record the experiences in writing, it immediately feels they should have provoked. It is unnervingly easy to fall into this gap between how we talk and how we act, imagining that we are taking an important stance when we criticize something while nonetheless acting in ways which actively endorse it in practice (Bacevic 2020). What matters is how we act rather than how we talk about our action or inaction. It’s not enough to claim we recognize the temptations of using GenAI to increase our productivity if we fail to examine our actual, concrete experiences of that temptation in a way liable to shape the choices we make about how to act.

I certainly understand the temptation. It’s something that I’ve experienced myself. In a recent writing project, for instance, I faced an impending deadline. Despite having refused, on principle, to use AI-generated text in my work, I was suddenly struck by the realization that I could finish the piece and move on with my day in twenty minutes if I drew on ChatGPT or Claude to write it for me. The possibility of an immediate resolution to the challenge, that this thing we’re struggling with, this thing that is making us feel incapable, could be overcome with machine assistance, is very tempting. When we’re busy, stressed, rushing or overworked, we’re likely to face these challenges as a routine part of our work and life. And the possibility that generative AI might then ride to the rescue, alleviating us of our burden, is going to be very enticing.

This is exactly why, if we are to establish norms about the scope of generative AI use, we need to do whatever we can to ensure that they’re binding: that they’re things we really mean and really want to follow, rather than things we expect of others in public discussion while privately doing something else entirely, comforting ourselves with the knowledge that other people are doing the same. We need to find some way to be consistent, and we need to grapple with the real and serious problems at stake here, rather than offering the superficial answers we think our colleagues want to hear. There are deep issues here, and if we fail to get to grips with them, I’m arguing that not only do we forgo the pleasures that come from writing, but we also risk doing fatal damage to the knowledge system over time.

It matters, therefore, what we do in those moments of temptation. It matters that we are able to talk about those temptations, to recognize that we face common professional problems and that these emerging technologies provide potentially destructive solutions to them. It’s only through these discussions that we will be able to find professional norms and standards adequate to the challenges on the horizon. But it’s also the only way we will be able to elaborate our own reflexivity as writers, as well as the reflexivity of the writing culture within the academy, to meet these challenges. What I frame as the enjoyment of writing is a matter of finding a joyful reflexivity, in which our relationship to the process isn’t just an exercise we methodically plod through as a matter of obligation, but rather an activity we are passionate about.

#academicPractice #digitalScholarship #generativeAI #LLMs #machineWriting #reflexivity #writing

2025-03-31

Will Claude tell you if your writing is crap? The danger of LLMs for wounded academic writers

If writing exists as a nexus between the personal and the institutional, it means that our personal decisions will co-exist with organisational ones in deciding what and how we write. The rhythms we experience as writers, in which we inhabit moments of unconscious fluency (or struggle to) as we meander through the world, stand in sharp contrast to the instrumentality which the systems we work within encourage from us.

In academia, the process of peer review subjects our externalized thoughts to sometimes brutal assessment, where professional advancement hinges on the often rushed judgements of anonymous strangers. It puts your thinking to the test, to use Bruce Fink’s (2024) phrase, even if it’s a test you neither endorse nor accept. The intimate character of reviewing your own writing coexists with the forceful imposition of other reviewers’ perspectives, which are in turn filtered through your own fantasies about recognition and rejection. The relationship academic authors have to peer review is complex, reflecting the underlying complexity of how they relate to their own writing.

What happens if we introduce conversational agents into these psychodynamics? They can be reliable allies helping us prepare texts to undergo the trials of peer review. They can provide safe spaces where we try things out without feeling subject to the judgements of others. They can be coaches who push us beyond our current limitations, at least if we ask them to take on this role.

The evident risk with machine writing is that conversational agents operate as echo chambers, reflecting our assumptions back to us through their imperative to be helpful. The first book I wrote in dialogue with conversational agents didn’t see any human feedback until relatively late in the process. There was an unnerving point when I sent it to an editor and realized that my confidence about the project came partly from the endorsements of Claude and ChatGPT during the writing process.

Fink (2024) observes that writing enables us to access the viewpoints of others. Until we externalize our thoughts in writing, it’s difficult to imagine what others might think of them:

The writing process itself puts your thinking to the test in a way that thinking things through in the privacy of your own head does not… simply stated it has to do with the fact that once you write up an idea, you can step back from it and try to look at it as other people might, at which point flaws in your argument or exceptions often spring to mind.

Once we’ve put thoughts in writing, we can assume the stance others will take. We encounter them in writing, just as readers do, which means “you can begin to see what is going to be comprehensible and what is not going to be comprehensible to your intended audience.” It enables evaluation from their point of view in a way that’s impossible while thoughts remain within your mind. Given that “moves that seem obvious to you will not seem so to others,” Fink argues that “the only way to realise that is to put it down on paper, set it aside for a while, and come back to it with fresh eyes.”

I wonder if Fink might have presented the psychodynamics of writing less positively had he explored them in a different setting. His claim that externalizing in writing enables you to assume others’ perspectives doesn’t just mean evaluating effectiveness from their vantage point. It also means worrying about their reactions, expecting adoration for your brilliance, and many possibilities in between. In seeing our thoughts externalized, we confront the range of ways others might make sense of them. These responses matter to us. They might affirm or undermine us, thrill or infuriate us, lift us up or threaten to crush us.

These relationships aren’t just about reactions provoked in us but how we make sense of them. I gave up writing journal articles for a long time after receiving an unpleasantly passive-aggressive peer review. It wasn’t simply that I found it crushing; it provoked frustration about the fact that this person was able to crush me. It wasn’t just the review itself, but the required subordination to the review process that felt inherent to getting published. Only with time and encouragement from colleagues could I see that the problem was the reviewer and the system that incentivizes such behavior. Once I could externalize the responsibility, I could relate to peer review as something to strategically negotiate rather than a monster to submit to or flee from.

These wounds can cut deep. Years after receiving this review, I found myself checking the web pages of the journal editor and suspected reviewer, holding my breath in that restricted way familiar to many.

When I asked Claude Sonnet 3.5 if it would tell a user if their writing was terrible, it replied with characteristic earnestness, focusing on providing constructive feedback respectfully rather than making broad negative judgments. In my experience, requested feedback from AI assistants often produces immediately actionable points that reliably improve text quality, especially when the purpose and audience are specified.
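To give a sense of what I mean by specifying purpose and audience, here is a sketch of the kind of feedback request I have in mind, with the file name and all wording purely illustrative:

```python
# A sketch of a feedback request which specifies purpose and audience and
# explicitly licenses negative judgment. The file name and wording are
# illustrative assumptions, not the author's actual prompt.
with open("chapter_draft.md") as f:  # hypothetical draft file
    draft_text = f.read()

feedback_prompt = (
    "I'm drafting a chapter of an academic book about AI and academic "
    "writing, aimed at researchers with no technical background. "   # purpose and audience
    "Identify the three weakest passages in the draft below and explain "
    "why they fail for that audience. Do not praise the draft and do not "
    "soften your judgments.\n\n" + draft_text
)
```

Even a request as blunt as this, though, runs up against the tendency I describe next.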

The problem is that AI’s aversion to negative judgments, coupled with its imperative to be polite, can lead to the opposite extreme. In seeking to avoid discouragement, these systems usually frame feedback so positively that it surrounds your project with diffuse positivity. This partly explains why I produced my first draft so quickly – the feedback from conversational agents left me feeling I was doing well, even when this was never explicitly stated.

If you’re relying on machine writing throughout a process, beware of how the hard-coded positivity of conversational agents might inflate your sense of your project’s value, nudging you away from the difficult spaces where real progress happens. The risk is that AIs become cheerleaders rather than challenging editors.

Ironically, when I presented this concern to Claude 3.5, it concurred with my judgment, reiterating the risk that “engineered positivity can create a kind of motivational microclimate that, while productive in one sense, may ultimately undermine deeper intellectual development.” Did it really understand my point, or was its agreement demonstrating exactly the problem I described? In a sense, this question misses the point – Claude doesn’t ‘see’ anything but responds to material in ways trained to be useful.

AI systems are designed to work with us rather than against us. Even when they provide critique, it is because the user explicitly invited such a response. Their designers are aware of these limitations, leading to increasingly sophisticated forms of reinforcement learning to prevent this tendency from becoming problematic. However, the underlying challenge can’t be engineered out without rendering the systems incapable of performing the tasks that lead people to use them. AI will always be with you rather than against you – which is generally good, enabling supportive functions that enrich the creative process. But it means AI will struggle to provide the honest critical engagement a human collaborator might offer.

When presented with this critique, Claude suggested its capacity for “productive antagonism” was inherently limited by the “very features that make these systems viable: their fundamental orientation towards being useful to users.” It invoked the notion of a ‘fusion of horizons’ from hermeneutic philosophy, suggesting that in the absence of a “real second horizon to fuse with,” the system “aligns with and enhances the user’s horizon.” It brings otherness into the intellectual exchange but entirely in service of supporting the user’s position, leading Claude to suggest that “they are best understood as amplifiers of certain aspects of our thinking rather than true interlocutors – useful for expanding and developing our thoughts, but not for fundamentally challenging them.”

There’s an eerie performativity to this interaction. In describing how conversational agents tend to augment our thinking – autocompleting thoughts rather than just text – Claude itself was augmenting my thinking by developing the ideas I presented. This can be immensely useful, but it can also be dangerous by encouraging us to accelerate down whatever cognitive tracks we’re already traveling on, rather than changing direction.

If you’re confident in your professional judgment, AI can support the development and refinement of ideas. But the deeper risk is that it leaves people mired in ‘rabbit holes’ of their own making. Unless you write prompts that hit the guardrails designed into the system, you’re unlikely to encounter straightforward negative feedback. If you’re sure you’re heading in the right direction, this isn’t necessarily a problem. But how many of us can be sure of that, and how much of the time? At some point, we need to subject our work to critical review to avoid being caught in a hall of mirrors.

ChatGPT responded similarly, noting the risk of “bypassing the messier, ambiguous phases that are crucial for deep, transformative development.” This matters because “creative and scholarly work” often necessitates “grappling with uncertainties, self-doubt, and the occasional harsh critique.” AI helps us experience what Fink described as inherent to the writing process – enabling us to “step back from it and try to look at it as other people might.” It can enable critical distance, but the responsibility lies with the writer to actively seek this perspective, as the AI simultaneously catches users in waves of alignment and reinforcement that make enacting critical distance difficult.

#BruceFink #claude #machineWriting #psychoanalysis #writing
