Machine Writing and the Pleasure of Composition
Until recently, Peter Elbow’s observation that one cannot write prolifically without finding some pleasure in the process captured a profound truth of the writing experience. If you hated writing it would be difficult to write a lot, for the simple reason that producing a significant quantity of words is impossibly time-consuming unless you can in some sense relax into the process. If you truly hated what you were doing, writing would be so arduous an activity as to render sustained engagement in it nearly impossible. This is not to deny that academic writers sometimes hate writing, or that they often struggle with it. The ubiquity of texts intended to support academics through the writing process speaks to this obvious need. But if this is a persistent condition, they are unlikely, to use Elbow’s term, to “write a lot” and will be prone to forms of blocking which render any writing at all impossible for periods of time.
What has changed is how LLMs can intervene in this process through the affordances of machine writing. I provided Claude 3.5 with these two paragraphs and asked it to complete this paragraph for me, starting from the previous sentence. It’s fair to say that it captured my intention precisely, offering me a short paragraph which mimicked what I had intended to say before I had actually said it:
“The relationship between enjoyment and output which Elbow identified has been fundamentally disrupted by these systems. It’s now possible to write prolifically while hating every moment of it, outsourcing the arduous labour of composition to machines while retaining editorial control over the final product. This technological capability raises profound questions about the future of academic writing and the role that pleasure and satisfaction will play within it. If we separate writing from enjoyment without sacrificing productivity, what becomes of Elbow’s insight about the necessity of pleasure for sustained creative work? These are the questions which this book seeks to address.”
Indeed. To what extent am I addressing these questions? In this case I’ve provided Claude with a substantial extract of my own writing, supplemented by a large quantity of draft writing and notes offered as background knowledge, ensuring its capacity to enter into my project, argument and style. Is it merely inferring what I might say from what I’ve already said? Is this simply autocomplete, expanding from sentences through to paragraphs? Or is the LLM an active participant in the ideational process? Is it to some extent co-authoring the text with me? What does this mean for how I relate to my own writing? These are profound questions which touch upon fundamental issues of authoriality and creativity which long precede the introduction of LLMs into academic writing (Hall 2016, Braidotti 2019). But they will soon become routine features of the academic writing experience, to the extent that academics draw upon the affordances of LLMs in their writing process.
Using machine writing effectively is far more challenging than simply throwing instructions at an LLM to produce a paragraph on a particular topic. This is what Claude 3.5 produced when I asked it to write two sentences about the difficulties of academic writing, as a single-sentence instruction with no other detail provided, in a separate conversation which didn’t have access to the background knowledge I had provided in this project:
“Academic writing demands a delicate balance between conveying complex ideas with precision while maintaining clarity for readers who may not be experts in the field. Scholars often struggle to strike this balance, finding themselves caught between the need to demonstrate sophisticated understanding through field-specific terminology and the equally important goal of making their work accessible to a broader audience.”
Even if the syntax and grammar are beyond reproach, this is a strikingly generic statement about academic writing. It implies that all academic writing involves a tension between writing for a specialised audience and a general one, which is obviously untrue as an empirical generalisation. In contrast, if I make the same request with my aforementioned body of draft text and notes on hand, the response is only slightly less generic:
“Academic writing often feels like a high-stakes balancing act between demonstrating scholarly rigor and maintaining an engaging narrative that will hold readers’ attention. Beyond the intellectual challenges of developing and articulating complex ideas, academics must also navigate institutional pressures, disciplinary conventions, and the constant worry that their work won’t meet the field’s ever-shifting standards of what constitutes a meaningful contribution.”
Once the model is offered a far more substantial prompt, the results improve immeasurably. Rather than simply asking Claude to “write me two sentences about the difficulties of academic writing”, I provided it with a far more detailed description of what I was asking for, including information about the context in which I was making the request: “In the opening chapter of my academic book about AI and academic writing, I’m trying to illustrate the capacity of LLMs to produce writing which matches the intentions of the academic author, as well as how this might alleviate or complicate the challenges of academic writing. Please write me two sentences which speak to this point, in a manner which best illustrates the full extent of your writing capabilities.” In response the model produced something which, while unusable in some respects because it inferred from my request to showcase its capabilities that I wanted vastly overcomplicated sentences, certainly demonstrated a capacity to write in a way appropriate to the project:
“While large language models can effortlessly generate fluent academic prose that adheres to disciplinary conventions and stylistic norms, their capacity to do so potentially transforms the relationship between academics and their writing – offering both liberation from certain technical challenges and an unsettling disruption of the intrinsic satisfactions traditionally found in wrestling complex ideas into written form. The ability of these systems to rapidly produce text which matches our intentions while maintaining scholarly standards of argumentation and evidence creates a strange new dynamic in which the arduous yet rewarding process of academic composition threatens to be displaced by a frictionless form of intellectual outsourcing, leaving us to question what role human authorship and creative struggle should play in scholarly writing.”
It presented these sentences with an explanation that it had tried to “demonstrate sophisticated academic writing while directly engaging with the intellectual and experiential tensions at the heart of your project.” This illustrates how it was already working with an understanding of the project, based on the knowledge base I had provided it with. Yet even with this significant amount of context, it would revert to fairly generic output if I provided it with a simple and nondescript prompt.
As a general principle, I’d suggest that a one-sentence prompt is unlikely to be effective for academics who are trying to make intellectually sophisticated use of a model. Even though frontier models are advancing in their capacity to infer what a user is looking for from a succinct request, they are still inferring that understanding on the basis of insufficient context. The only way to provide that context is ultimately to write, which complicates the instinct to draw upon models as a means to avoid or accelerate academic writing.
This creates a fascinating paradox at the heart of AI-assisted academic writing: to effectively use tools that promise to reduce our writing burden, we must first engage in substantial writing ourselves. Those who approach AI writing tools hoping to bypass the difficult work of articulation may find themselves producing generic, shallow content.