The second digital divide which LLMs are opening up
This piece from Anthropic co-founder Jack Clark captures my mounting concern about the second digital divide which LLMs are opening up, i.e. the skills and capacities to use these systems effectively, rather than the simple fact of access to them:
Now, getting AI systems to do useful stuff for you is as simple as asking for it – and you don’t even need to be that precise. Often, I find myself prompting Claude like I’d prompt an incredibly high-context, patient, impossible-to-offend colleague – in other words, I’m blunt, short, and speak in a lot of shorthand. And Claude responds to my asks basically perfectly.
You might think this is a good thing. Certainly, it’s very useful. But beneath all of this I have a sense of lurking horror – AI systems have got so useful that the thing that will set humans apart from one another is not specific hard-won skills for utilizing AI systems, but rather just having a high level of curiosity and agency.
In other words, in the era where these AI systems are true ‘everything machines’, people will out-compete one another by being increasingly bold and agentic (pun intended!) in how they use these systems, rather than in developing specific technical skills to interface with the systems.
We should all intuitively understand that none of this will be fair. Curiosity and the mindset of being curious and trying a lot of stuff is neither evenly distributed or generally nurtured. Therefore, I’m coming around to the idea that one of the greatest risks lying ahead of us will be the social disruptions that arrive when the new winners of the AI revolution are made – and the winners will be those people who have exercised a whole bunch of curiosity with the AI systems available to them.
https://importai.substack.com/p/import-ai-397-deepseek-means-ai-proliferation
What he fails to grasp here is the role of cultural capital alongside this “high level of curiosity and agency”, as well as the working conditions which make its exercise possible. I’ve spent the last 20 years as a blogger learning to write in a quasi-automatic way, which means I can pour out thousands of spontaneous words a day without even feeling like I’m making an effort. It’s not only the quantity of what I write, but the quality of it as well – not in the sense that it’s good (most of it is stream of consciousness) but in the manner in which I express inchoate thoughts through a highly technical vocabulary that crosses multiple domains. Through actual training (philosophy, sociology), professional experience (education), reading (media/comms) and hubris (STS, political economy) I can cosplay across disciplines so naturally that I rarely notice myself doing it, at least on the blog. The combination of these two traits, the capacity to write lots near effortlessly and to mix and match specialised vocabularies while doing so, gives me a tremendous advantage in prompting contemporary models. This complicates Clark’s judgement here:
I talk to Claude every day. Increasingly, I find my ability to benefit from Claude is mostly limited by my own imagination rather than specific technical skills (Claude will write that code, if asked), familiarity with things that touch on what I need to do (Claude will explain those to me). The only hard limit is me – I need to ‘want’ something and be willing to be curious in seeing how much the AI can help me in doing that.
Today, everyone on the planet with an internet connection can freely converse with an incredibly knowledgable, patient teacher who will help them in anything they can articulate and – where the ask is digital – will even produce the code to help them do even more complicated things. Ensuring we increase the number of people on the planet who are able to take advantage of this bounty feels like a supremely important thing. If we get this right, everyone will be able to achieve more and exercise more of their own agency over their own intellectual world. If we get it wrong, we’re going to be dealing with inequality on steroids – a small caste of people will be getting a vast amount done, aided by ghostly superintelligences that work on their behalf, while a larger set of people watch the success of others and ask ‘why not me?’.
https://importai.substack.com/p/import-ai-397-deepseek-means-ai-proliferation
The point Casey Newton makes about DeepSeek exposing its chain of thought as a design decision is relevant here as well. To the extent that the model explains its ‘reasoning’ (what it thinks you want, what it will do in response) in a way intended to help the user maximise the effectiveness of their use, reflexivity in the user will be rewarded with a greater capacity to draw functionality out of the model.
#anthropic #claude #culturalCapital #JackClark #prompting