Reading the programs and talk titles/synopses of some AI design uni symposiums. Many sound a little starry-eyed, and I wish at each event at least one talk would also involve a more holistic, critical Realpolitik examination of AI use and address glaring contradictions (see e.g. the recent talk by Goldsmiths' @danmcquillan), even if it's hard for some to hear...
Designing for more inclusiveness & empathy, incl. for "non-human" participants, sounds great & long overdue as a design (and education) focus, but does it really require (or should it even involve) any AI at all? Does it make any sense to use the most centralized, monopolistic, resource-intensive, extractive/abusive and environmentally/socially hazardous form of computing to help contemporary Design Practice become more inclusive/empathetic?
Isn't it a total contradiction to even talk about "post-extraction" aspects of LLM-based AI when the entire conceptual foundation and actual implementation rest on the (often illegal) extraction of all forms of knowledge and physical resources to ensure its continued growth/scale/relevance? Where do the data, minerals, energy and water for the intended capacity build-out come from? At what cost?
Does it make any sense to talk about speculative AI design utopias while basing all the routes/solutions to get there on funding from (and use of) orgs that are among the main culprits in the current dire state of global affairs, orgs that continuously abuse their position and push for further erosion of existing legal frameworks, for more surveillance (to increase data intake and build out monopolies), and for the dissolution of political/environmental regulations/protections to expand their extractive practices?
Does any of this really empower humans (rather than just the individual people/groups driving AI proliferation), and does it objectively improve the situation for any other _living_ organisms on this planet? I'm not talking about AGI threats here (the current set of factors is more than sufficient): isn't the increased use of soon-to-be hyper-scale AI one of the biggest risks? Is there a talk at any of these events offering a cost/benefit analysis and an overview of which parties/groups/demographics stand to bear the costs and which stand to benefit?
Will AI help solve inequality, or is it (becoming) part of the cause? How will governments respond to massive job losses, the resulting loss of consumers/markets, and a higher likelihood of social unrest, coupled with rising energy prices, inflation, and the technological possibilities (and active offers by suppliers) for increased surveillance/enforcement? Is there any institutional research on useful design approaches for helping people in any of these AI-induced situations?
How will AI preserve people's autonomy in personal computing if more and more infrastructure becomes centralized/surveilled/censored and people without the latest hardware become excluded from state-provided services? How can we trust any AI-proposed design solutions/approaches, given their more-than-shaky epistemological grounds, their lack of rigor/provenance, their possible embedding of invisibly hostile/toxic ideas/philosophies, and their generally stochastic approach to generating non-reproducible "answers"? What additional design processes are required to make any of this actually usable in practice, especially in light of legal requirements/certifications in many fields?
If the framing of "more-than-human design" is going to be about "AI empathy", then we're entering very dangerous territory, even if it all falls under Speculative Design.
In 2025, it's about time to get real. Every event that continues to treat SD in a vacuum, entirely disconnected from our current timeline, is a missed opportunity: much like it was practiced a decade or more ago, and much like architects who still keep dreaming up vapid design utopias, kindly sponsored by some of the most human-rights-abusing governments on the planet...
#AI #LLM #Design #Education