Why Does AI Art Look Like That?
I've had a bunch of conversations with people who didn't seem to know why AI art looks the way it does, so I wrote about it: https://samgoree.github.io/2025/07/11/Why_does_ai_art_look_like_that.html
The thing that keeps coming up as I talk to people about AI in their workplaces is how *dehumanizing* it is. It’s dehumanizing to ask a machine to do something, and then have to correct it over and over; it’s dehumanizing to be told to read something that involved little to no human effort to make.
As someone who works in higher ed and has also taught middle school and high school-aged kids, my opinion is not that LLM use is exploding among students because they're lazy or stupid or anything else. It's because our educational system has prioritized a very transactional "do this bullshit, and you get the credentials you need to have a life" approach for, well, maybe forever, really. And no one should be surprised that adversarial approaches by teachers and administrators are being met with an adversarial approach by students.
The course design didn't turn out all that much different from the standard Russell and Norvig AI course, but the historical framing gave me a good answer to the question "why are we learning this?"
Special thanks to Iris van Rooij, whose article on reclaiming AI for cogsci had a table that gave me the idea of defining AI as a "history of practices reflecting different ideas of AI." https://link.springer.com/content/pdf/10.1007/s42113-024-00217-5.pdf
How do you organize an AI course in 2025? My answer was to center the history of people and the problems they were trying to solve. Post on my blog here: https://samgoree.github.io/2025/05/22/historically_grounded_ai.html
I did a guest lecture in @palvaro's distributed systems class yesterday, and someone asked a question about "data lakes", and let me tell you, I took an unusual amount of pleasure in saying "I don't have the slightest idea what a 'data lake' is."
@eaganj @mcnuttandrew by "technical" do you mean "experienced programmer," "quantitative researcher," or "highly precise/practical designer"?
AI isn’t replacing student writing – but it is reshaping it https://buff.ly/jgb6PS8
Turns out that scientific consensus and public policy matter a lot.
Most #LLMs over-generalized scientific results beyond the original articles...
...even when explicitly prompted for accuracy!
The #AI was 5x worse than humans, on average!
Newer models were the worst. 🤦‍♂️
🔓 Accepted in #RoyalSociety Open #Science: https://doi.org/10.48550/arXiv.2504.00025
AI doesn’t need to become self-aware to be dangerous. It just needs to be plugged into HR, healthcare, and credit scoring systems with no appeal process.
@mcc this was lowkey a version of my dissertation at one point
I'm prepping a class for next week about uses of large language models. I've already got materials related to text classification, machine translation and chatbots. I'm particularly interested in uses which treat them as *language models* not omniscient oracles.
What's your favorite use for LLMs?
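For concreteness, here's the sort of thing I mean by treating an LLM as a *language model*: a scorer of how probable a piece of text is, which you can build reranking or classification on top of, no chat interface required. A minimal sketch, assuming the Hugging Face transformers library and a small causal LM (distilgpt2 is just a stand-in here):

```python
# Minimal sketch: use an LLM as a text-probability scorer rather than an oracle.
# Assumes the Hugging Face transformers library; distilgpt2 is illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2")
model.eval()

def avg_log_prob(text: str) -> float:
    """Average per-token log-probability the model assigns to `text`."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # With labels == input_ids, the returned loss is the mean
        # negative log-likelihood over the (shifted) sequence.
        outputs = model(**inputs, labels=inputs["input_ids"])
    return -outputs.loss.item()

# Example: which sentence does the model consider more plausible?
print(avg_log_prob("The cat sat on the mat."))
print(avg_log_prob("The mat sat on the cat."))
```

Comparing scores like this is the older language-model workflow (reranking candidates, plausibility checks, classification by scoring filled-in templates), as opposed to asking the model questions and trusting whatever comes back.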
How Flash games shaped the video game industry (2020)
Link: https://www.flashgamehistory.com/
Discussion: https://news.ycombinator.com/item?id=43225560
@jbigham oh man, these days it seems like most of my students come in with either the prior that LLMs are magic oracles or the prior that all AI is inherently immoral. Do you have any tips for dispelling these kinds of preconceptions?
advice for students --
as much as it is important not to uncritically accept AI hype, claims of superhuman performance, AGI, etc.
it is also important not to uncritically accept that LLMs are useless b/c they are sometimes wrong, that all AI is terrible for the environment, etc.
stay rigorous, folks!
@portugeek @Daojoan Something from @pluralistic I'm always quoting:
"Quantitative disciplines – physics, math, and (especially) computer science – make a pretense of objectivity. They make very precise measurements of everything that can be measured precisely, assign deceptively precise measurements to things that can’t be measured precisely, and jettison the rest on the grounds that you can’t do mathematical operations on it."
“If you think technology will solve your problems, you don’t understand technology and you don’t understand your problems”
~ attrib. Laurie Anderson
@lea my go-to is always a nice chocolate bar. Students go nuts for free food and it's a pretty inconsequential prize.
Also, it's pretty hard to argue after this that speedrunning is anything other than the first truly online performing art discipline. There was an extended conversation earlier in the marathon about SpikeVegeta's "useless" theater degree. If anything, GDQ shows how useful a theater degree can be.