Arvind Narayanan

I'm a computer science professor at Princeton. I write about AI hype & harms, tech platforms, algorithmic bias, and the surveillance economy.

I've been studying decentralized social media since the late 2000s, so I'm excited to use and write about Mastodon at the same time.

Check out this symposium on algorithmic amplification that I'm co-organizing: knightcolumbia.org/events/opti

Substack: AI Snake Oil
Book: Fairness and machine learning
Arvind Narayanan boosted:
2024-11-19

This piece about the UK liver transplant matching algorithm, by @randomwalker and @sayashk, is well worth a read. Excellent example that algorithms that make real-world decisions aren't inherently problematic, but they can certainly be horrifyingly bad if not designed (and tested, and transparently published, and audited) carefully.
aisnakeoil.com/p/does-the-uks-

Arvind Narayanan boosted:
2024-09-26

interesting new paper on the feasibility of using LLM agents for computational reproducibility checks by Narayanan and colleagues @randomwalker

arxiv.org/abs/2409.11363

Arvind Narayananrandomwalker
2024-05-14

Kirkus Reviews, which provides early book reviews to the publishing industry, has given AI Snake Oil a very positive "starred" review, which we're told is rare and kind of a big deal. Honored and grateful! kirkusreviews.com/book-reviews
Preorder:
amazon.com/Snake-Oil-Artificia
bookshop.org/p/books/ai-snake-
More preorder links at the bottom of this post
aisnakeoil.com/p/ai-snake-oil-
Coauthored by @sayashk, published by @princetonupress.

Arvind Narayanan boosted:
Sayash Kapoorsayashk
2024-04-11

I'm ecstatic to share that preorders are now open for the AI Snake Oil book! The book will be released on September 24, 2024.

@randomwalker and I have been working on this for the past two years, and we can't wait to share it with the world.

Preorder: princeton.press/gpl5al2h

AI snake oil book cover
Arvind Narayanan boosted:
2024-01-25

‘Will AI transform law? The hype is not supported by current evidence’, write @randomwalker & @sayashk aisnakeoil.com/p/will-ai-trans They also published a scholarly paper on the topic with Peter Henderson: cs.princeton.edu/~sayashk/pape #law #ai #tech #chatgpt

Arvind Narayanan boosted:
Knight First Amendment Inst.knightcolumbia@mastodon.online
2023-10-27

Most online speech is hosted on algorithmic platforms designed to optimize for engagement. But algorithms are not neutral. Read other essays in our "Algorithmic Amplification & Society" project series, in collaboration with @randomwalker. Learn more here:
knightcolumbia.org/research/al

Arvind Narayanan boosted:
Knight First Amendment Inst.knightcolumbia@mastodon.online
2023-08-24

Excited to share that we’ve started publishing the essays from “Optimizing for What? Algorithmic Amplification and Society,” our spring symposium organized with @randomwalker. Here’s a brief intro. by
@kgb. Links to the first two essays follow.
knightcolumbia.org/blog/explor

Arvind Narayananrandomwalker
2023-08-21

The "ChatGPT has a liberal bias" paper has at least 4 *independently* fatal flaws:
– Tested an older model, not ChatGPT.
– Used a trick prompt to bypass the fact that it actually refuses to opine on political q's.
– Order effect: flipping q's in the prompt changes bias from Democratic to Republican.
– The prompt is very long and seems to make the model simply forget what it's supposed to do.
By @sayashk and me, summarizing our analysis and a separate one by Colin Fraser. aisnakeoil.com/p/does-chatgpt-

Arvind Narayananrandomwalker
2023-08-07

@hjonker Yes, the project ended a few years ago. The website downtime is unintentional though; I plan to redirect it to an archive.org version. Sorry about that.

Arvind Narayananrandomwalker
2023-07-10

@tuliotec With modern mobile OSes, surreptitious eye tracking is not technically possible. It's also a legal risk that IMO outweighs the benefits.

But face analysis has been used for recommendations twitter.com/MarcFaddoul/status

Other biometrics like gait and activity recognition are also used, but not for recommendations AFAIK.

Use of location data is ubiquitous, of course.

Hope that helps!

Arvind Narayanan boosted:
2023-07-06

The amount of misinformation on Mastodon around Threads and the EU is a great demonstration of how motivated reasoning is not a problem only for commercial social media platforms.

Arvind Narayanan boosted:
Mike Masnick ✅mmasnick
2023-04-29

It's been six months since Elon took over Twitter. I have some thoughts on the "Twitter diaspora" and the current decentralized alternatives: techdirt.com/2023/04/28/six-mo

Arvind Narayanan boosted:
2023-04-26

Ambulances can’t reach patients before they die. Fire trucks can’t get through and house fires blaze. Pedestrians trying to cut through trains have been disfigured, dismembered and killed; a Pennsylvania teenager lost her leg hopping between rail cars as she rushed home to get ready for prom.

In Hammond, the hulking trains of Norfolk Southern regularly force parents, kids and caretakers into an exhausting gamble: How much should they risk to get to school?

propublica.org/article/trains-

Arvind Narayananrandomwalker
2023-04-26

This is the latest in the AI Snake Oil book blog by @sayashk and me. Writing this blog alongside the book has been really fun. I'll probably do something like this for all future books! Thank you to everyone who subscribed. aisnakeoil.substack.com/

Arvind Narayananrandomwalker
2023-04-26

@RWerpachowski@mastodon.green It's a great post but we're making a different point and I don't think there's any contradiction. We're not contrasting RL with supervised, but rather any type of post pre-training intervention (sorry, is there a better term for that?) with something that can maybe prevent implicit biases in the first place (e.g. a data intervention).

Arvind Narayananrandomwalker
2023-04-26

OpenAI mitigates ChatGPT’s biases using fine tuning and reinforcement learning. These methods affect only the model’s output, not its implicit biases (the stereotyped correlations that it's learned). Since implicit biases can manifest in countless ways, OpenAI is left playing whack-a-mole, reacting to examples posted on social media.

Arvind Narayananrandomwalker
2023-04-26

People have been posting glaring examples of ChatGPT’s gender bias, like arguing that attorneys can't be pregnant. So @sayashk and I tested ChatGPT on WinoBias, a standard gender bias benchmark. Both GPT-3.5 and GPT-4 are about 3 times as likely to answer incorrectly if the correct answer defies gender stereotypes — despite the benchmark dataset likely being included in the training data. aisnakeoil.substack.com/p/quan

Arvind Narayananrandomwalker
2023-04-26

We're at over 1,000 registrations for the @knightcolumbia algorithmic amplification symposium this Friday & Saturday. We're lucky to have an all-star cast of speakers. In-person registration is closed/waitlisted, but you can still register to attend online: knightcolumbia.org/events/opti

Arvind Narayananrandomwalker
2023-04-14

At @knightcolumbia I'm co-organizing what I think is the first symposium on the topic of algorithmic amplification on social media — April 28/29 (NYC and online). I'm told we have over 600 registrations already. We'll be moving to a waitlist for in-person participation soon. Register to hear from ~30 leading thinkers on the topic: knightcolumbia.org/events/opti

Client Info

Server: https://mastodon.social
Version: 2025.07
Repository: https://github.com/cyevgeniy/lmst