Tarleton Gillespie

I'm an independent-minded academic, critical of the tech industry, working for Microsoft. Perplexing. My latest book is Custodians of the Internet (Yale, 2018)

Tarleton Gillespie @tarleton
2023-03-24

Don't look now!! It's the next wave of SMC interns at MSR, studying all things sociotechnical! socialmediacollective.org/2023

Tarleton Gillespie boosted:
NancyGPT @nancybaym
2023-03-24

Our postdoc candidates were truly extraordinary. We are grateful to all who applied. I can only echo @ZoeGlatt and @chchliu that the market this year is awful and if you haven’t landed your dream spot, it is not your fault.

Tarleton Gillespie boosted:
Chuncheng Liu @chchliu
2023-03-24

I am delighted to share that I will be joining Microsoft Research as a sociotechnical systems postdoc researcher. Beginning next fall, I will be starting as an Assistant Professor of Communication Studies and Sociology at Northeastern University. Excited about my new journey in Boston! I am grateful to have had the support of my committee, friends, and even strangers along the way. I will make sure to pass on this kindness.

Tarleton Gillespie @tarleton
2023-03-20

If this is something you'd like to read, please do. "The Fact of Content Moderation; Or, Let’s Not Solve the Platforms’ Problems for Them" Media and Communication, forthcoming. cogitatiopress.com/mediaandcom

Tarleton Gillespie @tarleton
2023-03-08

@natematias @rabble sorry to hear about that nonsense! If you didn’t click that buy button yet, don’t do it! here’s the free PDF: bit.ly/CustodiansOfTheInternet . My sense is that the small academic publishers simply can’t afford to deal with global distribution, in the face of the Amazonian giants.

Tarleton Gillespie boosted:
NancyGPT @nancybaym
2023-02-24

We are hiring a predoc to work with @tarleton @maryLgray @zephoria and me in Cambridge MA, starting in July.

EDIT: SEARCH IS CLOSED

socialmediacollective.org/2023

Tarleton Gillespie @tarleton
2023-02-22

@cyberlyra I guess I want a policy that (a) allows that a distinction btw hosting and boosting is actually hard to parse, (b) can distinguish between moderating imperfectly vs profoundly looking the other way while still benefitting [we could lean on the "good faith" part of 230 more], and (c) imagines some obligations for platforms that are more about aggregate harm+value - i.e. what do we do when the platform is doing what it should, and it still has deleterious effects?

Tarleton Gillespie @tarleton
2023-02-22

@joshuafoust Totally. I didn't notice whether filtering for terrorist content came up in the Gonzalez discussion -- my understanding was that the complaint could not hinge on removal, because the 230 case law is pretty settled, so they had to focus on recommendation, which to me is implicitly a case where filtering was not used, or was not successful. But I didn't listen to every bit of the back and forth yesterday.

Tarleton Gillespie @tarleton
2023-02-22

@joshuafoust Certainly already exists, yes. I guess I'd want to distinguish btw detection algorithms and recommendation algorithms. The Gonzalez case is objecting to YouTube taking it upon itself to suggest videos, which may be ISIS videos, which may be harmful. Being more like a publisher than ever. So it's whether recommendation puts YouTube beyond what 230 indemnifies. Detection algos seem different, part of the mechanisms platforms can use as part of good faith content moderation.

Tarleton Gillespie @tarleton
2023-02-22

@danfaltesek I like how you are thinking of them as a gradient, and I agree that we can decide where to draw a line, past which there should be some responsibility for the provider when content is harmful. I guess I don't want it to be in the Court's read of 230, because I'd prefer a different conversation altogether, about aggregate harms to the public rather than individual harms to a user. Feels to me like that can't be framed as a 230 update.

Tarleton Gillespie @tarleton
2023-02-22

Platform providers have asked us to accept a little error, as the cost of getting what we want, while they capitalize on our data and our attention to ads. This may not be a bargain we should have accepted, and it’s one we can reject if we want. Or, we could use it to justify new obligations for these platforms: new expectations, public standards, and incentives for innovations in recommendation and moderation that improve the quality of public discourse. [22/22]

Tarleton Gillespie @tarleton
2023-02-22

If content moderation is imperfect, then what gets recommended will also occasionally include the reprehensible, the harmful, or the illegal. Even if the standards were applied consistently, we do not agree on them; and people are ingenious when it comes to testing and eluding these governance mechanisms. [21/22]

Tarleton Gillespie @tarleton
2023-02-22

The part that’s new, perhaps, is that we also have to figure out our societal tolerance for error. Content moderation, even when performed in good faith, can never be perfectly executed. At this scale, even sophisticated detection software removes some content it shouldn’t, and overlooks some that it should remove; we ask too many people to do too difficult a job with too little support, and as such the standards will invariably be applied inconsistently. [20/22]

Tarleton Gillespie @tarleton
2023-02-22

This is something we never solved with traditional media, but our efforts involved setting specific obligations about education, about children, about balance, about incentives towards quality programming, etc. This may sound antiquated, but it is a problem we have always faced, and one we face again with social media. [19/22]

Tarleton Gillespie @tarleton
2023-02-22

Neither of these outcomes is particularly helpful if what we’re actually trying to address is the aggregate harms of information that we’re not willing to simply prohibit. Instead of hoping to do so by extending or curtailing 230, we need to look back to a well-worn, century-long discussion: how to get a media ecosystem, largely or entirely driven by market imperatives, to also serve the public interest? [18/22]

Tarleton Gillespie @tarleton
2023-02-22

If Congress makes it so that Section 230 no longer protects recommendation, platforms are very likely to remove way more content altogether, as well as more drastically reduce what they’re willing to recommend - which they do already. Not recommending content that is otherwise there to be seen is exactly what conservatives rail against, mistakenly called “shadowbanning” - but it is the only logical response from platforms if the Court finds for the plaintiffs in this case. [17/22]

Tarleton Gillespie @tarleton
2023-02-22

But moderation and recommendation aren’t separate, they work in tandem: platforms avoid recommending content mostly by removing what they deem reprehensible or dangerous. The ISIS videos that the plaintiffs in Gonzalez v. Google object to would not have been recommended if they’d been more diligently removed. So, the more we demand that content remain online because of its speech rights, the more often it will be recommended, even by an algorithm designed in good faith. [16/22]

Tarleton Gillespie @tarleton
2023-02-22

These questions are, as we know, politically fraught. Oddly, the politics around recommendation and the politics around content moderation are sometimes at cross purposes. The U.S. political right has pushed back against platform efforts to expand content moderation, suggesting that they are politically biased. Simultaneously, they also want platforms to be more responsible for what they recommend (or at least need to say that as justification to roll back Section 230 protections). [15/22]

Tarleton Gillespie @tarleton
2023-02-22

It’s been long enough to see that the market has not driven these platforms towards a more rewarding or verdant mix of information -- quite the opposite. So it’s reasonable to suggest that these algorithmic criteria and their effects should be more open to public and regulatory scrutiny, so that we can help ensure that the public gets what it needs while also allowing platforms to profit from giving users what they predict users want. [14/22]

Tarleton Gillespie @tarleton
2023-02-22

Today, the question is what to do with massive platforms whose designs, including their recommendation algorithms, do have cumulative effects. The criteria built into these algorithms, even if they aren’t nefarious, are consequential. The harms - not so much the harms of one person seeing one video, but the aggregate harms of some kinds of content getting more public visibility and others less - have implications for the democratic process and an informed citizenry. [13/22]
