Fenwick McKelvey

(He/him)
Tiohtiá:ke/Montreal

Author of Internet Daemons (U. Minnesota Press), Associate Professor at @concordia University. Co-director of Concordia's Applied AI Institute @AI2 & member of Milieux's Machine Agencies @machineagencies.

Fenwick McKelvey boosted:
Kevin Driscoll 📳 @kdriscoll@aoir.social
2024-01-12

💾 🤗 Call for Participation for the Critical Code Studies Working Group (CCSWG 24) #critcode #dh

(Deadline extended to January 31!)

Topics for this year include:
☑ Queering Code
☑ CCS and AI
☑ The DHQ Special Issues
☑ Teaching Code and Code Studies

docs.google.com/document/u/0/d

Fenwick McKelvey @fenwick
2024-01-12

Finding fake AI-generated listings on Amazon is surprisingly fun and easy. After @verge coverage (theverge.com/2024/1/12/2403615) I found examples (amazon.ca/s?k=cannot+fulfill+t), but Amazon is quick to ban the search terms!

Fenwick McKelvey boosted:
2024-01-10

The 1st research article in our Special Issue on AI Controversies in Big Data & Society is out! Guillaume Dandurand, @fenwick, and Jonathan Roberge show how legacy media freeze out AI's controversiality & demonstrate the force of controversy analysis as a critical approach #ShapingAI #STS @cka journals.sagepub.com/doi/10.11

Fenwick McKelvey boosted:
2024-01-09

All the scrollytelling* that's fit to publish**

theverge.com/c/23998379/google

* sorry
** seriously tho this is brilliant, great job The Verge

Fenwick McKelvey @fenwick
2024-01-04

@danjokaz00ie Mid-career pivot to nuclear power?

Fenwick McKelvey boosted:
Meredith Whittaker @Mer__edith@mastodon.world
2024-01-03

This paper is really important, presenting empirical evidence of the imbrication between AI & the surveillance biz model. This is particularly notable given that most production surveillance tech is proprietary, its existence and use hidden from the public.

arxiv.org/abs/2309.15084?ref=4

Screenshot of the paper abstract, which says, "The Surveillance AI Pipeline
Pratyusha Ria Kalluri, William Agnew, Myra Cheng, Kentrell Owens, Luca Soldaini, Abeba Birhane
A rapidly growing number of voices argue that AI research, and computer vision in particular, is powering mass surveillance. Yet the direct path from computer vision research to surveillance has remained obscured and difficult to assess. Here, we reveal the Surveillance AI pipeline by analyzing three decades of computer vision research papers and downstream patents, more than 40,000 documents. We find the large majority of annotated computer vision papers and patents self-report their technology enables extracting data about humans. Moreover, the majority of these technologies specifically enable extracting data about human bodies and body parts. We present both quantitative and rich qualitative analysis illuminating these practices of human data extraction. Studying the roots of this pipeline, we find that institutions that prolifically produce computer vision research, namely elite universities and "big tech" corporations, are subsequently cited in thousands of surveillance patents. Further, we find consistent evidence against the narrative that only these few rogue entities are contributing to surveillance. Rather, we expose the fieldwide norm that when an institution, nation, or subfield authors computer vision papers with downstream patents, the majority of these papers are used in surveillance patents."

Fenwick McKelvey @fenwick
2024-01-02

@natematias Fantastic thread and so happy to read all the good news. Also always have time for updates about cycling trips!

Fenwick McKelvey @fenwick
2023-12-22

Excited to have co-authored a major piece of the project. Our piece, Freezing out: Legacy media's shaping of AI as a cold controversy, is, I think, a reflection on the dynamics that have made AI's controversiality uncontroversial in Canada.

Read it here: journals.sagepub.com/doi/10.11

And please share.

Thanks to Guillaume Dandurand and Jonathan Roberge too for co-writing!

cc @cka @NoortjeMarres

Fenwick McKelvey boosted:
Christian Katzenbach @cka@aoir.social
2023-12-22

📣 The Conference Programme is live: "Shifting AI Controversies", Berlin, 29-30 Jan 2024. hiig.de/en/events/conference-s.

🧑‍🏫Opening Panel "Where Do We Stand? Patterns of Thinking and Talking about AI" incl @SallyWyatt @LouiseAmoore
🎆Evening Panel "Not my Existential Risk! The Politics of Controversy in an Age of AI" incl @markscott @spielkamp @NoortjeMarres
🎤Closing Panel "Where Do We Go From Here? Future Trajectories of AI Controversies and Developments" incl @abpowell @info_activism @fenwick

Fenwick McKelvey boosted:
J. Nathan Matias 🦣 @natematias@social.coop
2023-12-21

Expect big debates over the use of "open source" datasets in AI development.

On one hand, LAION published thousands of examples of illegal CSAM and got them incorporated into AI training models, which is a huge fail for them and everyone else who trained on it.

On the other hand, the Stanford Internet Observatory was able to detect and alert people to this because it was an open dataset.

404media.co/laion-datasets-rem

Fenwick McKelvey @fenwick
2023-12-15

@rwg @aram And here I am saying I quit Twitter to avoid all the toxic comments

Fenwick McKelvey boosted:

We are alarmed by this. Many people use Dropbox for highly sensitive communication.

Who are Dropbox's third party partners for AI?

We only use partners whose privacy policies and commitment to our customer's rights and safety align with our own. 

At this time, we're partnered with one third party AI partner, Open AI. Open AI is an artificial intelligence research organization that develops cutting-edge language models and advanced AI technologies. Your data is never used to train their internal models, and is deleted from OpenAI's servers within 30 days. Read OpenAI's full privacy policy.

Fenwick McKelvey boosted:
Meredith Whittaker @Mer__edith@mastodon.world
2023-12-12

@ck @dalias @signalapp There's also a reckoning to be had within the FOSS community IMO, which in the 1990s took its eye off market actors even as it remained vigilant about government surveillance/overreach. The acceptance of corporate tech (and implicitly its surveillance business model), led by folks like ESR via the break from Free software to "open source," did a lot to get us here.

Fenwick McKelvey boosted:
Christian Katzenbach @cka@aoir.social
2023-11-29

**From disinformation to hate speech: Platform Governance Archive (PGA) reveals how social platforms regulate our communication**

Have you seen this? We have relaunched the PGA, now a full-fledged resource for the research and expert community (👀 @platgov @AoIR)

🌎 We now cover 18 platforms
🏃🏻‍♂️ Continuous update of platform policy changes
💿 Datasets available for bulk download
📄 Two Data papers

This is a collaboration of @pgmt, @zemki , @hiig_berlin, @opentermsarchive@lescommuns.org.
hiig.de/en/platform-governance

Screenshot of the PGA Website

Fenwick McKelvey @fenwick
2023-11-26

@jfmezei But only the new episode. I want some back episodes!

Fenwick McKelvey @fenwick
2023-11-23

@jfmezei Where can I check if I am a subscriber?

Fenwick McKelvey boosted:
2023-11-23

>> Firefox supports a new “Copy Link Without Site Tracking” feature in the context menu which ensures that copied links no longer contain tracking information.

mozilla.org/en-US/firefox/120.

Fenwick McKelvey @fenwick
2023-11-23

Parsing the fallout from the AI board split? Try our paper! We argue that AI involves competing but reinforcing governmental projects, one existential, the other mundane. The latter seems to be winning; neither seems great.

Check it out:

Recursive power: AI governmentality and technofutures
by
Fenwick McKelvey and Jonathan Roberge
in
Handbook of Critical Studies of Artificial Intelligence

elgaronline.com/edcollchap/boo

Pre-print here: fenwickmckelvey.com/wp-content

Fenwick McKelvey @fenwick
2023-11-23

I'm giving a talk today at 1pm called A Copy of What? ChatGPT and the Commons as part of the Humber Presidential Lecture Series. You can livestream the talk here:

liberalarts.humber.ca/current-
