@thisismissem Your feedback and perspectives were so, so, so valuable. Thank you!
Government jawboning of tech platforms is a serious issue, and we need serious jurisprudence that draws appropriate limits on government conduct.
The 5th Circuit, regrettably, hasn't given us that in Missouri v. Biden. I wrote about how the problems with the case start with getting the facts wrong. https://knightcolumbia.org/blog/getting-the-facts-straight-some-observations-on-the-fifth-circuit-ruling-in-missouri-v-biden-1
@bookish @jeffjarvis No comment.
Since 2020, I've been at the epicenter of a wide-reaching campaign to intimidate social media platforms into backing away from their investments in safety and security. It worked.
The whole piece is spectacular, but this bit in particular is just a perfect illustration of the axiom that the amount of effort required to refute bullshit is orders of magnitude greater than the effort required to produce it. https://www.theguardian.com/books/2023/aug/26/naomi-klein-naomi-wolf-conspiracy-theories
I had a great conversation with @ethanz which you can read/listen to here.
IFTAS Issues New Moderator Trust and Safety Needs Assessment Survey
Announcement: https://about.iftas.org/2023/08/09/iftas-federated-trust-and-safety/
Moderator Needs Assessment: https://cryptpad.fr/form/#/2/form/view/thnEBypiNlR6qklaQNmWAkoxxeEEJdElpzM7h2ZIwXA/
#mastoadmin #MastoMods #Moderation #TrustAndSafety @moderation
Nvidia has started asking large GPU buyers who their end users are (per The Information): https://www.theinformation.com/articles/in-an-unusual-move-nvidia-wants-to-know-its-customers-customers?utm_source=ti_app&rc=gomoa3
Their focus seems to be on picking favorites in a crowded market, but this could actually be an interesting hook for "know your customer" diligence to prevent horrific abuses of AI, if they cared to do that.
Me at every work-related conference I've ever been to
This is one of those cool insights at Meta's scale that also only tells part of the story:
Are 90% of their reports visually similar because they're failing to find and report stuff that ISN'T visually similar?
It's easier to chase the threats you're familiar with than to find novel ones.
Interesting tidbit from Meta staff at #TrustCon23 just now: >90% of the CSAM Meta report to NCMEC is visually similar to content they've reported before.
The argument goes: The same bad content circulates again and again, so effective moderation requires you to get very good at similarity detection.
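Meta didn't detail their methods at TrustCon, but similarity detection in this space is typically done with perceptual hashing (e.g., Microsoft's PhotoDNA or Meta's open-sourced PDQ): near-duplicate images produce hashes that differ in only a few bits. A deliberately toy average-hash sketch of the idea, with made-up 8x8 grayscale "thumbnails" (the pixel data and function names are illustrative, not any real system's API):

```python
def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set if brighter than the mean."""
    avg = sum(pixels) / len(pixels)
    return tuple(int(p > avg) for p in pixels)

def hamming(a, b):
    """Count of differing bits; a small distance means visually similar."""
    return sum(x != y for x, y in zip(a, b))

# Flat 8x8 "thumbnails": dark left half, bright right half.
original  = [10] * 32 + [200] * 32
reencoded = [15] + [10] * 31 + [200] * 32  # slightly altered re-upload
unrelated = list(reversed(original))       # bright left, dark right

print(hamming(average_hash(original), average_hash(reencoded)))  # 0
print(hamming(average_hash(original), average_hash(unrelated)))  # 64
```

Real systems use far more robust hashes, but the matching logic is the same: compare against a database of hashes of previously reported content, and flag anything within a small distance threshold.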
Fascinating decision by the Oversight Board telling Meta to ban the Cambodian Prime Minister for inciting violence: https://oversightboard.com/news/656303619335474-oversight-board-overturns-meta-s-decision-in-cambodian-prime-minister-case/
Meta have said they'll comply.
It's worth noting that the content in question was posted in January, and it's now, *checks notes*, uh, June.
So itâs not clear that the Oversight Board process is totally working (yet?) as a way to make difficult, life-and-death decisions in the timeframe trust and safety work requires.
The Atlantic Council Task Force for a Trustworthy Future Web, led by @rightsduff, put out its comprehensive report today: https://www.atlanticcouncil.org/in-depth-research-reports/report/scaling-trust/
I had the privilege of leading work on one of the report's annexes, specifically focused on securing federated platforms: https://www.atlanticcouncil.org/in-depth-research-reports/report/scaling-trust_annex5/
Bottom line up front: We're missing some key policy, technical, and institutional pieces right now, but these are solvable challenges.
@mmitchell_ai @huggingface @giadap These are so, so good. I'm a huge fan of the clear articulation of consequences in the policy itself, too, and how readable it all is.
Today's podcast is a fun one with @yoyoel talking about the challenges of doing trust & safety on a decentralized/federated system... Potentially of interest to folks here. https://www.techdirt.com/2023/06/13/techdirt-podcast-episode-354-decentralizing-content-moderation/
Now, a note on the Fediverse: SG-CSAM is not really a thing on here (other kinds are, thanks Japan), but the reason is simple: the Fediverse isn't popular enough to make it profitable. So don't gloat about this just yet: there are massive T&S issues that the Fediverse is going to get hit with that it is extremely ill-prepared for. As far as I know, no instance even has table-stakes CSAM protections. Get on it.
Itâs unconscionable that Twitter would deploy what remains of its legal team to bully academics into paying an obscene ransom in order to keep access to essential data. https://inews.co.uk/news/twitter-researchers-delete-data-unless-pay-2364535
The best hope for stopping this is regulatory action, particularly under the DSA.
Groups like the Coalition for Independent Technology Research are helping lobby on behalf of researchers. Learn more and get involved: https://independenttechresearch.org/
This piece about industrialized catfishing services is absolutely fascinating. Exploitation at every level: of the clients, and of the "freelancers." https://arstechnica.com/culture/2023/05/this-is-catfishing-on-an-industrial-scale/
Okay folks: this morning we're launching something we think is pretty useful. Lots of people have strong opinions on how content moderation should work, but they've never done it. So we built a content moderation mobile game (browser-based): https://moderatormayhem.engine.is/
Details about it are here: https://www.techdirt.com/2023/05/11/moderator-mayhem-a-mobile-game-to-see-how-well-you-can-handle-content-moderation/
@conspirator0 Interesting to see that the fake accounts seem to largely be on mastodon[.]social. I'd have expected them to distribute across more instances.