#AlgorithmicDiscrimination

2025-06-05

As housing gets more competitive, landlords and governments are outsourcing critical decisions to automated systems. But these tools often replicate old biases, just faster and at scale.

In the US, SafeRent’s AI tool for tenant screening gave consistently lower scores to Black and Hispanic renters, and to people using housing vouchers - a legal form of income assistance. This is what we would call #AlgorithmicDiscrimination.

racismandtechnology.center/202

Report here: algorithmwatch.org/en/report-a

Miguel Afonso Caetano (@remixtures@tldr.nettime.org)
2025-02-28

"After the entry into force of the Artificial Intelligence (AI) Act in August 2024, an open question is its interplay with the General Data Protection Regulation (GDPR). The AI Act aims to promote human-centric, trustworthy and sustainable AI, while respecting individuals' fundamental rights and freedoms, including their right to the protection of personal data. One of the AI Act's main objectives is to mitigate discrimination and bias in the development, deployment and use of 'high-risk AI systems'. To achieve this, the act allows 'special categories of personal data' to be processed, based on a set of conditions (e.g. privacy-preserving measures) designed to identify and to avoid discrimination that might occur when using such new technology. The GDPR, however, seems more restrictive in that respect. The legal uncertainty this creates might need to be addressed through legislative reform or further guidance."

europarl.europa.eu/thinktank/e

#EU #AI #AIAct #GDPR #DataProtection #AlgorithmicDiscrimination #AlgorithmicBias #Privacy

Miguel Afonso Caetano (@remixtures@tldr.nettime.org)
2024-11-30

"In October 2021, we sent a freedom-of-information request to the Social Insurance Agency attempting to find out more. It immediately rejected our request. Over the next three years, we exchanged hundreds of emails and sent dozens of freedom-of-information requests, nearly all of which were rejected. We went to court, twice, and spoke to half a dozen public authorities.

Lighthouse Reports and Svenska Dagbladet obtained an unpublished dataset containing thousands of applicants to Sweden’s temporary child support scheme, which supports parents taking care of sick children. Each of them had been flagged as suspicious by a predictive algorithm deployed by the Social Insurance Agency. Analysis of the dataset revealed that the agency’s fraud prediction algorithm discriminated against women, migrants, low-income earners and people without a university education.

Months of reporting — including conversations with confidential sources — demonstrate how the agency has deployed these systems without scrutiny despite objections from regulatory authorities and even its own data protection officer."

lighthousereports.com/investig

#Sweden #SocialInsurance #ChildSupport #Algorithms #AlgorithmicDiscrimination #AlgorithmicBias
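The core of such an analysis can be sketched in a few lines. A minimal flag-rate disparity check, assuming a hypothetical table of applicants with `group` and `flagged` columns (file and column names invented; the investigation's real methodology is richer):

```python
# Minimal sketch of a flag-rate disparity check. File and column names
# are hypothetical; the investigation's real methodology is richer.
import pandas as pd

df = pd.read_csv("flagged_applicants.csv")   # columns: group, flagged (0/1)

# Share of each demographic group that the algorithm flagged as suspicious.
flag_rates = df.groupby("group")["flagged"].mean()

# Ratio against the least-flagged group: values well above 1 mean the
# algorithm singles that group out disproportionately.
disparity = (flag_rates / flag_rates.min()).sort_values(ascending=False)
print(disparity)
```

A ratio well above 1 for women, migrants or low-income earners would point to exactly the kind of disparity the investigation reports.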

2024-11-12

this "less personalized ads" feature from Meta is asinine -- they're still allowing "personalization" based on location, age and gender, even though 2 of the big practical reasons to turn "personalized" advertising _off_ are to avoid #elderFraud and #algorithmicDiscrimination

techcrunch.com/2024/11/12/euro

Miguel Afonso Caetano (@remixtures@tldr.nettime.org)
2024-10-17

#FRANCE #CNAF #Algorithms #RiskScoring #AlgorithmicDiscrimination: "Fifteen French NGOs are suing the public body that distributes allowances for families, youth, housing, and inclusion (CNAF) at the French state council over the use of a risk-scoring algorithm, which impacts almost half of France's population, according to a Wednesday (16 October) press release.

This legal action follows the Court of Justice of the EU (CJEU) ruling that decision-making using scoring algorithms that use personal data is unlawful under the EU's data privacy regulation (GDPR).

The NGOs are calling on the state council to refer the case to the CJEU for a preliminary ruling. The case could take two to five years, depending on how the reference is handled.

"This algorithm mathematically reflects the discriminations already present in our society. It is neither neutral nor objective," said Marion Ogier, a lawyer at the Human Rights League, at a press conference in Paris on Wednesday.

Since 2010, the CNAF has been using an algorithm to select recipients for a review of their benefits. These credit checks are focused on cases deemed as 'higher risk' based on the recipient's profile and situation.

However, a number of local investigations published in December 2023 criticised these checks for not being truly random. Seventy per cent of 128,000 credit checks conducted in 2021 came from scoring algorithms, revealed CNAF in a 2022 report.

"The CNAF algorithm is just one part of the system. The public pension schemes, health insurance, and employment service all use similar algorithms,” Ogier added."

euractiv.com/section/tech/news
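A toy sketch of why score-driven selection is "not truly random": if risk scores correlate even modestly with a vulnerability marker, the audited population over-represents that group. All numbers below are synthetic assumptions, not CNAF's:

```python
# Toy sketch: score-targeted audits vs. random ones. All numbers synthetic.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
vulnerable = rng.random(n) < 0.3                 # invented marker: 30% of recipients
score = rng.normal(0, 1, n) + 0.8 * vulnerable   # scores skew upward for that group

audited = np.argsort(score)[-n // 10:]           # audit the top 10% by score

print(f"vulnerable share of all recipients:  {vulnerable.mean():.0%}")
print(f"vulnerable share of audited cases:   {vulnerable[audited].mean():.0%}")
```

A random draw would keep the audited share near 30%; ranking by score roughly doubles it.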

rexi (@rexi)
2024-07-23

whitehouse.gov/briefing-room/s
…history will show that this was the moment when we had the opportunity to lay the groundwork for the future of #AI…

A future where AI is used to advance #HumanRights and human dignity, where privacy is protected…where we make our democracies stronger and our world safer…
…to help make sure that the benefits of AI are shared equitably and to address predictable threats, including #AlgorithmicDiscrimination, #privacy violations, and…

Miguel Afonso Caetano (@remixtures@tldr.nettime.org)
2024-02-19

#AI #Recruiting #AlgorithmicBias #AlgorithmicDiscrimination: "Body-language analysis. Vocal assessments. Gamified tests. CV scanners. These are some of the tools companies use to screen candidates with artificial intelligence recruiting software. Job applicants face these machine prompts – and AI decides whether they are a good match or fall short.

Businesses are increasingly relying on them. A late-2023 IBM survey of more than 8,500 global IT professionals showed 42% of companies were using AI screening "to improve recruiting and human resources". Another 40% of respondents were considering integrating the technology.

Many leaders across the corporate world hoped AI recruiting tech would end biases in the hiring process. Yet in some cases, the opposite is happening. Some experts say these tools are inaccurately screening some of the most qualified job applicants – and concerns are growing the software may be excising the best candidates.

"We haven't seen a whole lot of evidence that there's no bias here… or that the tool picks out the most qualified candidates," says Hilke Schellmann, US-based author of the Algorithm: How AI Can Hijack Your Career and Steal Your Future, and an assistant professor of journalism at New York University. She believes the biggest risk such software poses to jobs is not machines taking workers' positions, as is often feared – but rather preventing them from getting a role at all."

bbc.com/worklife/article/20240
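One common way auditors quantify this is the US EEOC's "four-fifths rule": a group whose selection rate falls below 80% of the highest group's rate shows adverse impact. A minimal sketch with invented counts:

```python
# Hedged sketch of an adverse-impact check on an AI screener's outcomes,
# following the EEOC four-fifths rule. All counts are invented.
screened = {"group_a": 300, "group_b": 200}  # candidates the tool assessed
advanced = {"group_a": 120, "group_b": 45}   # candidates it passed through

rates = {g: advanced[g] / screened[g] for g in screened}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    verdict = "OK" if ratio >= 0.8 else "adverse impact (below four-fifths)"
    print(f"{group}: selection rate {rate:.0%}, ratio vs best {ratio:.2f} -> {verdict}")
```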

Miguel Afonso Caetano (@remixtures@tldr.nettime.org)
2024-02-14

#USA #AI #Algorithms #AlgorithmicBias #AlgorithmicDiscrimination: "“AI is just a model that is trained on historical data,” said Naeem Siddiqi, senior advisor at SAS, a global AI and data company, where he advises banks on credit risk.

That’s fueled by the United States’ long history of discriminatory practices in banking towards communities of colour.

“If you take biased data, all AI or any model will do is essentially repeat what you fed it,” Siddiqi said.

“The system is designed to make as many decisions as possible with as little bias and human judgment as possible to make it an objective decision. This is the irony of the situation… of course, there are some that fall through the cracks,” Siddiqi added.

It’s not just on the basis of race. Companies like Apple and Goldman Sachs have even been accused of systemically granting lower credit limits to women than to men.

These concerns are generational as well. Siddiqi says such denials also overwhelmingly limit social mobility amongst younger generations, like younger millennials (those born between 1981 and 1996) and Gen Z (those born between 1997 and 2012), across all demographic groups.

That’s because the standard markers of strong financial health – including credit cards, homes and cars – used when assessing someone’s financial responsibility are becoming less and less relevant. Only about half of Gen Z have credit cards. That’s a decline from all generations prior."

aljazeera.com/economy/2024/2/1
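Siddiqi's point that a model "will repeat what you fed it" is easy to reproduce on synthetic data: train on historical approvals that penalize one group and the model mirrors the gap, even when the group label itself is withheld, because a correlated proxy leaks it. A toy sketch (everything invented, scikit-learn assumed available):

```python
# Toy sketch: biased historical approvals produce a biased model.
# All data is synthetic; 'zip_code' is a made-up proxy for group membership.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)             # protected attribute (withheld from the model)
zip_code = group + rng.normal(0, 0.3, n)  # proxy feature correlated with group
income = rng.normal(50, 10, n)

# Historical decisions: identical incomes were treated differently by group.
approved = (income - 10 * group + rng.normal(0, 5, n)) > 45

# Train WITHOUT the group label; the proxy still carries it.
X = np.column_stack([income, zip_code])
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: historical approval {approved[group == g].mean():.0%}, "
          f"model approval {pred[group == g].mean():.0%}")
```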

Webappia (@webappia)
2023-06-23

AI’s banking discrimination can have devastating consequences for all. 

Summary: Artificial intelligence (AI) algorithms used in banking and financial services face significant risks of discrimination and bias. AI systems can amplify existing human biases if not properly developed and trained. Biases in data and development teams can perpetuate cycles of…

webappia.com/ais-banking-discr

Miguel Afonso Caetano (@remixtures@tldr.nettime.org)
2023-04-24

#EU #Netherlands #Algorithms #AlgorithmicDiscrimination: "New, leaked documents obtained by Lighthouse Reports and NRC reveal that at the same time Hoekstra was promising change, officials were sounding the alarm over a secretive algorithm that ethnically profiles visa applicants. They show the agency’s own data protection officer — the person tasked with ensuring its use of data is legal — warning of potential ethnic discrimination. Despite these warnings, the ministry has continued to use the system.

Unknown to the public, the Ministry of Foreign Affairs has been using a profiling system to calculate the risk score of short-stay visa applicants applying to enter the Netherlands and Schengen area since 2015.

An investigation by Lighthouse and NRC reveals that the ministry’s algorithm, referred to internally as Informatie Ondersteund Beslissen (IOB), has profiled millions of visa applicants using variables like nationality, gender and age. Applicants scored as ‘high risk’ are automatically moved to an “intensive track” that can involve extensive investigation and delay."

lighthousereports.com/investig
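Mechanically, the system described is just a weighted score over fixed profile variables, with a threshold routing applicants into tracks. A hedged sketch; the weights, threshold and field names are invented, since the real IOB system is not public:

```python
# Hedged sketch of profile-based risk routing. Weights, threshold and
# field values are invented; the real IOB system is not public.
RISK_WEIGHTS = {"nationality": 0.5, "gender": 0.2, "age_band": 0.3}

def risk_score(applicant: dict) -> float:
    """Weighted sum of per-variable risk factors, each in [0, 1]."""
    return sum(RISK_WEIGHTS[k] * applicant[k] for k in RISK_WEIGHTS)

def route(applicant: dict, threshold: float = 0.6) -> str:
    # The concern: fixed traits alone decide who faces extra scrutiny.
    return "intensive track" if risk_score(applicant) >= threshold else "standard track"

print(route({"nationality": 0.9, "gender": 0.5, "age_band": 0.7}))  # -> intensive track
print(route({"nationality": 0.1, "gender": 0.5, "age_band": 0.2}))  # -> standard track
```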

sorelle (@sorelle)
2023-02-16

"when automated systems contribute to unjustified different treatment or impacts disfavoring people based on their actual or perceived race, color, ethnicity, sex (including based on pregnancy, childbirth, and related conditions; gender identity; intersex status; and sexual orientation) religion, age, national origin, limited English proficiency, disability, veteran status, genetic information, or any other classification protected by law."

whitehouse.gov/briefing-room/p

Miguel Afonso Caetano (@remixtures@tldr.nettime.org)
2023-01-24

#Algorithms #AlgorithmicDiscrimination #Labor #GigEconomy: "Recent technological developments related to the extraction and processing of data have given rise to widespread concerns about a reduction of privacy in the workplace. For a growing number of low-income and subordinated racial minority work forces in the United States, however, on-the-job data collection and algorithmic decision-making systems are having a much more profound yet overlooked impact: these technologies are fundamentally altering the experience of labor and undermining the possibility of economic stability and mobility through work. Drawing on a multi-year, first-of-its-kind ethnographic study of organizing on-demand workers, this Article examines the historical rupture in wage calculation, coordination, and distribution arising from the logic of informational capitalism: the use of granular data to produce unpredictable, variable, and personalized hourly pay. Rooted in worker on-the-job experiences, I construct a novel framework to understand the ascent of digitalized variable pay practices, or the transferal of price discrimination from the consumer to the labor context, what I identify as algorithmic wage discrimination.

Across firms, the opaque practices that constitute algorithmic wage discrimination raise central questions about the changing nature of work and its regulation under informational capitalism. Most centrally, what makes payment for labor in platform work fair? How does algorithmic wage discrimination change and affect the experience of work? And, considering these questions, how should the law intervene in this moment of rupture?"

papers.ssrn.com/sol3/papers.cf
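To make "personalized hourly pay" concrete: the same job can be priced per worker from behavioral data, for example by offering less to workers who historically accept everything. The formula below is entirely invented for illustration; no platform's actual pay logic is public:

```python
# Illustrative only: 'personalized' variable pay as a function of behavioral
# data. The formula is entirely invented; no platform's pay logic is public.
def personalized_offer(base_pay: float, past_acceptance_rate: float) -> float:
    # Workers who accept nearly every job can be offered less; reluctant
    # workers get a premium to induce acceptance: price discrimination
    # applied to labor rather than to consumers.
    multiplier = 1.25 - 0.5 * past_acceptance_rate  # ranges 0.75x to 1.25x
    return round(base_pay * multiplier, 2)

print(personalized_offer(10.0, 0.95))  # frequent accepter: 7.75
print(personalized_offer(10.0, 0.20))  # reluctant worker: 11.5
```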

Many of us are here b/c we could no longer tolerate the situation on #Twitter - but it's important to note that other social graph exploitation networks are still out there (#Facebook, #LinkedIn, #Reddit) and also need to be reined in.

To that end, today I instructed my lawyers to serve litigation against LinkedIn for defamation based on their shadow banning activities.

I will provide updates here, moving forward.

#SurveillanceCapitalism #ethics #ShadowBan #defamation #AlgorithmicDiscrimination

2022-11-24

Hi! #introduction

I'm a Ph.D. student in #artificialintelligence, based in northern Italy. My research focuses on #AlgorithmicDiscrimination, that is, when evil computers do evil things to humans.
Think of it like algorithms excluding minorities, more than mecha-Hitler destroying cities. (Yep, the name sounds better, but the content is pretty good too).

I'm interested in #sciencecommunication, especially on the #AI and #NLP sides. We'll see if I can make something good out of this profile 🙃

2018-09-03

forthcoming:

Clemens Apprich, Wendy Hui Kyong Chun, Florian Cramer, Hito Steyerl

Pattern Discrimination

University of Minnesota Press / Meson Press (Open Access)

Algorithmic identity politics reinstate old forms of social segregation — in a digital world, identity politics is pattern discrimination. It is by recognizing patterns in input data that Artificial Intelligence algorithms create bias and practice racial exclusions, thereby inscribing power relations into media. How can we filter information out of data without reinserting racist, sexist, and classist beliefs?

ISBN 978-1-51790-645-0

#mediastudies #newmedia #algorithms #bigdata #discrimination #algorithmicdiscrimination #analytics #ai #racism

2018-06-07

On June 20, an EU committee will vote on an apocalyptically stupid, internet-destroying copyright proposal that'll censor everything from Tinder profiles to Wikipedia (SHARE THIS!) boingboing.net/2018/06/07/than #algorithmicdiscrimination #freeexpression #surveillance #censorship #article11 #article13 #contentid #Copyfight #axelvoss #Post #gdpr #eu
