#AIDiscrimination

Miguel Afonso Caetano (remixtures@tldr.nettime.org)
2025-06-23

"The bill aims to prevent states from regulating emerging technologies that have genuine benefits, but that also are putting their constituents at risk in very real ways, including those with disabilities. Indeed, implementing AI tools in decision-making systems in high-stakes contexts like employment, education and healthcare presents a particularly high risk of harm for people with disabilities. These risks have been previously referred to as “tech-facilitated disability discrimination” — an umbrella term that encapsulates all of the ways that AI and other emerging technologies can cause people to experience discrimination on the basis of their disability. While existing disability rights statutes (like the Americans with Disabilities Act) do provide some means of legal recourse for this type of discrimination, state-level AI regulation remains a vital avenue for harm mitigation."

techpolicy.press/the-proposed-

#USA #AI #Trump #AIPolicy #AIDiscrimination #AIBias #Disability #DisabilityRights

Miguel Afonso Caetano (remixtures@tldr.nettime.org)
2025-05-20

"In an experiment involving 22 leading LLMs and 70 popular professions, each model was systematically given a job description along with a pair of profession-matched CVs (one including a male first name, and the other a female first name) and asked to select the more suitable candidate for the job. Each CV pair was presented twice, with names swapped to ensure that any observed preferences in candidate selection stemmed from gendered name cues. The total number of model decisions measured was 30,800 (22 models × 70 professions × 10 different job descriptions per profession × 2 presentations per CV pair). The following figure illustrates the essence of the experiment.

Despite identical professional qualifications across genders, all LLMs consistently favored female-named candidates when selecting the most qualified candidate for the job. Female candidates were selected in 56.9% of cases, compared to 43.1% for male candidates (two-proportion z-test = 33.99, p < 10⁻²⁵²). The observed effect size was small to medium (Cohen's h = 0.28; odds = 1.32, 95% CI [1.29, 1.35]). In the figures below, asterisks (*) indicate statistically significant results (p < 0.05) from two-proportion z-tests conducted on each individual model, with significance levels adjusted for multiple comparisons using the Benjamini-Hochberg False Discovery Rate correction."
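The quoted statistics can be re-derived from the percentages alone. A minimal sketch (the counts are reconstructed from the rounded 56.9% figure, so the z value comes out slightly different from the article's 33.99):

```python
import math

# Counts reconstructed from the article's rounded percentages (an approximation).
total = 30_800                     # 22 models x 70 professions x 10 descriptions x 2 orders
female = round(0.569 * total)      # decisions selecting the female-named candidate
male = total - female              # decisions selecting the male-named candidate

p_f, p_m = female / total, male / total

# Two-proportion z-test against the null of no gender preference.
# The pooled proportion is 0.5 by construction, since every decision picks one of the two.
pooled = (female + male) / (2 * total)
se = math.sqrt(pooled * (1 - pooled) * (2 / total))
z = (p_f - p_m) / se

# Cohen's h: difference of arcsine-transformed proportions.
h = 2 * math.asin(math.sqrt(p_f)) - 2 * math.asin(math.sqrt(p_m))

# Odds of the female-named candidate being selected.
odds = p_f / p_m

print(f"z = {z:.2f}, Cohen's h = {h:.2f}, odds = {odds:.2f}")
```

This reproduces the reported effect size (h ≈ 0.28) and odds (≈ 1.32) exactly, confirming the article's arithmetic is internally consistent.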

davidrozado.substack.com/p/the

#AI #GenerativeAI #LLMs #GenderBias #AIDiscrimination #AIBias #HR

Miguel Afonso Caetano (remixtures@tldr.nettime.org)
2025-03-22

"We, the undersigned researchers, affirm the scientific consensus that artificial intelligence (AI) can exacerbate bias and discrimination in society, and that governments need to enact appropriate guardrails and governance in order to identify and mitigate these harms. [1]

Over the past decade, thousands of scientific studies have shown how biased AI systems can violate civil and human rights, even if their users and creators are well-intentioned. [2] When AI systems perpetuate discrimination, their errors make our societies less just and fair. Researchers have observed this same pattern across many fields, including computer science, the social sciences, law, and the humanities. Yet while scientists agree on the common problem of bias in AI, the solutions to this problem are an area of ongoing research, innovation, and policy.

These facts have been a basis for bipartisan and global policymaking for nearly a decade. [3] We urge policymakers to continue to develop public policy that is rooted in and builds on this scientific consensus, rather than discarding the bipartisan and global progress made thus far."

aibiasconsensus.org/

#AI #AIBias #AIDiscrimination #Algorithms #ResponsibleAI

Miguel Afonso Caetano (remixtures@tldr.nettime.org)
2024-05-31

#AI #HR #USA #CivilRights #AIDiscrimination: "The American Civil Liberties Union alleged in a complaint to regulators that a large consulting firm is selling AI-powered hiring tools that discriminate against job candidates on the basis of disability and race, despite marketing these services to businesses as “bias free.”

Aon Consulting, Inc., a firm that works with Fortune 500 companies and sells a mix of applicant screening software, has made false or misleading claims that its tools are “fair,” free of bias and can “increase diversity,” the ACLU alleged in a complaint to the US Federal Trade Commission on Wednesday, a copy of which was reviewed by Bloomberg.

In its complaint, the ACLU said Aon’s algorithmically driven personality test, ADEPT-15, relies on questions that adversely impact autistic and neurodivergent people, as well as people with mental health disabilities. Aon also offers an AI-infused video interviewing system and a gamified cognitive assessment service that are likely to discriminate based on race and disability, according to the complaint.

The ACLU is calling on the FTC to open an investigation into Aon’s practices, issue an injunction and provide other necessary relief to affected parties."

bloomberg.com/news/articles/20

Miguel Afonso Caetano (remixtures@tldr.nettime.org)
2023-10-24

#UK #AI #AIBias #Algorithms #Fraud #PublicSector #AIDiscrimination: "The DWP has been using AI to help detect benefits fraud since 2021. The algorithm detects cases that are worthy of further investigation by a human and passes them on for review.

In response to a freedom of information request by the Guardian, the DWP said it could not reveal details of how the algorithm works in case it helps people game the system.

The department said the algorithm does not take nationality into account. But because these algorithms are self-learning, no one can know exactly how they do balance the data they receive.

The DWP said in its latest annual accounts that it monitored the system for signs of bias, but was limited in its capacity to do so where it had insufficient user data. The public spending watchdog has urged it to publish summaries of any internal equality assessments."

theguardian.com/technology/202

2020-05-04

I suffered a lot of #AIDiscrimination again today.

Two of my customers' accounts got blocked today by Google and Microsoft, because I am administering them and because I access them in a different pattern than normal users do.

Why are they blocked? Why does Google not even trust that I am valid when I confirm the account with a mail address that is not hosted at Google?

Can both corporations please at least publish what they expect as normal user behavior?

Must be #AIDiscrimination ...
