#algorithmicfairness

The Internet is Crack (@theinternetiscrack)
2025-07-13

Defining AI is a regulatory whac-a-mole.

Every time policymakers pin down what AI is, companies pivot to avoid scrutiny. Dr. Suresh Venkatasubramanian explains why this makes accountability so hard.

🔗 Listen here: youtu.be/GQiFnpK7Wyo

The Internet is Crack (@theinternetiscrack)
2025-07-05

Algorithmic "Fairness"—Or Just a New Kind of Bias?

2024-05-30

TODAY: Join CDT's Miranda Bogen for a PAI Partner Roundtable on Algorithmic Fairness & Demographic Data, where she will be joined by Eliza McCullough, Janet Haven, and Daniel Ho. Tune in LIVE at 12 ET. #AlgorithmicFairness #AI cdt.org/event/pai-partner-roun

I need some inspiration about getting out of corporate work and transitioning to non-bullshit research or nonprofits.

I'd like to see some examples touching on these topics (#AIethics #AIResearch #responsibleAI #ML #MLeval #AlgorithmicFairness, etc.)!

Does anyone know anything about
Goethe's Fellowship-programme AI & Ethics? Or do you know anyone who could give more info? 👀

goethe.de/aiethics

#aiethics #algorithmicfairness #aifairness

AI & Ethics

In an interdisciplinary, Europe-wide approach involving input talks, discussions, and practical workshops, the AI & Ethics Summer School delved in 2022 into the ethical issues of AI applications and provided practical tools for identifying and addressing these issues.

In autumn 2023, we are expanding our offering in the form of the Fellowship-programme AI & Ethics. You will find more information here soon.
Eike Petersen
2023-07-23

Importantly, standard solutions are strictly limited in what they can achieve in this regard: if the statistical relationship between inputs and outputs is simply more noisy in some group, no amount of "fair learning" can fix this!
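A minimal synthetic sketch of this point (illustrative, not from the paper): if labels are noisier in group B than in group A, even the Bayes-optimal classifier inherits the accuracy gap, so no fairness constraint on the learner can close it without degrading group A.

```python
# Synthetic illustration: group-dependent label noise bounds what any
# classifier -- "fair" or not -- can achieve for the noisier group.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)                       # single informative feature
y_clean = (x > 0).astype(int)                # noiseless target

# Group B gets 25% label noise, group A only 5%.
group = rng.integers(0, 2, size=n)           # 0 = A, 1 = B
flip_rate = np.where(group == 0, 0.05, 0.25)
flips = rng.random(n) < flip_rate
y = np.where(flips, 1 - y_clean, y_clean)

# The Bayes-optimal rule here is simply sign(x); nothing can beat it.
pred = (x > 0).astype(int)
for g, name in [(0, "A (5% noise)"), (1, "B (25% noise)")]:
    acc = (pred[group == g] == y[group == g]).mean()
    print(f"group {name}: accuracy = {acc:.3f}")
# Prints ~0.95 for A and ~0.75 for B: the gap equals the noise gap, and a
# fairness objective could only "close" it by making group A worse.
```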

In the paper (co-authored with Sune Holm, @melanieganzben1, and Aasa Feragen), we discuss many more concrete medical examples of the different sources of bias, and we propose some tentative solution approaches. 6/N

A couple of years ago, I wrote about the seeds of bad algorithm-assisted decision-making products as I was reading "Why We Sleep".

As a friend was reflecting on the book, I shared with her the views I wrote in this blog post 👇
onceupondata.com/post/how-do-h

#aiethics #aifairness #algorithmicfairness

How do such ideas and theories become seeds of bad data products?

I believe there are good and bad ideas everywhere, and everyone can share on any platform. But bad ideas are more dangerous when they come from scientists or figures with credibility, because they are easier to adopt. So looking into this as an example, I can see this path:

* A scientist who spent years studying a certain subject wrote a book to communicate his research to people, emphasizing everything that supported his narrative, even if it was a tiny-sample study or an edited chart.

* The book gained a lot of attention, and the scientist became a guest on many podcasts, conferences, Google talks, etc. He even collaborated with Google, and this gave him more credibility.

* Much of the audience took everything they heard from the scientist as scientific fact or wisdom and started to propagate it without filtering or rethinking some of the statements.

* With all the established credibility, whatever the scientist says carries a lot of weight for many, even if it is about something far from his area of expertise (e.g. product design, computer vision, algorithmic fairness, etc.).

* The ideas and theories present a potentially profitable business opportunity, and at a time when the hype about data, AI, and wearables is at its peak, someone will pick these ideas up and take them forward, mostly with no inherent bad intentions.

(From the section "And why such products grow and get worse?" in https://www.onceupondata.com/post/how-do-harmful-algorithms-evolve/)
rhgrouls (@rhgrouls@fosstodon.org)
2023-04-07

How to fix this? The consequentialist framework (CF) for algorithmic fairness foregrounds the results of decisions rather than properties of the prediction.

One starts by identifying the utility of different possible outcomes, e.g. efficiency and equity. Optimal decision policies can then be derived with linear programming that incorporates stakeholder preferences.

This approach has advantages over static experimental designs (e.g. randomized trials).
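A hedged sketch of that recipe (the group sizes, benefits, and weights below are invented for illustration, not taken from the paper): scipy's linprog derives the decision policy that maximizes stakeholder-weighted utility under a capacity constraint.

```python
# Toy consequentialist policy: choose what fraction of each group to treat
# so as to maximize efficiency plus an equity credit, subject to a budget.
from scipy.optimize import linprog

n_A, n_B = 800, 200               # group sizes (hypothetical)
benefit_A, benefit_B = 0.6, 0.4   # expected benefit per treated person
equity_weight = 2.0               # stakeholder preference: B's gains count extra
budget = 300                      # total treatment capacity

# Decision variables: p_A, p_B = fraction of each group treated.
# linprog minimizes, so negate the utility coefficients.
c = [-benefit_A * n_A,
     -(1 + equity_weight) * benefit_B * n_B]

A_ub = [[n_A, n_B]]   # capacity: total people treated <= budget
b_ub = [budget]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1), (0, 1)])
p_A, p_B = res.x
print(f"treat {p_A:.1%} of group A and {p_B:.1%} of group B")
# With these made-up preferences the LP fills group B first (12.5% of A,
# 100% of B); set equity_weight = 0 and the allocation flips toward the
# higher-efficiency group A -- the policy tracks stakeholder utilities.
```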

#EthicalAI #MonthOfArxiv #AlgorithmicFairness

2023-02-09

The latest turn in the #algorithmicfairness debate is "leveling up":

wired.com/story/bias-statistic

Striking:

"Technical solutions are often only a Band-aid to deal with a broken system. Improving access to health care, curating more diverse data sets, and developing tools that specifically target the problems faced by historically disadvantaged communities can help make substantive fairness a reality."

Not: no technical solution at all, but only within - may I say - a sociotechnical system.
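To make "levelling down" concrete, here is a toy sketch (illustrative numbers only, not from the article or the paper): an equal-true-positive-rate constraint can be satisfied by withholding correct positive predictions from the better-served group, which leaves the worse-served group no better off.

```python
# A diagnostic model's true-positive rate (TPR) differs across groups.
tpr = {"A": 0.90, "B": 0.70}

# Strict egalitarian "fix": randomly withhold positive predictions for
# group A until its TPR matches group B's.
keep_fraction = tpr["B"] / tpr["A"]   # keep ~78% of A's true positives
tpr_levelled = {"A": round(tpr["A"] * keep_fraction, 2), "B": tpr["B"]}

print(tpr_levelled)  # {'A': 0.7, 'B': 0.7}: equal rates, but group B gained
# nothing, and group A now misses 20 more true cases per 100 than before.
# Levelling *up* would instead require better data or models for group B.
```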

Ranjith Jaganathan (@JRanjith@neuromatch.social)
2023-02-09

Excerpts from the article:
The majority of algorithms developed to enforce "algorithmic fairness" were built without #policy and societal contexts in mind.

Our motivation for pursuing fairness is to improve the situation of a historically disadvantaged group.

When we build AI systems to make decisions about people's lives, our design decisions encode implicit value judgments about what should be prioritized.

Technical solutions are often only a Band-aid to deal with a broken system. Improving access to #HealthCare, curating more diverse data sets, and developing tools that specifically target the problems faced by historically disadvantaged communities can help make substantive fairness a reality.

#AI systems make life-changing decisions. Choices about how they should be fair, and to whom, are too important to treat #fairness as a simple mathematical problem to be solved.

#AlgorithmicFairness #MedicalSystem #AIEthics #FairML #ArtificialIntelligence

Article:
HealthCare #Bias Is Dangerous. But So Are 'Fairness' #Algorithms

www-wired-com.cdn.ampproject.o

Paper:
The Unfairness of Fair #MachineLearning: Levelling down and strict egalitarianism by default

papers.ssrn.com/sol3/papers.cf

2023-02-08

#AIEthics #MachineLearning #ArtificialIntelligence #AlgorithmicFairness #Operationalization

---

"Operationalization"

It's not an easy word to say. Somehow I always end up putting an extra "z" in there. My friends find that quite amusing, though probably not as amusing as hearing me say "nuclear" in my native Midwestern.

2022-11-04

@TimnitGebru Hi there, this is very encouraging news! I've been looking for a network that specifically talks about these topics and the like on Mastodon, as no doubt have others.

One of the big issues with decentralised social networks like this (that don't use a blockchain) is trust. DMs are not private, so it's super important that the administrator is trusted, because they have access to everything; nothing is private to them.

sorelle (@sorelle)
2022-10-31


Hi folks! I'm a computer scientist who cares about equity and where tech meets society.

I currently do tech policy at the White House Office of Science and Technology Policy (whitehouse.gov/ostp/ai-bill-of)

I'm also a professor at Haverford College, co-founder of the FAccT Conference, and a former Data & Society fellow.

Always happy to talk tech & society.

Suresh Venkatasubramanian (@geomblog@scholar.social)
2022-10-30

#introduction well more like a re-introduction. I'm a prof at #BrownUniversity in #computerScience and #DataScience. I've been working on #algorithmicFairness for a while now and helped found the FAccT conference. Most recently I spent time at the White House helping write the #AIBillOfRights. At Brown I'm starting a new Center on Tech Responsibility.

angela zhou (@angelamczhou)
2022-10-29


I'm Angela Zhou, a new assistant professor at USC Marshall.

I work on data-driven decision-making under uncertainty:
enriching a point of view between machine learning and optimization.

I'm also interested in substantive equity. My technical research takes a pragmatic stance on this, i.e. challenges for disparity assessment and substantive work in criminal justice reform.
