#AlgorithmAversion

Nick Byrd, Ph.D. (@ByrdNick@nerdculture.de)
2025-06-02

Withdrawing life support was judged more harshly when medical #AI made the decision or recommendation than when human clinicians made it.

When patients were conscious or AI seemed more competent, the #algorithmAversion #bias faded.

doi.org/10.1016/j.cognition.20

#medicine #bioethics

Figure captions from the paper:

The human-robot moral judgment asymmetry effect was present in Studies 1a and 1b: when a robot makes the decision to turn off life support, its decision is approved of less than an otherwise identical human decision. Jittered data points represent individual observations; larger blue, red, and black points are group-wise means; error bars are 95 % CIs.

There is a clear linear trend from AI-AI teaming to Human-Human teaming in moral approval of a passive euthanasia decision (2a: B = 0.59, 95 % CI: [0.39, 0.80], F(1, 282) = 32.27, p < .001; 2b: B = 0.75, 95 % CI: [0.51, 0.98], F(1, 403) = 38.55, p < .001). In Study 2b, there is a clear drop between the condition in which the recommender and the decision-maker are both people and the other two conditions. Jittered gray data points are individual observations; larger blue points are group-wise means; error bars are 95 % CIs.

The Withdrawing condition is the only one without a statistically significant difference between conditions; this case is the most analogous to our previous vignettes, except that the patient is conscious. In all other cases, the differences between the human and robot doctor are statistically significant. Jittered data points represent individual observations; larger blue, red, and black points are group-wise means; error bars are 95 % CIs.

Once competence is taken into consideration, the asymmetry effect shifts to keeping the life-support system on in the high-competence conditions. Jittered data points represent individual observations; larger blue, red, and black points are group-wise means; error bars are 95 % CIs.
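The linear-trend statistic reported above (a slope B with a 95 % CI and an F-test across teaming conditions) can be sketched as a simple OLS regression of moral approval on a numeric condition code. This is a minimal illustration on simulated data, not the authors' analysis; the condition coding, effect size, and sample sizes below are assumptions for demonstration only.

```python
import numpy as np
from scipy import stats

# Simulated data (NOT the study's): approval rises linearly with how many
# humans are on the clinical team (0 = AI-AI, 1 = mixed, 2 = Human-Human).
rng = np.random.default_rng(0)
condition = np.repeat([0, 1, 2], 100)                    # teaming condition code
approval = 3.0 + 0.6 * condition + rng.normal(0, 1.2, condition.size)

# Simple linear regression: slope B estimates the trend per added human.
res = stats.linregress(condition, approval)

# 95% confidence interval for the slope from its standard error.
t_crit = stats.t.ppf(0.975, df=condition.size - 2)
ci = (res.slope - t_crit * res.stderr, res.slope + t_crit * res.stderr)

print(f"B = {res.slope:.2f}, 95% CI: [{ci[0]:.2f}, {ci[1]:.2f}], p = {res.pvalue:.3g}")
```

With one predictor, the F-statistic the paper reports is simply the square of the slope's t-statistic, so the two tests give the same p-value.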
Nick Byrd, Ph.D. (@ByrdNick@nerdculture.de)
2025-05-07

#AlgorithmAversion is a tendency to judge errors in automated decisions more harshly than errors in human decisions.

Telling people a decision is typically made by machines eliminated or even reversed the #bias.

🔓 doi.org/10.1017/jdm.2025.8

#AI #cogSci #xPhi #business #edu #tech

Methods and initial result (pages 7 and 8). Other results from Study 1 (pages 9 and 10). Results from Study 2 (pages 14 and 16). One more plot from Study 2 and the beginning of the discussion section (pages 17 and 18).
Flo Keppeler (@flokeppeler)
2023-05-21

😱 Disclosure: The study shows that disclosing the use of the AI application leads to significantly less interest in an offer among job candidates (compared to no information).
⚙ Deployment: Results indicate that the person–job fit determined by the leaders can be predicted by the AI application. However, both assessments (from the human and the AI applications) may have different forms of gender biases. More research needed. (2/2)
