#AlgorithmAversion

Latest posts tagged with #AlgorithmAversion on Bluesky

It's the final day of the 2025 Human & Artificial Rationalities conference.

First, @oriplonsky.bsky.social shared experiments finding that people preferred advice that aligned with their own biases, even when the advice came from an algorithm, contrary to #algorithmAversion.

bsky.app/profile/byrd...

As the status quo shifts, we’re becoming more forgiving when algorithms mess up | The-14
As AI becomes the norm, people are judging its mistakes less harshly, showing that our tolerance shifts when new technologies replace the old status quo.

#Tech #AI #ArtificialIntelligence #AlgorithmAversion #AIEthics #TechnologyShift #HumanVsMachine #FutureOfAI #AlgorithmBias #TechTrends #MachineLearning #CognitivePsychology
the-14.com/as-the-statu...

The human-robot moral judgment asymmetry effect was present in Studies 1a and 1b: when a robot makes the decision to turn off life support, its decision is less appreciated than an otherwise identical human decision. Jittered data points represent individual observations; larger blue, red, and black points are group-wise means. Error bars are 95% CIs.

There is a clear linear trend from AI-AI teaming to Human-Human teaming in moral approval of a passive euthanasia decision (2a: B = 0.59, 95% CI: [0.39, 0.80], F(1,282) = 32.27, p < .001; 2b: B = 0.75, 95% CI: [0.51, 0.98], F(1,403) = 38.55, p < .001). In Study 2b, there is a clear drop in the condition where the recommender and the decision-maker are both people, compared to the other two conditions. Jittered gray data points are individual observations; larger blue points are group-wise means. Error bars are 95% CIs.
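
For readers skimming the statistics: B is the slope of a linear-trend contrast across the three teaming conditions. A minimal sketch of the standard model this implies, assuming a 0/1/2 coding of the conditions (the coding and scale are my assumption, not necessarily the paper's exact specification):

```latex
\[
  \text{approval}_i \;=\; \beta_0 + B\,x_i + \varepsilon_i,
  \qquad x_i \in \{0,\,1,\,2\}
\]
% x_i codes the pairing from AI-AI (0) through mixed (1) to Human-Human (2);
% read B = 0.59 as moral approval rising by about 0.59 scale points per
% step toward all-human teaming.
```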

The Withdrawing condition is the only one without a statistically significant difference between conditions; this case is the most analogous to our previous vignettes, with the exception that the patient is conscious. In all other cases, the differences between the human and robot doctor are statistically significant. Jittered data points represent individual observations; larger blue, red, and black points are group-wise means. Error bars are 95% CIs.

Once Competence is taken into consideration, the asymmetry effect shifts to keeping the life-support system on in the high-competence conditions. Jittered data points represent individual observations; larger blue, red, and black points are group-wise means. Error bars are 95% CIs.

Withdrawing life support was judged more harshly when medical #AI made the decision or recommendation than when human clinicians made it.

When patients were conscious or AI seemed more competent, the #algorithmAversion #bias faded.

doi.org/10.1016/j.co...

#medicine #bioethics

Methods and initial result (pages 7 and 8).

Other results from Study 1 (pages 9 and 10).

Results from Study 2 (pages 14 and 16).

One more plot from Study 2 and the beginning of the discussion section (pages 17 and 18).

#AlgorithmAversion is a tendency to judge errors in automated decisions more harshly than errors in human decisions.

Telling people a decision is typically made by machines eliminated or even reversed the #bias.

🔓 doi.org/10.1017/jdm....

#AI #cogSci #xPhi #business #edu #tech

The presentation abstract.

The setup of experiment 3.

Results of experiment 3: "Biased humans like biased advice" (not human advice per se).

Summary (pasted from slide):

• Experts who gain expertise through experience often give biased advice: they both choose, and recommend that others choose, options that are better most of the time
• People prefer advisors who recommend options that are better most of the time
• Biased human advisors are preferred over unbiased algorithmic advisors
• But it is simple to design even more strongly biased algorithmic advisors that accommodate human biases and are liked more than human advisors (see the sketch below)
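
A minimal sketch of the expected-value tension behind those bullets, with invented payoffs (the options, probabilities, and advisor rules below are illustrative assumptions, not the experiments from the talk): an option can pay off better on most trials while still having the lower expected value, so an advisor that learns from small samples of experience often ends up recommending it, whereas an EV-maximizing advisor does not.

```python
import random

# Hypothetical payoffs (illustrative only): "safe" always pays 1;
# "risky" pays 2 on 90% of trials but -10 on the rare 10%, so it is
# better most of the time yet has the lower expected value:
# EV(risky) = 0.9*2 + 0.1*(-10) = 0.8 < EV(safe) = 1.0.
def draw(option):
    if option == "safe":
        return 1.0
    return 2.0 if random.random() < 0.9 else -10.0

def ev_advisor():
    """Unbiased advisor: always recommends the higher-EV option."""
    return "safe"

def experience_advisor(n_samples=5):
    """Experience-based advisor: tries each option a few times and
    recommends whichever looked better in its small sample, which
    often favors the option that wins most of the time."""
    safe_total = sum(draw("safe") for _ in range(n_samples))
    risky_total = sum(draw("risky") for _ in range(n_samples))
    return "risky" if risky_total > safe_total else "safe"

random.seed(0)
recs = [experience_advisor() for _ in range(10_000)]
print("EV advisor recommends:", ev_advisor())
print("Experience-based advisor picks 'risky':", recs.count("risky") / len(recs))
```

Under these made-up payoffs, the experience-based advisor recommends the most-of-the-time-better option on roughly 0.9^5 ≈ 59% of runs (it picks "risky" whenever its five samples happen to miss the rare loss), mirroring the slide's point that experience-trained advisors drift toward "better most of the time" advice.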

Ori Plonsky et al. found people liked biased advice in #expectedValue experiments.

Contrary to #algorithmAversion, people liked advice that aligned with their #biases (even if it came from an algorithm).

To find out when these results are published, follow Dr. Plonsky: scholar.google.com/citations?hl...
