The human–robot moral judgment asymmetry effect was present in Studies 1a and 1b: when a robot makes the decision to turn off life support, its decision receives less moral approval than an otherwise identical human decision. Jittered data points represent individual observations; larger blue, red, and black points are group-wise means. Error bars are 95 % CIs.
There is a clear linear trend from AI–AI teaming to Human–Human teaming in moral approval of a passive euthanasia decision (2a: B = 0.59, 95 % CI: [0.39, 0.80], F(1,282) = 32.27, p < .001; 2b: B = 0.75, 95 % CI: [0.51, 0.98], F(1,403) = 38.55, p < .001). In Study 2b, there is a clear drop in approval in the condition where both the recommender and the decision-maker are people, relative to the other two conditions. Jittered gray data points are individual observations; larger blue points are group-wise means. Error bars are 95 % CIs.
The Withdrawing condition is the only one without a statistically significant difference between the human and robot doctor – this case is the most analogous to our previous vignettes, except that the patient is conscious. In all other cases, the differences between the human and robot doctor are statistically significant. Jittered data points represent individual observations; larger blue, red, and black points are group-wise means. Error bars are 95 % CIs.
Once competence is taken into consideration, the asymmetry effect shifts: in the high-competence conditions it appears for the decision to keep the life-support system on. Jittered data points represent individual observations; larger blue, red, and black points are group-wise means. Error bars are 95 % CIs.
Withdrawing life support was judged more harshly when medical #AI made the decision or recommendation than when human clinicians made it.
When patients were conscious or AI seemed more competent, the #algorithmAversion #bias faded.
doi.org/10.1016/j.co...
#medicine #bioethics