Algorithmic governance: Developing a research agenda through the power of collective intelligence
journals.sagepub.com/doi/full/10....
**Article information**
Title: Boosting human competences with interpretable and explainable artificial intelligence.
Full citation: Herzog, S. M., & Franklin, M. (2024). Boosting human competences with interpretable and explainable artificial intelligence. Decision, 11(4), 493–510. https://doi.org/10.1037/dec0000250
Abstract: Artificial intelligence (AI) is becoming integral to many areas of life, yet many—if not most—AI systems are opaque black boxes. This lack of transparency is a major source of concern, especially in high-stakes settings (e.g., medicine or criminal justice). The field of explainable AI (XAI) addresses this issue by explaining the decisions of opaque AI systems. However, such post hoc explanations are troubling because they cannot be faithful to what the original model computes—otherwise, there would be no need to use that black box model. A promising alternative is simple, inherently interpretable models (e.g., simple decision trees), which can match the performance of opaque AI systems. Because interpretable models represent—by design—faithful explanations of themselves, they empower informed decisions about whether to trust them. We connect research on XAI and inherently interpretable AI with that on behavioral science and boosts for competences. This perspective suggests that both interpretable AI and XAI could boost people’s competences to critically evaluate AI systems and their ability to make accurate judgments (e.g., medical diagnoses) in the absence of any AI support. Furthermore, we propose how to empirically assess whether and how AI support fosters such competences. Our theoretical analysis suggests that interpretable AI models are particularly promising and—because of XAI’s drawbacks—preferable. Finally, we argue that explaining large language models (LLMs) faces similar challenges as XAI for supervised machine learning and that the gist of our conjectures also holds for LLMs.
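A minimal sketch of the contrast the abstract draws, assuming scikit-learn and using its bundled breast-cancer dataset purely for illustration (the dataset, depth limit, and split are my choices, not from the paper): a depth-limited decision tree's fitted rules *are* the model, so printing them yields a faithful explanation by construction, with no post hoc surrogate needed.

```python
# Sketch: an inherently interpretable model explains itself.
# Assumes scikit-learn; dataset and hyperparameters are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# Depth-limited tree: simple enough to read in full.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(f"Test accuracy: {tree.score(X_test, y_test):.2f}")

# The printed rules are exactly what the model computes --
# a faithful explanation of itself, unlike a post hoc XAI
# approximation of a black box.
print(export_text(tree, feature_names=list(data.feature_names)))
```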
🌟🤖📝 **Boosting human competences with interpretable and explainable artificial intelligence**
How can AI *boost* human decision-making instead of replacing it? We explore this question in our new paper.
doi.org/10.1037/dec0...
#AI #XAI #InterpretableAI #IAI #boosting #competences
🧵👇
Love this write-up by Dan Gardner.
dgardner.substack.com/p/the-diseas...
The interesting bit for me is that, when participants were asked to evaluate soon after opening, perceived value remained higher the next day for the high-value-sooner boxes. I think this is applicable to product development, in terms of how quickly we ask people for feedback.
While genAI is interesting at the consumer level, deep learning applications like this are where AI really shines.
Bureaucracy is so fascinating. Since groups always seem to drift toward it like they're pursuing equilibrium, it must provide a benefit. But what?
It is exceedingly difficult to convince large orgs that revolutionary innovation is necessary. There is so much internal anxiety around cannibalization and bureaucracy that nothing gets done. Very interesting to hear that Kodak invented the digital camera!