
Devvin

@dearnest

Innovation. Product Development. Systems.

17 Followers · 305 Following · 5 Posts · Joined 04.11.2024

Latest posts by Devvin @dearnest

Algorithmic governance: Developing a research agenda through the power of collective intelligence - John Danaher, Michael J Hogan, Chris Noone, Rónán Kennedy, Anthony Behan, Aisling De Paor, Heike Fel... We are living in an algorithmic age where mathematics and computer science are coming together in powerful new ways to influence, shape and guide our behaviour ...

Algorithmic governance: Developing a research agenda through the power of collective intelligence

journals.sagepub.com/doi/full/10....

24.11.2024 22:22 👍 3 🔁 3 💬 0 📌 0
Article information

Title: Boosting human competences with interpretable and explainable artificial intelligence.

Full citation: Herzog, S. M., & Franklin, M. (2024). Boosting human competences with interpretable and explainable artificial intelligence. Decision, 11(4), 493–510. https://doi.org/10.1037/dec0000250

Abstract: Artificial intelligence (AI) is becoming integral to many areas of life, yet many—if not most—AI systems are opaque black boxes. This lack of transparency is a major source of concern, especially in high-stakes settings (e.g., medicine or criminal justice). The field of explainable AI (XAI) addresses this issue by explaining the decisions of opaque AI systems. However, such post hoc explanations are troubling because they cannot be faithful to what the original model computes—otherwise, there would be no need to use that black box model. A promising alternative is simple, inherently interpretable models (e.g., simple decision trees), which can match the performance of opaque AI systems. Because interpretable models represent—by design—faithful explanations of themselves, they empower informed decisions about whether to trust them. We connect research on XAI and inherently interpretable AI with that on behavioral science and boosts for competences. This perspective suggests that both interpretable AI and XAI could boost people’s competences to critically evaluate AI systems and their ability to make accurate judgments (e.g., medical diagnoses) in the absence of any AI support. Furthermore, we propose how to empirically assess whether and how AI support fosters such competences. Our theoretical analysis suggests that interpretable AI models are particularly promising and—because of XAI’s drawbacks—preferable. Finally, we argue that explaining large language models (LLMs) faces similar challenges as XAI for supervised machine learning and that the gist of our conjectures also holds for LLMs.
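The abstract's core claim is that an inherently interpretable model is a faithful explanation of itself: every prediction traces back to rules a person can read. A minimal sketch of that idea, using a hypothetical two-rule triage classifier (the feature names and thresholds are illustrative, not taken from the paper):

```python
# A hand-written, two-rule decision tree for a hypothetical medical triage
# task. Unlike an opaque model plus a post hoc explanation, the rules below
# ARE the model, so the "explanation" is faithful by construction.

def triage(age: int, systolic_bp: int) -> str:
    """Classify a patient as 'high' or 'low' risk using two readable rules."""
    if systolic_bp < 90:   # rule 1: low blood pressure -> high risk
        return "high"
    if age >= 65:          # rule 2: older patient -> high risk
        return "high"
    return "low"

print(triage(70, 120))  # high (fires rule 2)
print(triage(40, 120))  # low  (no rule fires)
```

A clinician can audit both rules directly and decide whether to trust them, which is the competence-boosting property the abstract attributes to interpretable models.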


🌟🤖📝 **Boosting human competences with interpretable and explainable artificial intelligence**

How can AI *boost* human decision-making instead of replacing it? We talk about this in our new paper.

doi.org/10.1037/dec0...

#AI #XAI #InterpretableAI #IAI #boosting #competences
🧵👇

20.11.2024 12:25 👍 73 🔁 23 💬 4 📌 3
The Disease of the Powerful: Intellectual arrogance cripples the powerful, but it is a danger to us all.

Love this write-up by Dan Gardner.

dgardner.substack.com/p/the-diseas...

19.11.2024 23:00 👍 0 🔁 0 💬 0 📌 0

The interesting bit for me is that, when participants were asked to evaluate soon after opening, perceptions of value remained higher the next day for the high-value-sooner boxes. I think this applies to product development, specifically to how quickly we ask people for feedback.

19.11.2024 08:03 👍 1 🔁 0 💬 1 📌 0

While genAI is interesting at the consumer level, deep learning applications like this are where AI really shines.

19.11.2024 06:57 👍 2 🔁 0 💬 0 📌 0

Bureaucracy is so fascinating. Since groups always seem to drift toward it, as if pursuing equilibrium, it must provide a benefit. But what?

18.11.2024 21:45 👍 0 🔁 0 💬 0 📌 0

It is exceedingly difficult to convince large orgs that revolutionary innovation is necessary. There is so much internal anxiety around cannibalization and bureaucracy that nothing gets done. Very interesting to hear that Kodak invented the digital camera!

17.11.2024 17:01 👍 1 🔁 0 💬 1 📌 0