Latest posts tagged with #InterpretableAI on Bluesky
Interpretable Thermodynamic Score-based Classification of Relaxation Excursions
Goyal, Y. et al.
Paper
Details
#Thermodynamics #MachineLearning #InterpretableAI
How can #AI make seizure detection more transparent? 🧠
Explore how a Variational Autoencoder helps interpret #SEEG data for #Epilepsy care — blending precision and explainability.
👉 Read more: www.neuroelectrics.com/blog/variati...
#SeizureDetection #DeepLearning #InterpretableAI #Neurotech
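As background for the architecture the post mentions, here is a minimal VAE sketch in PyTorch; the dimensions, the single-layer encoder/decoder, and the SEEG-window framing are illustrative assumptions, not Neuroelectrics' model:

```python
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    """Minimal VAE: encode the input to a latent distribution, sample via the
    reparameterization trick, decode back. Latents can then be inspected."""

    def __init__(self, d_in: int, d_latent: int):
        super().__init__()
        self.enc = nn.Linear(d_in, 2 * d_latent)  # outputs mean and log-variance
        self.dec = nn.Linear(d_latent, d_in)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(z), mu, logvar

vae = TinyVAE(d_in=128, d_latent=8)  # 128 = hypothetical SEEG window length
x = torch.randn(4, 128)
recon, mu, logvar = vae(x)
# ELBO: reconstruction error plus KL divergence to the standard normal prior.
kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1).mean()
loss = nn.functional.mse_loss(recon, x) + kl
print(loss.item())
```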
Sum-of-Parts Framework Boosts Interpretable Neural Networks
The Sum-of-Parts (SOP) framework turns any differentiable model into a self-attributing neural network that learns groups of features, achieving state-of-the-art results on vision and language benchmarks. Read more: getnews.me/sum-of-parts-framework-b... #interpretableai #sann
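As a rough sketch of the sum-of-parts idea (the module, the soft group masks, and the per-group scorers below are illustrative assumptions, not the authors' implementation): the prediction is a sum of per-group scores, so each learned feature group's contribution can be read off directly.

```python
import torch
import torch.nn as nn

class SumOfPartsSketch(nn.Module):
    """Toy self-attributing model: the prediction is a sum of per-group
    scores, so each group's contribution is directly readable."""

    def __init__(self, n_features: int, n_groups: int, n_classes: int):
        super().__init__()
        # Soft assignment of each feature to groups (softmax over groups).
        self.group_logits = nn.Parameter(torch.randn(n_groups, n_features))
        # One linear scorer per group; its output is that group's contribution.
        self.scorers = nn.ModuleList(
            [nn.Linear(n_features, n_classes) for _ in range(n_groups)]
        )

    def forward(self, x):
        masks = torch.softmax(self.group_logits, dim=0)  # (G, F)
        # Score each group's masked view of the input, then sum the parts.
        parts = torch.stack([s(x * m) for s, m in zip(self.scorers, masks)])
        return parts.sum(dim=0), parts  # logits, per-group attributions

model = SumOfPartsSketch(n_features=16, n_groups=4, n_classes=3)
logits, parts = model(torch.randn(8, 16))
print(logits.shape, parts.shape)  # torch.Size([8, 3]) torch.Size([4, 8, 3])
```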
Interpretable Basis Extraction for Visual AI Explanations
Scientists unveiled a technique that extracts a sparse basis from CNN feature spaces, improving interpretability without manual labels. Tests on ResNet and VGG matched probing performance. Read more: getnews.me/interpretable-basis-extr... #interpretableai #ai
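The paper's exact extraction procedure isn't reproduced here, but a minimal illustration of the general idea (a sparse basis fitted to feature activations without manual labels) can be sketched with scikit-learn's DictionaryLearning on stand-in activations:

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

# Stand-in for flattened CNN feature-map activations (samples x channels).
rng = np.random.default_rng(0)
activations = rng.standard_normal((500, 64))

# Learn a sparse basis: each activation is approximated by a sparse
# combination of basis vectors, which can then be inspected individually.
dl = DictionaryLearning(n_components=16, transform_algorithm="lasso_lars",
                        alpha=1.0, random_state=0)
codes = dl.fit_transform(activations)  # sparse codes, shape (500, 16)
basis = dl.components_                 # learned basis, shape (16, 64)
print(f"avg nonzeros per sample: {np.count_nonzero(codes, axis=1).mean():.1f}")
```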
Study Deciphers Vision Transformers via Residual Replacement
Researchers mapped 6.6K ViT features via sparse autoencoders and proposed a residual replacement model that swaps updates for interpretable linear combos. Read more: getnews.me/study-deciphers-vision-t... #visiontransformers #interpretableai
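For context, a minimal sparse autoencoder of the kind this line of work uses to decompose transformer activations; the dimensions and penalty weight are illustrative, not the paper's settings:

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Minimal sparse autoencoder: expand activations into many
    sparsely-firing features, then reconstruct the original vector."""

    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, x):
        f = torch.relu(self.encoder(x))  # sparse feature activations
        return self.decoder(f), f

sae = SparseAutoencoder(d_model=768, d_features=6600)  # ~6.6K features
x = torch.randn(32, 768)  # stand-in for ViT residual-stream states
recon, feats = sae(x)
# Reconstruction loss plus an L1 penalty that encourages sparse features.
loss = nn.functional.mse_loss(recon, x) + 1e-3 * feats.abs().mean()
print(loss.item())
```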
Neural Logic Networks Boost Interpretable AI Classification
Neural Logic Networks now support NOT gates and bias terms, enabling transparent IF-THEN rules for tabular data. The open-source code was released on 11 Aug 2025. getnews.me/neural-logic-networks-bo... #neurallogicnetworks #interpretableai #opensource
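A toy illustration of the ingredients the post names (fuzzy AND/OR gates extended with NOT and a bias term); this is not the paper's parameterization, for which see the released code:

```python
import numpy as np

def soft_not(x):
    # Differentiable NOT: maps a truth value in [0, 1] to its complement.
    return 1.0 - x

def soft_and(xs, bias=0.0):
    # Product t-norm AND, with a bias term added before clipping to [0, 1].
    return float(np.clip(np.prod(xs) + bias, 0.0, 1.0))

def soft_or(xs, bias=0.0):
    # De Morgan dual of soft_and.
    return float(np.clip(1.0 - np.prod(1.0 - np.asarray(xs)) + bias, 0.0, 1.0))

# Transparent rule: IF (x1 AND NOT x2) THEN fire, evaluated on fuzzy inputs.
x1, x2 = 0.9, 0.2
print(f"AND(x1, NOT x2) = {soft_and([x1, soft_not(x2)]):.2f}")  # 0.72
print(f"OR(x1, x2)      = {soft_or([x1, x2]):.2f}")             # 0.92
```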
Dive deeper into the details here: buff.ly/W0SKVbn
#MedicalAI #InterpretableAI #HealthcareTech #ActuarialScience #TrustworthyAI
'Deep Learning-Enabled Interpretable Down Syndrome Detection Model' - research sponsored by the King Salman Center for #DisabilityResearch - on #ScienceOpen:
🖇️ #DownSyndrome #AIinMedicine #InterpretableAI #MedicalDiagnostics
Paper: nature.com/articles/s4200…
Code: github.com/ohsu-cedar-com…
#InterpretableML #AIinCancer #SingleCell #MultiOmics #InterpretableAI #CancerGenomics #OpenScience #Bioinformatics #ComputationalBiology #MachineLearning
@commsbio.nature.com @ohsunews.bsky.social @ohsuknight.bsky.social
My book 'Mastering Modern Time Series Forecasting: The Complete Guide to Statistical, Machine Learning & Deep Learning Models in Python' -> valeman.gumroad.com/...
#timeseries #machinelearning #forecasting #shapelets #interpretableAI #predictivemaintenance #neuralnetworks
Multicenter Evaluation of Interpretable AI for Coronary Artery Disease Diagnosis from PET Biomarkers
Acampa, W., Barrett, L. et al.
Paper
Details
#InterpretableAI #CardiologyResearch #PETBiomarkers
3️⃣ Interpretability through explicit reasoning traces
🔎 Models produce detailed “thought processes” that explain why a continuation is stereotypical or not, enabling transparency and easier auditing of AI bias.
/5
#InterpretableAI #SocialBias
ICE-T is a new prompting method that boosts AI accuracy and transparency, outperforming zero-shot learning—especially in regulated, high-stakes fields.
#interpretableai
Explore ICE-T method limitations, future research directions, and reproducibility details for enhancing LLM binary classification accuracy and interpretability. #interpretableai
Explore the ICE-T method’s key questions used for patient assessment across drug abuse, alcohol use, medical decisions, and other clinical tasks. #interpretableai
ICE-T outperforms zero-shot methods, significantly boosting µF1 scores in GPT-3.5 and GPT-4 across diverse classification tasks and datasets. #interpretableai
LLMs generate yes/no questions to improve binary classification. We test classifiers and analyze µF1 performance using GPT-4 and GPT-3.5 outputs. #interpretableai
Explore 3 labeled NLP datasets for binary classification: medical advice, human rights violations, and unfair contract terms in online ToS. #interpretableai
Explore annotated datasets used for text classification across domains—medical records, climate reports, and political tweets on Catalan independence. #interpretableai
Learn how to prompt LLMs, convert their outputs into feature vectors, and train a classifier using verbalized responses for predictive tasks.
#interpretableai
Learn how the ICE-T system queries language models with yes/no questions, converting answers into feature vectors for classifier training. #interpretableai
LLMs struggle with interpretability, overconfidence, and flawed explanations—key hurdles to using them in high-stakes domains like medicine or science. #interpretableai
Explore advanced prompting and in-context learning strategies that improve the reasoning and performance of large language models during inference.
#interpretableai
ICE-T uses multiple LLM prompts combined with traditional classifiers to improve AI classification performance with high interpretability in medicine and law.
#interpretableai
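Piecing together the pipeline these posts describe, a minimal sketch: prompt an LLM with a fixed set of yes/no questions per document, turn the verbalized answers into a binary feature vector, and fit a small traditional classifier. The questions, the answer parsing, and the stubbed-in LLM answers below are hypothetical; ICE-T's real prompts and datasets are in the paper.

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical yes/no screening questions an LLM would be prompted with.
QUESTIONS = [
    "Does the text give medical advice?",
    "Does it recommend a dosage?",
    "Does it mention consulting a doctor?",
]

def answers_to_features(answers):
    """Map the LLM's verbalized yes/no answers to a binary feature vector."""
    return [1 if a.strip().lower().startswith("yes") else 0 for a in answers]

# Stand-in for LLM outputs on a few training documents (one row per doc).
llm_answers = [["Yes", "Yes", "No"], ["No", "No", "Yes"],
               ["Yes", "No", "No"], ["No", "No", "No"]]
labels = [1, 0, 1, 0]  # gold labels for the binary task

X = [answers_to_features(row) for row in llm_answers]
clf = DecisionTreeClassifier(max_depth=2).fit(X, labels)  # interpretable classifier
print(clf.predict([answers_to_features(["Yes", "No", "Yes"])]))
```

Because the classifier's inputs are literal answers to plain-language questions, its decision path reads as an IF-THEN rule over those questions, which is where the interpretability comes from.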
#KANs #iTFKAN #TimeSeriesForecasting #InterpretableAI #DeepLearning #Transformers #MachineLearning #Forecasting #MLInnovation #SOTA
6/ What about interpretable models? They are often more trustworthy, but many cannot be trained with federated learning. 🌲 FedCT doesn't discriminate against interpretable models. Works with decision trees, XGBoost, etc. Quality similar to centralized training. #InterpretableAI
How can we interpret what features LLMs use to perform a given task? 🤖💭 And how do we know if our interpretation is correct? 🤔🔬
Excited to be presenting 2 papers + oral on these questions in the #InterpretableAI workshop at #neurips2024 📢 -- come by our posters/talk to hear more!
12/ Let’s rethink the future of human-AI collaboration. 🤝
Herzog, S. M., & Franklin, M. (2024). Boosting human competences with interpretable and explainable artificial intelligence. Decision, 11(4), 493–510. doi.org/10.1037/dec0...
#AI #XAI #InterpretableAI #IAI #boosting #competences
Article information
Title: Boosting human competences with interpretable and explainable artificial intelligence.
Full citation: Herzog, S. M., & Franklin, M. (2024). Boosting human competences with interpretable and explainable artificial intelligence. Decision, 11(4), 493–510. https://doi.org/10.1037/dec0000250
Abstract: Artificial intelligence (AI) is becoming integral to many areas of life, yet many—if not most—AI systems are opaque black boxes. This lack of transparency is a major source of concern, especially in high-stakes settings (e.g., medicine or criminal justice). The field of explainable AI (XAI) addresses this issue by explaining the decisions of opaque AI systems. However, such post hoc explanations are troubling because they cannot be faithful to what the original model computes—otherwise, there would be no need to use that black box model. A promising alternative is simple, inherently interpretable models (e.g., simple decision trees), which can match the performance of opaque AI systems. Because interpretable models represent—by design—faithful explanations of themselves, they empower informed decisions about whether to trust them. We connect research on XAI and inherently interpretable AI with that on behavioral science and boosts for competences. This perspective suggests that both interpretable AI and XAI could boost people's competences to critically evaluate AI systems and their ability to make accurate judgments (e.g., medical diagnoses) in the absence of any AI support. Furthermore, we propose how to empirically assess whether and how AI support fosters such competences. Our theoretical analysis suggests that interpretable AI models are particularly promising and—because of XAI's drawbacks—preferable. Finally, we argue that explaining large language models (LLMs) faces similar challenges as XAI for supervised machine learning and that the gist of our conjectures also holds for LLMs.
🌟🤖📝 **Boosting human competences with interpretable and explainable artificial intelligence**
How can AI *boost* human decision-making instead of replacing it? We talk about this in our new paper.
doi.org/10.1037/dec0...
#AI #XAI #InterpretableAI #IAI #boosting #competences
🧵👇
The cloud giants can and should improve the transparency of their own AI foundation models (and/or of companies such as OpenAI, as major investors). It is not clear that they will do so without strong policy incentives. See Amazon; Google; Microsoft. Greater emphasis on the distribution of IT resources can help to address the digital divide, with due consideration for the risks of adverse digital inclusion. Currently, about two-thirds of the world's population is online, and one-third does not use the internet. Internet use is growing: some estimates suggest that around a billion more users will be added in the next five years. See The Cloud vs. On-Prem vs. Hybrid; The Cloud in Context.