ProbML 2026 (formerly AABI) invites submissions on probabilistic ML (both Bayesian and otherwise!), July 5 in Seoul (co-located with ICML). Website: probml.cc. Tracks: proceedings (PMLR), workshop, fast track. New focus includes applications in healthcare and climate!
Submit by: 20 March 2026.
10.02.2026 13:39
Performative Learning Theory
Performative predictions influence the very outcomes they aim to forecast. We study performative predictions that affect a sample (e.g., only existing users of an app) and/or the whole population (e.g...
Predictions can affect the thing they aim to predict. Think of self-fulfilling economic forecasts or self-negating routing (Google Maps, etc.). Can we learn from samples that reacted to our model? And what if the ground truth also reacts?
👇Answers👇
arxiv.org/abs/2602.04402
w/ @krikamol.bsky.social
06.02.2026 10:41
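The performative feedback loop described above can be simulated in a few lines. This is a generic toy sketch, not the paper's setup: here outcomes drift linearly toward the deployed prediction (the shift strength `EPS` and the linear model are invented for illustration), and repeated refitting converges to a performatively *stable* point rather than the static optimum.

```python
import numpy as np

rng = np.random.default_rng(0)

MU, EPS = 2.0, 0.5  # base mean and strength of the performative feedback


def sample_outcomes(theta, n=10_000):
    """Outcomes react to the deployed prediction theta:
    the population mean shifts by EPS * theta (self-fulfilling if EPS > 0)."""
    return rng.normal(MU + EPS * theta, 1.0, size=n)


# Repeated risk minimization: refit on data generated under the current model.
theta = 0.0
for _ in range(25):
    theta = sample_outcomes(theta).mean()  # squared-loss minimizer = sample mean

# The iterates approach the performatively stable point MU / (1 - EPS) = 4.0,
# not the "static" optimum MU = 2.0.
print(round(theta, 2))
```

Note the gap between 4.0 and 2.0: a learner that ignores the reaction of the data systematically misestimates the deployed-world outcome.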
🚨 Paper Alert: Empirical Decision Theory 🚨
We introduce "Empirical Decision Theory", a framework for decision making that is radically empirical: instead of positing an abstract state space, we work with observed act–consequence protocols only. Check it out!
https://arxiv.org/abs/2512.05677
16.12.2025 08:40
Reminder that this is today at 7 pm! Please join us if you are at #NeurIPS2025
03.12.2025 22:31
I'm at @neuripsconf.bsky.social in San Diego 🌴 Hit me up if you'd like to talk imprecise probabilities, learning theory, good old statistics, or simply anything Bayesian. #NeurIPS2025 #NeurIPS #NeurIPS25
03.12.2025 00:39
I had a great time visiting the @Stanford Trustworthy AI lab (stairlab.stanford.edu) last week. My talk on empirical AI alignment (lnkd.in/d_SxW4YP) and incentive-aware AI regulation sparked a lot of fruitful discussion 💡
Now off to #NeurIPS2025 in San Diego 🌴
01.12.2025 06:51
Something worrying me: many seem to change their research direction out of FOMO, reacting to (the obvious) recent trend: "If I don't do this, someone else will do it!"
One of the key perks we have in academia is the freedom to set our own agenda.* If someone else can, and WILL, do it, why would you?
29.11.2025 00:51
🏆 Honored to receive the IJAR Young Researcher Award
The award, sponsored by the International Journal of Approximate Reasoning (IJAR), recognizes researchers who demonstrate excellence at an early stage of their scientific careers.
isipta25.sipta.org/awards
24.07.2025 07:21
🗣️ Reviewer feedback on our paper:
"Practical relevance with a well-motivated human-in-the-loop use case."
"A pleasure to read... good figures, insightful explanations."
"Novel integration of Shapley values into Bayesian Optimization for interpretability."
10.06.2025 13:01
🦿 Exosuits support the lower back during physical labor, like pallet building.
📽️ See them in action:
10.06.2025 13:01
💡 Why does this matter?
Because it helps personalize wearable robotic exosuits more efficiently, by enabling human users to understand and intervene in the process.
10.06.2025 13:01
🧠 We open up the black box using Shapley values to explain why Bayesian Optimization chooses specific parameters.
10.06.2025 13:01
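As a rough illustration of the Shapley idea above (not the paper's actual method or code): exact Shapley values attribute an acquisition-like score to each input parameter, evaluating a coalition by taking the query point on the coalition's coordinates and a baseline elsewhere. The function `acq` and the 3-parameter example are invented for the sketch.

```python
from itertools import combinations
from math import factorial

import numpy as np


def shapley_values(f, x, baseline):
    """Exact Shapley values of f's output attributed to each coordinate of x.

    A coalition S is evaluated by taking x on S and the baseline elsewhere
    (a common 'interventional' convention). Exponential cost: toy sizes only.
    """
    d = len(x)

    def value(S):
        z = baseline.copy()
        z[list(S)] = x[list(S)]
        return f(z)

    phi = np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for k in range(d):
            for S in combinations(others, k):
                w = factorial(k) * factorial(d - k - 1) / factorial(d)
                phi[i] += w * (value(S + (i,)) - value(S))
    return phi


# Toy 'acquisition function' over 3 tunable parameters (purely illustrative).
acq = lambda z: z[0] ** 2 + 2 * z[1] + z[0] * z[2]
x = np.array([1.0, 2.0, 3.0])
base = np.zeros(3)

phi = shapley_values(acq, x, base)
# Efficiency property: the attributions sum to acq(x) - acq(baseline).
print(phi, phi.sum(), acq(x) - acq(base))
```

The additive term `2*z[1]` goes entirely to parameter 1, while the interaction `z[0]*z[2]` is split evenly between parameters 0 and 2, which is exactly the kind of "why this parameter?" decomposition described in the post.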
Bayesian Optimization is great for black-box optimization, but often a black box itself.
Why does it pick the parameters it does?
We wanted to find out.
10.06.2025 13:00
Glad to share this paper was accepted at @ecmlpkdd.org!
We show that interpreting BayesOpt helps personalize soft exosuits (see picture).
Great collaboration w/ Fede Croppi, @giuseppe88.bsky.social et al.
@lmumuenchen.bsky.social @munichcenterml.bsky.social
@harvard.edu @wyssinstitute.bsky.social
10.06.2025 12:59
Guaranteed confidence-band enclosures for PDE surrogates
We propose a method for obtaining statistically guaranteed confidence bands for functional machine learning techniques: surrogate models which map between function spaces, motivated by the need to build ...
Getting coverage guarantees for functional surrogate models? This is what A. Gray and V. Gopakumar (they did the heavy lifting) have done using conformal prediction and zonotopes over reduced dimensions.
It will be presented at UAI 2025 (@auai.org), but a preview is here: arxiv.org/abs/2501.18426.
07.05.2025 14:56
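The paper's zonotope construction is more refined, but the basic coverage idea can be conveyed with the simplest baseline: split conformal prediction with a sup-norm score yields a uniform band around a functional surrogate. Everything below (the toy surrogate, grid, and noise model) is invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy functional regression: input a scalar, output a function on a grid.
grid = np.linspace(0, 1, 50)


def truth(a):
    return np.sin(2 * np.pi * grid * a)


def surrogate(a):                      # deliberately imperfect surrogate
    return np.sin(2 * np.pi * grid * a) * 0.95


def observe(a):
    return truth(a) + rng.normal(0, 0.05, size=grid.size)


# Split conformal with a sup-norm score: one calibration score per function.
alpha = 0.1
a_cal = rng.uniform(0.5, 1.5, size=200)
scores = np.array([np.max(np.abs(observe(a) - surrogate(a))) for a in a_cal])
q = np.quantile(scores, np.ceil((1 - alpha) * (len(scores) + 1)) / len(scores))

# The band surrogate(a) +/- q encloses a fresh response everywhere,
# with probability >= 1 - alpha under exchangeability.
a_test = rng.uniform(0.5, 1.5, size=500)
covered = np.mean([np.all(np.abs(observe(a) - surrogate(a)) <= q) for a in a_test])
print(round(covered, 2))
```

Because the score is a supremum over the grid, the resulting guarantee is a *simultaneous* band, not pointwise coverage; the zonotope-over-reduced-dimensions approach in the paper aims at tighter sets than this crude constant-width band.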
SIPTA Seminar by Krikamol Muandet: Imprecise generalisation
YouTube video by Imprecise Probabilities Channel of SIPTA
If you are interested in recent advances in learning under imprecision with probability sets, check out the fantastic SIPTA virtual talk by @krikamol.bsky.social, now on YouTube: www.youtube.com/watch?v=gGPF...
26.04.2025 08:32
Helen Alber's DAGStat 2025 talk yesterday showed how LLMs streamline sentence classification and how to fix their mistakes.
💡 The solution? A two-step approach: LLMs pre-classify, experts refine, and Sim-SIMEX corrects the remaining errors, boosting efficiency and handling imbalanced data as well as MC-SIMEX does.
27.03.2025 11:48
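To make the SIMEX idea behind this concrete, here is a generic MC-SIMEX sketch for a misclassified proportion (not the Sim-SIMEX variant from the talk, and all parameter values are invented): deliberately add *extra* label noise at several levels, fit at each level, and extrapolate the trend back to the noise-free level.

```python
import numpy as np

rng = np.random.default_rng(2)

p_true, e = 0.30, 0.15          # true prevalence, symmetric misclassification rate
n = 200_000

y = rng.random(n) < p_true
y_obs = np.where(rng.random(n) < e, ~y, y)   # labels as seen after misclassification

# MC-SIMEX: add extra misclassification at levels lam, then extrapolate to lam = -1.
lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
est = []
for lam in lams:
    f = (1 - (1 - 2 * e) ** lam) / 2     # extra flip prob so total level is 1 + lam
    y_lam = np.where(rng.random(n) < f, ~y_obs, y_obs)
    est.append(y_lam.mean())

coef = np.polyfit(lams, est, deg=2)      # quadratic extrapolant in lam
p_simex = np.polyval(coef, -1.0)

# Naive estimate sits near 0.36; the extrapolated estimate moves back toward 0.30.
print(round(y_obs.mean(), 3), round(p_simex, 3))
```

The quadratic extrapolant only approximates the true (exponential) bias curve, so a small residual bias remains; that approximation error is exactly what refinements of SIMEX try to reduce.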
NeurIPS 2024 Poster: Statistical Multicriteria Benchmarking via the GSD-Front
If you're attending #NeurIPS2024, don't miss our spotlight poster on multicriteria benchmarking TODAY 11am in West Ballroom A-D
Talk: neurips.cc/virtual/2024...
Paper: openreview.net/pdf?id=jXxvS...
Where? West Ballroom A-D #6501
When? Thu 12 Dec 11 a.m. - 2 p.m. local time
#neurips #neurips24
12.12.2024 16:18
#MachineLearning is all about learning parameters from data, right? Well, not quite…
🤸‍♀️ Actually, it sometimes works the other way around.
🤔 Curious how?
👉 Check out our poster on reciprocal learning: neurips.cc/virtual/2024...
@neuripsconf.bsky.social #NeurIPS2024 #NeurIPS
12.12.2024 15:09
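One familiar instance of the "parameters shape the data" reversal from the reciprocal-learning poster is self-training. The minimal loop below (illustrative only, not the poster's setup) fits class means on labeled data, then lets those very parameters choose which unlabeled points, with which pseudo-labels, enter the next fit.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two 1-D classes; only a handful of labels, lots of unlabeled data.
x_lab = np.array([-2.2, -1.8, 1.9, 2.1])
y_lab = np.array([0, 0, 1, 1])
x_unl = np.concatenate([rng.normal(-2, 1, 300), rng.normal(2, 1, 300)])

x, y = x_lab.copy(), y_lab.copy()
for _ in range(10):
    mu0, mu1 = x[y == 0].mean(), x[y == 1].mean()   # fit on current data
    if x_unl.size == 0:
        break
    # Confidence = margin between distances to the two class means.
    conf = np.abs(np.abs(x_unl - mu0) - np.abs(x_unl - mu1))
    pick = np.argsort(conf)[-60:]                    # most confident points
    y_new = (np.abs(x_unl[pick] - mu1) < np.abs(x_unl[pick] - mu0)).astype(int)
    x = np.concatenate([x, x_unl[pick]])             # the data now depend on the fit
    y = np.concatenate([y, y_new])
    x_unl = np.delete(x_unl, pick)

print(round(mu0, 2), round(mu1, 2))
```

Each round, the training set is a function of the previous parameters, so "learning parameters from data" runs in both directions, which is the feedback loop reciprocal learning formalizes.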