
Julian Rodemann

@jurodemann

http://www.julian-rodemann.de | PhD student in statistics @LMU_Muenchen | currently @HarvardStats

87 Followers · 288 Following · 14 Posts · Joined 22.11.2024

Latest posts by Julian Rodemann @jurodemann


ProbML 2026 (formerly AABI) invites submissions on probabilistic ML (both Bayesian and otherwise!), July 5 in Seoul (co-located with ICML). Website: probml.cc. Tracks: proceedings (PMLR), workshop, fast track. New focus includes applications in healthcare and climate!
Submit by: 20 March 2026.

10.02.2026 13:39 👍 24 🔁 15 💬 1 📌 7
Performative Learning Theory Performative predictions influence the very outcomes they aim to forecast. We study performative predictions that affect a sample (e.g., only existing users of an app) and/or the whole population (e.g...

Predictions can affect the thing they aim to predict. Think of self-fulfilling economic forecasts or self-negating routing (Google Maps etc.). Can we learn from samples that reacted to our model? And what if the ground truth also reacts?
👇Answers👇
arxiv.org/abs/2602.04402
w/ @krikamol.bsky.social

06.02.2026 10:41 👍 1 🔁 1 💬 0 📌 0
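The feedback loop in the post above, a model retrained on data that already reacted to it, can be sketched as repeated retraining to a fixed point. This is an illustrative toy only, not the paper's setting: the shift strength `eps` and the Gaussian data model are assumptions.

```python
import numpy as np

# Toy performative setup: the deployed parameter theta shifts the population.
# eps (shift strength) and the Gaussian data model are assumptions for illustration.
def sample(theta, eps=0.3, n=100_000):
    rng = np.random.default_rng(0)          # fixed seed: deterministic toy
    return rng.normal(loc=1.0 + eps * theta, scale=1.0, size=n)

def retrain(samples):
    return samples.mean()                   # squared-loss minimizer

theta = 0.0
for _ in range(50):                         # repeated risk minimization
    theta = retrain(sample(theta))

# theta approaches the fixed point of theta = 1 + eps * theta,
# i.e. roughly 1 / (1 - eps) ≈ 1.43
print(round(theta, 2))
```

The point of the toy: the stable parameter differs from the naive, non-performative optimum (which would be 1.0), because the data keep reacting to whatever model is deployed.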

🚨 Paper Alert: Empirical Decision Theory 🚨

We introduce "Empirical Decision Theory", a framework for decision making that is radically empirical: Instead of positing an abstract state space, we work with observed act–consequence protocols only. Check it out!

👉 https://arxiv.org/abs/2512.05677 👈

16.12.2025 08:40 👍 0 🔁 0 💬 0 📌 0

Reminder that this is today at 7 pm! Please join us if you are at #NeurIPS2025

03.12.2025 22:31 👍 8 🔁 4 💬 0 📌 0

👋 I'm at @neuripsconf.bsky.social in San Diego 🌴 Hit me up if you'd like to talk imprecise probabilities, learning theory, good old statistics, or simply anything Bayesian 🤓 #NeurIPS2025 #NeurIPS #NeurIPS25

03.12.2025 00:39 👍 3 🔁 0 💬 0 📌 0

I had a great time visiting the @Stanford Trustworthy AI lab (stairlab.stanford.edu) last week. My talk on empirical AI alignment (lnkd.in/d_SxW4YP) and incentive-aware AI regulation sparked a lot of fruitful discussion 💡

Now off to #NeurIPS2025 in San Diego. 🌴

01.12.2025 06:51 👍 4 🔁 0 💬 0 📌 0

Something worrying me: many seem to change their research direction out of FOMO, reacting to (the obvious) recent trend: "If I don't do this, someone else will do it!"

One of the key perks we have in academia is the freedom to set our own agenda.* If someone else can—and WILL—do it, why would you?

29.11.2025 00:51 👍 48 🔁 6 💬 6 📌 0

🎉 Honored to receive the IJAR Young Researcher Award (🥈)

The award, sponsored by the International Journal of Approximate Reasoning (IJAR), recognizes researchers who demonstrate excellence at an early stage of their scientific careers.

isipta25.sipta.org/awards

24.07.2025 07:21 👍 2 🔁 1 💬 0 📌 0
Explaining Bayesian Optimization by Shapley Values Facilitates Human-AI Collaboration Bayesian optimization (BO) with Gaussian processes (GP) has become an indispensable algorithm for black box optimization problems. Not without a dash of irony, BO is often considered a black box itsel...

Read the paper here:
📄 arxiv.org/abs/2403.04629
#MachineLearning #BayesianOptimization #XAI #HumanInTheLoop #Exosuits #ShapleyValues

10.06.2025 13:01 👍 1 🔁 0 💬 0 📌 0

πŸ—£οΈ Reviewer feedback on our paper:

β€œPractical relevance with a well-motivated human-in-the-loop use case.”

β€œA pleasure to read... good figures, insightful explanations.”

β€œNovel integration of Shapley values into Bayesian Optimization for interpretability.”

10.06.2025 13:01 👍 2 🔁 0 💬 1 📌 0

🦿 Exosuits support the lower back during physical labor — like pallet building.
📽️ See them in action:

10.06.2025 13:01 👍 1 🔁 0 💬 1 📌 0

💡 Why does this matter?

Because it helps personalize wearable robotic exosuits more efficiently — by enabling human users to understand and intervene in the process.

10.06.2025 13:01 👍 0 🔁 0 💬 1 📌 0

🧠 We open up the black box using Shapley values — to explain why Bayesian Optimization chooses specific parameters.

10.06.2025 13:01 👍 0 🔁 0 💬 1 📌 0

Bayesian Optimization is great for black box optimization — but often a black box itself.

🔍 Why does it pick the parameters it does?

We wanted to find out. 👇

10.06.2025 13:00 👍 0 🔁 0 💬 1 📌 0
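The idea in the thread above, attributing a function's value to individual input dimensions via Shapley values, can be sketched by exact enumeration over coalitions (feasible for the handful of dimensions typical in BO). This is a hypothetical toy, not the paper's method: the additive `acq` function and the candidate/baseline convention are assumptions.

```python
import itertools
import math

# Exact Shapley attribution of a function's value over input dimensions.
# "Present" dimensions take the candidate's coordinate, "absent" ones the
# baseline's; marginal contributions are averaged over all coalitions.
def shapley(f, x, baseline):
    d = len(x)
    phi = [0.0] * d
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for k in range(d):
            for S in itertools.combinations(others, k):
                w = math.factorial(k) * math.factorial(d - k - 1) / math.factorial(d)
                with_i = [x[j] if j in S or j == i else baseline[j] for j in range(d)]
                without = [x[j] if j in S else baseline[j] for j in range(d)]
                phi[i] += w * (f(with_i) - f(without))
    return phi

# Additive toy "acquisition": Shapley values recover each term's contribution.
def acq(z):
    return 2 * z[0] + 0.5 * z[1]

print(shapley(acq, x=[1.0, 1.0], baseline=[0.0, 0.0]))  # [2.0, 0.5]
```

For a real acquisition function the attributions are no longer simply the coefficients, but the same enumeration applies; the exponential cost in the number of dimensions is why sampling approximations are used in higher dimensions.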

Glad to share this paper was accepted at @ecmlpkdd.org!

We show that interpreting BayesOpt helps personalize soft exosuits (see picture).

Great collaboration w/ Fede Croppi, @giuseppe88.bsky.social et al.
@lmumuenchen.bsky.social @munichcenterml.bsky.social
@harvard.edu @wyssinstitute.bsky.social

10.06.2025 12:59 👍 0 🔁 1 💬 1 📌 0
Guaranteed confidence-band enclosures for PDE surrogates We propose a method for obtaining statistically guaranteed confidence bands for functional machine learning techniques: surrogate models which map between function spaces, motivated by the need to build ...

Getting coverage guarantees for functional surrogate models? This is what A. Gray and V. Gopakumar (they did the heavy lifting) have done using conformal prediction and zonotopes over reduced dimensions.

It will be presented at UAI 2025 (@auai.org), but a preview is here: arxiv.org/abs/2501.18426.

07.05.2025 14:56 👍 4 🔁 2 💬 0 📌 0
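The coverage idea behind the post above can be sketched with plain split conformal prediction. This is a pointwise toy only; the paper's zonotope construction over reduced dimensions for whole-function bands is not reproduced here, and the sine surrogate and noise level are assumptions.

```python
import numpy as np

# Split-conformal sketch: calibrate a band half-width q so that pointwise
# coverage is ~1 - alpha. Surrogate and ground-truth model are toy assumptions.
rng = np.random.default_rng(1)

def surrogate(x):
    return np.sin(x)                                  # stand-in surrogate model

def truth(x):
    return np.sin(x) + rng.normal(0, 0.1, x.shape)    # noisy ground truth

x_cal = rng.uniform(0, 6, 500)                        # calibration inputs
scores = np.abs(truth(x_cal) - surrogate(x_cal))      # nonconformity scores
alpha = 0.1
level = np.ceil((1 - alpha) * (len(scores) + 1)) / len(scores)
q = np.quantile(scores, level)                        # calibrated half-width

x_test = rng.uniform(0, 6, 1000)
covered = np.abs(truth(x_test) - surrogate(x_test)) <= q
print(f"empirical coverage: {covered.mean():.2f}")    # close to 1 - alpha
```

The guarantee is distribution-free: it relies only on exchangeability of calibration and test points, not on the surrogate being a good model.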
SIPTA Seminar by Krikamol Muandet: Imprecise generalisation (YouTube video by the Imprecise Probabilities Channel of SIPTA)

If you are interested in recent advances in learning under imprecision with probability sets, check out the fantastic SIPTA virtual talk by @krikamol.bsky.social, now on YouTube: www.youtube.com/watch?v=gGPF...

26.04.2025 08:32 👍 3 🔁 2 💬 0 📌 0

Helen Alber's DAGStat 2025 talk yesterday showed how LLMs streamline sentence classification and how to fix their mistakes.

💡 The solution? A two-step approach: LLMs pre-classify, experts refine, and Sim-SIMEX corrects errors, boosting efficiency and handling imbalanced data as well as MC-SIMEX does.

27.03.2025 11:48 👍 3 🔁 1 💬 0 📌 0
NeurIPS Poster: Statistical Multicriteria Benchmarking via the GSD-Front (NeurIPS 2024)

If you're attending #NeurIPS2024, don't miss our spotlight poster on multicriteria benchmarking TODAY 11am in West Ballroom A-D

Talk: neurips.cc/virtual/2024...

Paper: openreview.net/pdf?id=jXxvS...

Where? West Ballroom A-D #6501

When? Thu 12 Dec 11 a.m. - 2 p.m. local time

#neurips #neurips24

12.12.2024 16:18 👍 4 🔁 0 💬 0 📌 0

#MachineLearning is all about learning parameters from data, right? Well, not quite…

🤸‍♂️ Actually, it sometimes works the other way around.

🤔 Curious how?

👉 Check out our poster on reciprocal learning: neurips.cc/virtual/2024...

@neuripsconf.bsky.social #NeurIPS2024 #NeurIPS

12.12.2024 15:09 👍 1 🔁 0 💬 0 📌 0
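One way to read "the other way around" in the post above: in settings like self-training, the fitted parameters select and label the data that the model is trained on next, so data depends on parameters too. A toy illustration, not the poster's formalism; the 1-D threshold classifier and the confidence cutoff are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two 1-D classes; only four labeled points, a large unlabeled pool.
X_lab = np.array([-2.1, -1.9, 2.0, 2.2])
y_lab = np.array([0, 0, 1, 1])
X_pool = np.concatenate([rng.normal(-2, 1, 200), rng.normal(2, 1, 200)])

for _ in range(5):
    # Learning parameters from data: fit a midpoint threshold from current labels.
    thr = (X_lab[y_lab == 0].mean() + X_lab[y_lab == 1].mean()) / 2
    # ...and the other way around: the parameter selects and labels new data.
    confident = np.abs(X_pool - thr) > 1.5
    X_new = X_pool[confident]
    y_new = (X_new > thr).astype(int)
    X_lab = np.concatenate([X_lab, X_new])
    y_lab = np.concatenate([y_lab, y_new])
    X_pool = X_pool[~confident]

print(len(X_lab), round(thr, 2))
```

Each pass through the loop changes the training set as a function of the current fit, which is exactly the data-parameter feedback that reciprocal learning studies.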