
Arik Reuter

@arikreuter

University of Cambridge and Max Planck Institute for Intelligent Systems. I'm interested in amortized inference/PFNs/in-context learning for challenging probabilistic and causal problems. https://arikreuter.github.io/

87
Followers
159
Following
16
Posts
22.11.2024
Joined

Latest posts by Arik Reuter @arikreuter


🎉 Announcing TabICLv2: State-of-the-art Table Foundation Model, fast and open source

A breakthrough for tabular ML: better prediction and faster runtime than alternatives, work by Jingang Qu, David HolzmΓΌller @dholzmueller.bsky.social , Marine Le Morvan, and myself πŸ‘‡

12.02.2026 13:26 πŸ‘ 51 πŸ” 11 πŸ’¬ 1 πŸ“Œ 2

We are excited to see so many diverse ideas around this concept being actively researched and believe that this paradigm could be the β€œnew wave” of causal ML. We look forward to many interesting discussions and collaborations in the future!

[7/7]

25.09.2025 09:25 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

as well as Vahid Balazadeh and Hamidreza Kamkari (β€œCausalPFN: Amortized Causal Effect Estimation via In-Context Learning”) on strong concurrent works applying PFNs to causal inference, also accepted at NeurIPS.

[6/7]

25.09.2025 09:25 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

We’d also like to congratulate Anish Dhir and Cristiana Diaconu (β€œEstimating Interventional Distributions with Uncertain Causal Graphs through Meta-Learning”),

[5/7]

25.09.2025 09:25 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Do-PFN can be used to answer interventional questions such as: β€œWhat is the effect of a certain medication?” We demonstrate through extensive experiments that Do-PFN is a highly effective method whose working principles could transform the whole field of causal machine learning.

[4/7]

25.09.2025 09:25 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Do-PFN is a Prior-Data Fitted Network (PFN) for causal effect estimation. Based on TabPFN, Do-PFN is trained on millions of synthetic datasets and learns to predict causal effects on real-world observational studies.

[3/7]

25.09.2025 09:25 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
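To make the intended usage pattern concrete: the post above describes a model that, given an observational dataset as context, reads off a causal effect in a single forward pass. The sketch below is a toy stand-in, not the authors' model: `ToyDoPFN` and its `fit`/`predict_effect` interface are hypothetical names, and the "forward pass" is closed-form covariate adjustment rather than a pretrained transformer.

```python
import numpy as np

rng = np.random.default_rng(0)

class ToyDoPFN:
    """Toy stand-in for a pretrained in-context causal model.

    The real Do-PFN is a transformer pretrained on synthetic SCM data;
    here the "forward pass" is just least-squares covariate adjustment,
    to illustrate the fit-on-context / read-off-effect interface.
    """
    def fit(self, X, t, y):
        # One pass over the observational context (covariates X,
        # treatment t, outcome y).
        Z = np.column_stack([t, X, np.ones_like(t)])
        self.coef_, *_ = np.linalg.lstsq(Z, y, rcond=None)
        return self

    def predict_effect(self):
        return self.coef_[0]  # coefficient on the treatment column

# Confounded observational data: x drives both treatment and outcome.
n = 2000
x = rng.normal(size=n)
t = 0.8 * x + rng.normal(size=n)
y = 2.0 * t + 1.5 * x + rng.normal(size=n)  # true effect of t is 2.0

effect = ToyDoPFN().fit(x[:, None], t, y).predict_effect()
```

The point of the interface, as in TabPFN, is that all the work happens at pretraining time; at deployment there is no per-dataset training loop.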

This is joint work with Siyuan Guo, Noah Hollmann, Frank Hutter, and Bernhard SchΓΆlkopf from the University of Freiburg, the MPI for Intelligent Systems, the University of Cambridge, ELLIS Institute TΓΌbingen and Prior Labs. We’d like to thank our amazing team for making this project possible.

[2/7]

25.09.2025 09:25 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Do-PFN: In-Context Learning for Causal Effect Estimation
Estimation of causal effects is critical to a range of scientific disciplines. Existing methods for this task either require interventional data, knowledge about the ground truth causal graph, or rely...

Jake @jakemrobertson.bsky.social and I are super excited to share that our paper β€œDo-PFN: In-Context Learning for Causal Effect Estimation” has been accepted at NeurIPS as a spotlight!

Check out our pre-print on arXiv and stay tuned for the updated version: arxiv.org/abs/2506.06039

[1/7]

25.09.2025 09:25 πŸ‘ 7 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

New preprint: SBI with foundation models!
Tired of training or tuning your inference network, or waiting for your simulations to finish? Our method NPE-PF can help: It provides training-free simulation-based inference, achieving competitive performance with orders of magnitude fewer simulations! ⚑️

23.07.2025 14:27 πŸ‘ 23 πŸ” 9 πŸ’¬ 1 πŸ“Œ 2
ICML Poster: Can Transformers Learn Full Bayesian Inference in Context? (ICML 2025)

Can transformers learn full Bayesian inference in context? πŸ€”

πŸ‘‰ Come find out and visit our poster E-1205 in the East Hall this morning at #ICML2025 presented by @arikreuter.bsky.social

icml.cc/virtual/2025...

16.07.2025 14:47 πŸ‘ 5 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0

Compute is increasing much faster than data. How can we improve classical supervised learning in the long term (the technology underlying most of GenAI)?

Our ICML position paper's answer: simply train on a bunch of artificial data (noise) and only do inference on real-world data! 1/n

08.07.2025 20:03 πŸ‘ 9 πŸ” 3 πŸ’¬ 1 πŸ“Œ 0
Do-PFN: In-Context Learning for Causal Effect Estimation
Estimation of causal effects is critical to a range of scientific disciplines. Existing methods for this task either require interventional data, knowledge about the ground truth causal graph, or rely...

This is joint work with Jake Robertson @jakemrobertson.bsky.social (shared) Siyuan Guo, Noah Hollmann, Frank Hutter, and Bernhard SchΓΆlkopf

Check out the paper at: arxiv.org/abs/2506.06039

[8/8]

10.06.2025 09:33 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

How does it relate to TabPFN?

Do-PFN is based on the same principles as TabPFN and thus directly inherits its strengths. While TabPFN is state-of-the-art for making predictions, Do-PFN excels at inferring causal effects. [7/8]

10.06.2025 09:33 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Why is it different?

Do-PFN is a radically new approach to causal inference, replacing standard assumptions of a ground-truth causal model (Pearl) or structural assumptions (Rubin) with a prior over SCMsβ€”our modeling assumptions lie in our synthetic data-generating process. [6/8]

10.06.2025 09:33 πŸ‘ 0 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0

How does it work?

Pre-trained on synthetic datasets drawn from structural causal models (SCMs), Do-PFN learns across millions of causal structures. For each causal structure, Do-PFN learns to predict the effect of causal interventions based on simulated interventions. [5/8]

10.06.2025 09:33 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
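The training signal described above can be sketched in a few lines: sample datasets from a toy SCM, and obtain ground-truth effect targets by simulating the intervention (cutting the confounder's edge into the treatment). This is a minimal illustration under assumed linear mechanisms, not the paper's actual prior.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_scm_dataset(n=500, do_t=None):
    """Sample from a toy linear SCM: Z -> T, Z -> Y, T -> Y.

    Z is a confounder. If do_t is given, the edge Z -> T is cut
    (a simulated intervention), so T no longer depends on Z.
    """
    z = rng.normal(size=n)                               # confounder
    if do_t is None:
        t = (z + rng.normal(size=n) > 0).astype(float)   # observational T
    else:
        t = np.full(n, float(do_t))                      # do(T = do_t)
    y = 2.0 * t + 1.5 * z + rng.normal(size=n)           # outcome
    return z, t, y

# Observational data confounds the T-Y relation ...
_, t_obs, y_obs = sample_scm_dataset()
naive_effect = y_obs[t_obs == 1].mean() - y_obs[t_obs == 0].mean()

# ... while simulated interventions give the ground-truth effect (2.0),
# which is the target a PFN-style model is trained to predict in context.
_, _, y1 = sample_scm_dataset(do_t=1)
_, _, y0 = sample_scm_dataset(do_t=0)
true_effect = y1.mean() - y0.mean()
```

Because the SCM is known at pretraining time, interventional ground truth is availableβ€”something one never has for real observational data.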

What is our approach?

Do-PFN is a prior-data-fitted network (PFN) for causal effect estimation. Based on TabPFN, Do-PFN relies solely on observational data and does not require exact knowledge about how all variables related to a causal problem interact. [4/8]

10.06.2025 09:33 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

The challenge:

However, due to confounding factors and small sample sizes, causal information is difficult to extract from observational data without strict additional assumptions such as a known, fixed causal graph or the unconfoundedness assumption. [3/8]

10.06.2025 09:33 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

The premise:

Causal questions, such as β€œWhat will be the effect of a medication?” are typically addressed in carefully conducted experiments. While controlled experiments can be expensive or even impossible, passively observed data is often readily available. [2/8]

10.06.2025 09:33 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

We present a new approach to causal inference. Pre-trained on synthetic data, Do-PFN opens the door to a new domain: PFNs for causal inferenceβ€”we are excited to announce our new paper β€œDo-PFN: In-Context Learning for Causal Effect Estimation” on arXiv! πŸ”¨πŸ”

A thread: 🧡[1/8]

10.06.2025 09:33 πŸ‘ 6 πŸ” 3 πŸ’¬ 1 πŸ“Œ 0

It seems that we have 3 accepted papers at ICML 2025 πŸ”₯

01.05.2025 12:32 πŸ‘ 14 πŸ” 2 πŸ’¬ 5 πŸ“Œ 0

Arriving in Singapore this afternoon πŸ›¬ I'll attend #ICLR2025, #AABI2025, and #AISTATS2025 together with many of my students and collaborators to present our 2 orals, 5 posters, and 14 workshop contributions πŸš€

Feel free to drop by!

23.04.2025 03:14 πŸ‘ 11 πŸ” 5 πŸ’¬ 1 πŸ“Œ 0

BREAKING: People are being suspended on X in Turkey for posting videos of these protests against Erdoğan’s corrupt and repressive regime.

Keep sharing everywhere.

23.03.2025 12:20 πŸ‘ 11159 πŸ” 6601 πŸ’¬ 321 πŸ“Œ 344

Great paper!

07.02.2025 10:00 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0