Announcing TabICLv2: a state-of-the-art table foundation model, fast and open source
A breakthrough for tabular ML: better predictions and faster runtime than alternatives. Work by Jingang Qu, David Holzmüller @dholzmueller.bsky.social, Marine Le Morvan, and myself.
12.02.2026 13:26
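For readers who want to try it: a minimal usage sketch, assuming the v2 release keeps the scikit-learn-style interface of the original tabicl package (the class name TabICLClassifier and its defaults are assumptions, not confirmed details of v2).

```python
# Minimal sketch, assuming a scikit-learn-style interface as in the original
# tabicl package; the class name and defaults for v2 are assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from tabicl import TabICLClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = TabICLClassifier()          # pre-trained model; no task-specific training
clf.fit(X_train, y_train)         # "fit" stores the context set for in-context learning
print(clf.score(X_test, y_test))  # predictions come from a single forward pass
```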
We are excited to see so many diverse ideas around this concept being actively researched and believe that this paradigm could be the "new wave" of causal ML. We look forward to many interesting discussions and collaborations in the future!
[7/7]
25.09.2025 09:25
as well as Vahid Balazadeh and Hamidreza Kamkari ("CausalPFN: Amortized Causal Effect Estimation via In-Context Learning") on strong concurrent works applying PFNs to causal inference, also accepted to NeurIPS.
[6/7]
25.09.2025 09:25
We'd also like to congratulate Anish Dhir and Cristiana Diaconu ("Estimating Interventional Distributions with Uncertain Causal Graphs through Meta-Learning"),
[5/7]
25.09.2025 09:25
Do-PFN can be used to answer interventional questions such as: "What is the effect of a certain medication?" We demonstrate through extensive experiments that Do-PFN is a highly effective method whose working principles could transform the whole field of causal machine learning.
[4/7]
25.09.2025 09:25
Do-PFN is a prior-data-fitted network (PFN) for causal effect estimation. Based on TabPFN, Do-PFN is trained on millions of synthetic datasets and learns to predict causal effects from real-world observational data.
[3/7]
25.09.2025 09:25
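To make the pre-training idea concrete, here is a toy sketch (our illustration, not the authors' code or architecture): each optimization step draws a fresh synthetic dataset with a known ground-truth effect, and the network is trained to recover that effect from the data alone.

```python
# Toy illustration of PFN-style pre-training for effect estimation; the real
# model is a transformer trained on a far richer prior over synthetic SCMs.
import torch
import torch.nn as nn

def sample_synthetic_task(n=128):
    """Hypothetical prior: a hidden confounder u drives both treatment and outcome."""
    u = torch.randn(n)
    t = (torch.randn(n) + u > 0).float()   # confounded treatment assignment
    effect = torch.randn(())               # ground-truth effect, known by construction
    y = effect * t + u + 0.1 * torch.randn(n)
    return torch.stack([t, y], dim=1), effect

# Crude stand-in for the transformer: encode rows, mean-pool, regress the effect.
model = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(1000):                   # every step sees a brand-new synthetic dataset
    data, effect = sample_synthetic_task()
    pred = model(data).mean()              # mean-pooling as a minimal set encoder
    loss = (pred - effect) ** 2
    opt.zero_grad()
    loss.backward()
    opt.step()
```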
This is joint work with Siyuan Guo, Noah Hollmann, Frank Hutter, and Bernhard Schölkopf from the University of Freiburg, the MPI for Intelligent Systems, the University of Cambridge, the ELLIS Institute Tübingen, and Prior Labs. We'd like to thank our amazing team for making this project possible.
[2/7]
25.09.2025 09:25
Do-PFN: In-Context Learning for Causal Effect Estimation
Estimation of causal effects is critical to a range of scientific disciplines. Existing methods for this task either require interventional data, knowledge about the ground truth causal graph, or rely...
Jake @jakemrobertson.bsky.social and I are super excited to share that our paper "Do-PFN: In-Context Learning for Causal Effect Estimation" has been accepted at NeurIPS as a spotlight!
Check out our pre-print on arXiv and stay tuned for the updated version: arxiv.org/abs/2506.06039
[1/7]
25.09.2025 09:25
New preprint: SBI with foundation models!
Tired of training or tuning your inference network, or waiting for your simulations to finish? Our method NPE-PF can help: it provides training-free simulation-based inference, achieving competitive performance with orders of magnitude fewer simulations!
23.07.2025 14:27
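The core trick, as we understand it, can be sketched in a few lines: recast simulation-based inference as tabular prediction and let a pre-trained tabular foundation model do the work in context. A conceptual sketch only; NPE-PF's actual interface and its posterior sampling are more involved.

```python
# Conceptual sketch of training-free SBI via a tabular foundation model;
# not NPE-PF's actual API. Assumes the tabpfn package is installed.
import numpy as np
from tabpfn import TabPFNRegressor

rng = np.random.default_rng(0)
theta = rng.normal(0.0, 1.0, size=500)       # parameters drawn from the prior
x = theta + rng.normal(0.0, 0.5, size=500)   # simulator output for each parameter

model = TabPFNRegressor()
model.fit(x.reshape(-1, 1), theta)           # context: simulated (x, theta) pairs
x_obs = np.array([[1.2]])                    # the real observation
print(model.predict(x_obs))                  # amortized point estimate of theta
```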
ICML Poster: Can Transformers Learn Full Bayesian Inference in Context? (ICML 2025)
Can transformers learn full Bayesian inference in context?
Come find out and visit our poster E-1205 in the East Hall this morning at #ICML2025, presented by @arikreuter.bsky.social
icml.cc/virtual/2025...
16.07.2025 14:47
Compute is increasing much faster than data. How can we improve classical supervised learning (the underlying tech of most of GenAI) in the long term?
Our ICML position paper's answer: simply train on a bunch of artificial data (noise) and only do inference on real-world data! 1/n
08.07.2025 20:03
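A toy illustration of the position (our sketch, not the paper's recipe): artificial supervised tasks can be generated endlessly from noise, so pre-training compute is effectively unbounded even when real data is scarce, and real-world data is only touched at inference time.

```python
# Endless stream of artificial supervised tasks generated from pure noise;
# a model pre-trained on such tasks would see real data only in context.
import numpy as np

rng = np.random.default_rng(0)

def sample_noise_task(n=256, d=8):
    """One synthetic task: random inputs, labels from a random linear rule."""
    X = rng.normal(size=(n, d))
    w = rng.normal(size=d)              # random ground-truth labeling function
    y = (X @ w > 0).astype(int)
    return X, y

for _ in range(3):                      # in principle, this loop never has to end
    X, y = sample_noise_task()
    print(X.shape, y.mean())
```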
How does it relate to TabPFN?
Do-PFN is based on the same principles as TabPFN and thus directly inherits its strengths. While TabPFN is state-of-the-art for making predictions, Do-PFN excels at inferring causal effects. [7/8]
10.06.2025 09:33
Why is it different?
Do-PFN is a radically new approach to causal inference, replacing the standard assumptions of a ground-truth causal model (Pearl) or structural assumptions (Rubin) with a prior over SCMs: our modeling assumptions lie in our synthetic data-generating process. [6/8]
10.06.2025 09:33
How does it work?
Pre-trained on synthetic datasets drawn from structural causal models (SCMs), Do-PFN learns across millions of causal structures. For each structure, it learns to predict the effect of interventions from simulated interventional outcomes. [5/8]
10.06.2025 09:33
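As an illustration of the data-generating side (a minimal sketch under assumptions of our own, not the actual prior): sample a small SCM, record observational data, and compute the interventional training target by simulating the do-operation.

```python
# Minimal SCM sketch: observational data plus a simulated intervention that
# yields the training target; the actual prior over SCMs is far richer.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

u = rng.normal(size=n)                          # hidden confounder
t = (rng.normal(size=n) + u > 0).astype(float)  # treatment depends on u
y = 2.0 * t + u + 0.1 * rng.normal(size=n)      # outcome depends on t and u

# Simulated interventions do(t=1) and do(t=0): cut the arrow from u into t.
y_do1 = 2.0 * 1.0 + u + 0.1 * rng.normal(size=n)
y_do0 = 2.0 * 0.0 + u + 0.1 * rng.normal(size=n)
ate = (y_do1 - y_do0).mean()                    # ground-truth effect, ~2.0
print(ate)
```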
What is our approach?
Do-PFN is a prior-data-fitted network (PFN) for causal effect estimation. Based on TabPFN, Do-PFN relies solely on observational data and does not require exact knowledge of how all the variables in a causal problem interact. [4/8]
10.06.2025 09:33
The challenge:
However, due to confounding factors and small sample sizes, causal information is difficult to extract from observational data without strict additional assumptions, such as a known, fixed causal graph or unconfoundedness. [3/8]
10.06.2025 09:33
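A quick numeric illustration of the problem (a toy example of ours, not from the paper): with a hidden confounder, the naive observational contrast is biased away from the true effect.

```python
# Confounding in a few lines: u raises both treatment probability and outcome,
# so the naive group difference overstates the true effect of 1.0.
import numpy as np

rng = np.random.default_rng(1)
u = rng.normal(size=100_000)                    # unobserved confounder
t = (rng.normal(size=u.size) + u > 0).astype(float)
y = 1.0 * t + u + 0.1 * rng.normal(size=u.size)

naive = y[t == 1].mean() - y[t == 0].mean()     # ~2.1, badly biased
print(naive)                                    # true causal effect is 1.0
```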
The premise:
Causal questions, such as "What will be the effect of a medication?", are typically addressed in carefully conducted experiments. While controlled experiments can be expensive or even impossible, passively observed data is often readily available. [2/8]
10.06.2025 09:33
We present a new approach to causal inference. Pre-trained on synthetic data, Do-PFN opens the door to a new domain: PFNs for causal inference. We are excited to announce our new paper "Do-PFN: In-Context Learning for Causal Effect Estimation" on arXiv!
A thread: [1/8]
10.06.2025 09:33
It seems that we have 3 accepted papers at ICML 2025!
01.05.2025 12:32
Arriving in Singapore this afternoon. I'll attend #ICLR2025, #AABI2025, and #AISTATS2025 together with many of my students and collaborators to present our 2 orals, 5 posters, and 14 workshop contributions.
Feel free to drop by!
23.04.2025 03:14
BREAKING: People are being suspended on X in Turkey for posting videos of these protests against Erdoğan's corrupt and repressive regime.
Keep sharing everywhere.
23.03.2025 12:20
Great paper!
07.02.2025 10:00