
Daolang Huang

@huangdaolang

PhD student at Aalto University 🇫🇮 Probabilistic ML, amortized inference. See more at huangdaolang.com

301 Followers · 259 Following · 17 Posts · Joined 27.11.2024

Latest posts by Daolang Huang @huangdaolang

8/n I am at #NeurIPS2025 this week. Our poster session is Wed Dec 3, 11:00 - 14:00. Come say hi if you'd like to chat about amortized inference!

03.12.2025 05:32 👍 0 🔁 0 💬 0 📌 0

7/n Joint work with @wenxinyi.bsky.social @ayushbharti.bsky.social @samikaski.bsky.social @lacerbi.bsky.social and supported by ELLIS Institute Finland, FCAI, Aalto University, University of Helsinki. Huge thanks for the great environment and support.

03.12.2025 05:32 👍 2 🔁 0 💬 1 📌 0

6/n Experiments on active learning benchmarks, classical BED tasks (location-finding, CES), and a psychometric model show ALINE achieves fast, accurate inference together with efficient and flexible data acquisition.

03.12.2025 05:32 👍 1 🔁 0 💬 1 📌 0

5/n We train ALINE with reinforcement learning using a self-estimated information-gain reward, so the model learns to ask the questions that most improve its own beliefs.

03.12.2025 05:32 👍 0 🔁 0 💬 1 📌 0
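The reward idea in post 5/n above can be sketched in a few lines. This is a hedged toy illustration, not ALINE's actual estimator: the policy is rewarded by how much the amortized posterior's own log-density at the (training-time) true parameter improves after a new observation. Everything here, from the 1-D Gaussian posterior to the function names, is invented for illustration.

```python
import math

def gaussian_logpdf(x, mean, var):
    """Log-density of a 1-D Gaussian; a stand-in for the amortized posterior."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def info_gain_reward(theta_true, post_before, post_after):
    """Self-estimated information gain: the increase in the model's own
    log-density at the true parameter after observing the new data point.
    post_before / post_after are (mean, variance) of the estimated posterior."""
    return (gaussian_logpdf(theta_true, *post_after)
            - gaussian_logpdf(theta_true, *post_before))

# The posterior sharpens around the truth (variance 1.0 -> 0.25),
# so the reward is positive.
r = info_gain_reward(0.0, (0.0, 1.0), (0.0, 0.25))
```

During training, a per-step reward like this would feed a standard policy-gradient update; a query that leaves the posterior unchanged earns zero reward.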

4/n ALINE is built on a Transformer neural process with two specialized heads: an inference head that amortizes the posterior & predictive, and a policy head that proposes the next query point given the current dataset.

03.12.2025 05:32 👍 1 🔁 0 💬 1 📌 0
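The two-head layout described in post 4/n can be sketched as a shared trunk feeding an inference head and a policy head. This is a minimal NumPy stand-in, not the ALINE implementation; all sizes, weights, and names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_H = 2, 16  # (x, y) pairs; hidden width (illustrative sizes)

# Shared trunk (a stand-in for the Transformer neural process encoder).
W_enc = rng.normal(size=(D_IN, D_H)) / np.sqrt(D_IN)
# Inference head: posterior mean and log-variance over a scalar parameter.
W_inf = rng.normal(size=(D_H, 2)) / np.sqrt(D_H)
# Policy head: scores a candidate query point given the context summary.
W_pol = rng.normal(size=(D_H + 1, 1)) / np.sqrt(D_H + 1)

def encode(context):
    """Permutation-invariant summary of the observed (x, y) pairs."""
    return np.tanh(context @ W_enc).mean(axis=0)

def inference_head(context):
    mean, log_var = encode(context) @ W_inf
    return mean, np.exp(log_var)

def policy_head(context, candidates):
    """Return the candidate x with the highest acquisition score."""
    h = encode(context)
    scores = np.array([np.concatenate([h, [x]]) @ W_pol
                       for x in candidates]).ravel()
    return candidates[int(np.argmax(scores))]

context = rng.normal(size=(5, D_IN))       # five observed (x, y) pairs
mean, var = inference_head(context)        # amortized posterior estimate
x_next = policy_head(context, [-1.0, 0.0, 1.0])  # next query proposal
```

The point of the shared trunk is that a single forward pass yields both the posterior estimate and the next query, which is what makes the acquisition "instant" at deployment time.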

3/n In this paper, we propose ALINE, which can
• instantly select the next informative data point to query
• perform amortized Bayesian inference over parameters & predictions
• switch targets at runtime without retraining.

03.12.2025 05:32 👍 1 🔁 0 💬 1 📌 0

2/n Many tasks, from scientific discovery to medical diagnosis, require us to rapidly decide which data to acquire next to maximize our knowledge, while also rapidly learning about the unknown factors from the collected data. Existing amortized methods usually handle only one of these.

03.12.2025 05:32 👍 0 🔁 0 💬 1 📌 0

We jointly amortize Bayesian inference and active data acquisition within a single architecture.

Excited to share our #NeurIPS2025 ✨Spotlight✨ paper "ALINE: Joint Amortization for Bayesian Inference and Active Data Acquisition"!

www.huangdaolang.com/aline/

03.12.2025 05:32 👍 10 🔁 5 💬 1 📌 1

@lacerbi.bsky.social wrote a very nice summary post of our paper here if anyone missed it:

bsky.app/profile/lace...

I can give some more behind-the-scenes information. 🧵

25.03.2025 12:31 👍 7 🔁 2 💬 1 📌 0

1/ Introducing ACE (Amortized Conditioning Engine)! Our new AISTATS 2025 paper presents a transformer framework that unifies tasks from image completion to BayesOpt & simulator-based inference under *one* probabilistic conditioning approach. It's Bayes all the way down!

06.03.2025 10:32 👍 35 🔁 14 💬 1 📌 3

Interested in amortization + experimental design + decision making? #NeurIPS2024
Come by our poster, starting soon (11-2, East Hall)!

NeurIPS link: neurips.cc/virtual/2024...
Paper: openreview.net/forum?id=zBG...

with @huangdaolang.bsky.social Yujia Guo @samikaski.bsky.social

13.12.2024 17:53 👍 15 🔁 1 💬 0 📌 0
[Link preview: 2025 winter landing page — FCAI]

I am at #NeurIPS2024 and looking for postdocs. Feel free to reach out if you want to discuss!

We also have other positions: fcai.fi/winter-calls.... My slightly outdated home page is kaski-lab.com

09.12.2024 21:18 👍 17 🔁 9 💬 0 📌 0

1/ Hi all, I am at #NeurIPS2024 and I will be hiring a postdoc in probabilistic machine learning starting asap.

Research interests: amortized, approximate & simulator-based inference, Bayesian optimization, and AI4science.

Get in touch for a chat or come to our posters today 11AM or Friday 11AM!

11.12.2024 16:26 👍 25 🔁 11 💬 1 📌 1

Great list! Can I join?

06.12.2024 14:07 👍 0 🔁 0 💬 0 📌 0

8/ Join us at our poster session at #NeurIPS2024. Unfortunately this year I can't attend in person, but @lacerbi.bsky.social will present our work. We are excited to discuss and explore future directions in Bayesian experimental design and amortization!

05.12.2024 12:18 👍 1 🔁 0 💬 0 📌 0

7/ Experiments show TNDP significantly outperforms traditional methods across various tasks, including targeted active learning, hyperparameter optimization, and retrosynthesis planning.

05.12.2024 12:18 👍 0 🔁 0 💬 1 📌 0

6/ Our Transformer Neural Decision Process (TNDP) unifies experimental design and decision-making in a single framework, allowing instant design proposals while maintaining high decision quality.

05.12.2024 12:18 👍 0 🔁 0 💬 1 📌 0

5/ We introduce Decision Utility Gain (DUG) to guide experimental design with a direct focus on optimizing decision-making tasks, moving beyond traditional information-theoretic objectives. DUG measures the improvement in the maximum expected utility from observing a new experimental design.

05.12.2024 12:18 👍 0 🔁 0 💬 1 📌 0
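In symbols (hedged, using generic notation rather than the paper's exact definitions): with a utility $u(\theta, a)$ over parameters $\theta$ and decisions $a$, current data $D$, and a candidate design $\xi$ with predicted outcome $y$, the gain described in post 5/ reads

```latex
\mathrm{DUG}(\xi) \;=\;
\mathbb{E}_{p(y \mid \xi, D)}\!\Big[\max_{a}\,
\mathbb{E}_{p(\theta \mid D \cup \{(\xi, y)\})}\big[u(\theta, a)\big]\Big]
\;-\; \max_{a}\, \mathbb{E}_{p(\theta \mid D)}\big[u(\theta, a)\big]
```

i.e., how much the best achievable expected utility improves, on average, after running the candidate experiment.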

4/ In our work, we present a new amortized BED framework that optimizes experiments directly for downstream decision-making.

05.12.2024 12:18 👍 0 🔁 0 💬 1 📌 0

3/ But what if our goal goes beyond parameter inference? In many real-world tasks like medical diagnosis, we care more about making the right decisions than learning model parameters.

05.12.2024 12:18 👍 0 🔁 0 💬 1 📌 0

2/ Bayesian Experimental Design (BED) is a powerful framework to optimize experiments aimed at reducing uncertainty about unknown system parameters. Recent amortized BED methods use pre-trained neural networks for instant design proposals.

05.12.2024 12:18 👍 0 🔁 0 💬 1 📌 0
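For reference, the uncertainty-reduction objective that amortized BED methods like those in post 2/ typically target is the expected information gain (a standard formulation, hedged here rather than quoted from the paper):

```latex
\mathrm{EIG}(\xi) \;=\;
\mathbb{E}_{p(y \mid \xi)}\big[\, H[p(\theta)] - H[p(\theta \mid y, \xi)] \,\big]
```

Amortized methods train a network to map the observed history directly to a high-EIG design $\xi$, so no nested estimation is needed at deployment time.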

Optimizing decision utility in Bayesian experimental design is key to improving downstream decision-making.

Excited to share our #NeurIPS2024 paper on Amortized Decision-Aware Bayesian Experimental Design: arxiv.org/abs/2411.02064

@lacerbi.bsky.social @samikaski.bsky.social

Details below.

05.12.2024 12:18 👍 41 🔁 12 💬 1 📌 2

1/ Excuse me, can I interest you in eliciting your beliefs as flexible probability distributions? No worries, we only need pairwise comparisons or rankings, no personal details.

Led by **Petrus Mikkola** and joint with **Arto Klami**, to be presented soon at @neuripsconf.bsky.social #NeurIPS2024

04.12.2024 09:06 👍 69 🔁 14 💬 2 📌 0