
Tommy Rochussen

@rochussen

Doctoral researcher at Helmholtz AI supervised by Vincent Fortuin. University of Cambridge engineering graduate. Probabilistic machine learning. sheev13.github.io

481
Followers
54
Following
18
Posts
20.11.2024
Joined

Latest posts by Tommy Rochussen @rochussen

Check out the paper πŸ‘‰ arxiv.org/pdf/2602.087...

Looking forward to presenting this work in Rio, and many thanks to @vincefort.bsky.social for his supervision!

11.02.2026 08:59 πŸ‘ 3 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0

Are humble Gaussian priors enough for BNNs to model highly complex stochastic processes? Do well-specified BNN priors remove the need for more costly approximate inference algorithms?

We provide answers in the paper!

11.02.2026 08:59 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

2. It turns BNNs into flexible generative models (i.e., it enables sampling from learned priors).

3. It enables capabilities that have been difficult for neural processes so far, including:
β€’ Within-task minibatching
β€’ Meta-learning in extremely data-scarce regimes

11.02.2026 08:59 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Why this matters:

1. It lets us study BNNs under well-specified, data-driven priors rather than the usual isotropic guff.

11.02.2026 08:59 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

3. The resulting model can be viewed as a neural process whose latent variable is the weights of a BNN, with the network itself acting as the decoder.

11.02.2026 08:59 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
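A minimal sketch of the "BNN as neural process decoder" idea from the post above, in PyTorch. All names, layer sizes, and architecture choices here are illustrative assumptions, not details from the paper: the latent variable is a flat weight vector, and the "decoder" is simply the network that consumes it.

```python
import math
import torch
import torch.nn as nn

class BNNDecoder(nn.Module):
    """One-hidden-layer MLP whose weights arrive as a flat latent vector w."""
    def __init__(self, x_dim=1, h_dim=16, y_dim=1):
        super().__init__()
        # Shapes of W1, b1, W2, b2 for the hypothetical decoder network.
        self.shapes = [(h_dim, x_dim), (h_dim,), (y_dim, h_dim), (y_dim,)]
        self.n_weights = sum(math.prod(s) for s in self.shapes)

    def forward(self, w, x):
        # Unpack the flat weight sample w into per-layer parameters.
        params, i = [], 0
        for s in self.shapes:
            n = math.prod(s)
            params.append(w[i:i + n].view(*s))
            i += n
        W1, b1, W2, b2 = params
        h = torch.tanh(x @ W1.T + b1)
        return h @ W2.T + b2  # predictions for each input in x
```

Each posterior sample of w yields one function; averaging predictions over many samples gives the neural-process-style predictive distribution.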

2. This is achieved via per-dataset amortised variational inference, allowing the model to infer dataset-specific posteriors while learning a shared, well-specified prior.

11.02.2026 08:59 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
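The per-dataset amortised variational inference described above can be sketched as follows. This is a hypothetical toy version under assumed choices (a deep-sets encoder, Gaussian q and p, the sizes shown), not the paper's implementation: an encoder maps each dataset to the parameters of q(w | D), while a single learnable Gaussian prior p(w) is shared across all datasets.

```python
import torch
import torch.nn as nn

class AmortisedPosterior(nn.Module):
    def __init__(self, xy_dim=2, n_weights=65, h=32):
        super().__init__()
        # Deep-sets-style encoder: permutation-invariant over (x, y) pairs.
        self.point_net = nn.Sequential(
            nn.Linear(xy_dim, h), nn.ReLU(), nn.Linear(h, h))
        self.head = nn.Linear(h, 2 * n_weights)  # -> mean, log-std of q(w | D)
        # Shared learnable prior p(w), meta-learned across datasets.
        self.prior_mean = nn.Parameter(torch.zeros(n_weights))
        self.prior_log_std = nn.Parameter(torch.zeros(n_weights))

    def forward(self, x, y):
        r = self.point_net(torch.cat([x, y], dim=-1)).mean(dim=0)  # aggregate
        mean, log_std = self.head(r).chunk(2)
        q = torch.distributions.Normal(mean, log_std.exp())
        p = torch.distributions.Normal(self.prior_mean, self.prior_log_std.exp())
        # KL(q || p) is the regulariser in the per-dataset ELBO; training on
        # many datasets pulls p(w) towards a well-specified shared prior.
        kl = torch.distributions.kl_divergence(q, p).sum()
        return q.rsample(), kl
```

Training would maximise the ELBO per dataset (expected log-likelihood under the sampled weights, minus this KL), with gradients flowing into both the encoder and the shared prior.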

What we do:

1. We propose a way to learn a prior over neural network weights from data, using a collection of related datasets.

11.02.2026 08:59 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Bayesian neural network (BNN) practitioners must specify priors over network weights, but how to do so is often unclear, and choices end up ad hoc. In this paper, we bridge Bayesian deep learning and probabilistic meta-learning to offer a concrete answer.

11.02.2026 08:59 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

The work tackles a fairly fundamental question in Bayesian deep learning:

"how can we be Bayesian if we don’t have any meaningful prior beliefs in the first place?"

11.02.2026 08:59 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

I’m pleased to share that our latest paper, β€œAmortising Inference and Meta-Learning Priors in Neural Networks”, has been accepted to ICLR 2026 in Rio!

11.02.2026 08:59 πŸ‘ 3 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0

Are bitterns as fiendishly difficult to spot in Singapore as they are in Europe?

24.01.2026 12:14 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Arxiv link: arxiv.org/pdf/2504.01650

It’s nice to be able to get the ball rolling on my PhD with this paper, and a nice achievement to have published my first non-workshop paper. A big thanks to @vincefort.bsky.social for his supervision on this project!

17.04.2025 09:19 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

1.) you want/need GP levels of interpretability
2.) you don’t have that many training tasks, so need SOTA data efficiency (at the meta-level)
3.) you have accurate domain knowledge (in GP-prior form)
4.) each task has too many observations for exact GP inference

17.04.2025 09:19 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

If you need probabilistic predictions across multiple related tasks/datasets, you should use this model if any combination of the following hold:

17.04.2025 09:19 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

We introduce the ability to meta-learn sparse variational Gaussian process inference, resulting in a new type of neural process that is amenable to prior elicitation.

17.04.2025 09:19 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
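To make the "meta-learned sparse variational GP inference" idea above concrete, here is a minimal sketch of the sparse-GP predictive mean that such a model would decode through. The inducing inputs Z and variational mean m would be produced by a meta-learned encoder in the actual model; here they are plain tensors, and the RBF kernel and all sizes are illustrative assumptions.

```python
import torch

def rbf(a, b, lengthscale=1.0):
    """Squared-exponential kernel between two sets of points."""
    d2 = (a[:, None, :] - b[None, :, :]).pow(2).sum(-1)
    return torch.exp(-0.5 * d2 / lengthscale**2)

def sparse_gp_mean(x_test, Z, m, jitter=1e-5):
    """Sparse variational GP predictive mean: K_xz K_zz^{-1} m.

    Z: inducing inputs, m: variational mean over inducing outputs
    (both assumed to come from a meta-learned encoder).
    """
    Kzz = rbf(Z, Z) + jitter * torch.eye(len(Z))
    Kxz = rbf(x_test, Z)
    return Kxz @ torch.linalg.solve(Kzz, m)
```

Because the predictive is an explicit GP posterior, domain knowledge can be injected through the kernel and inducing-point prior, which is what makes this family of neural processes amenable to prior elicitation.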

Very pleased to share that our new paper β€œSparse Gaussian Neural Processes” has been accepted under the proceedings track at AABI 2025! πŸŽ‰ (1/n)

17.04.2025 09:19 πŸ‘ 7 πŸ” 0 πŸ’¬ 2 πŸ“Œ 1

I've seen things you people wouldn't believe.

Attacks from reviewers on fire off the shoulders of #OpenReview.

I watched logic fallacies glitter in the dark near @iclr-conf.bsky.social

All those moments will be lost in time, like tears in the next resubmission.

Time to die.

#ML #AI #PhDlife

26.11.2024 08:52 πŸ‘ 16 πŸ” 3 πŸ’¬ 2 πŸ“Œ 2

πŸ™‹β€β™‚οΈ

20.11.2024 12:51 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Thanks for putting this together - keen to be added!

20.11.2024 12:46 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0