This is joint work with Subhadip Mukherjee, Lindon Roberts, and Matthias J. Ehrhardt
#MachineLearning #Optimisation #Imaging #Bilevel_Learning
Our numerical experiments show:
Significant speed-ups and better performance in image denoising and deblurring compared to the Method of Adaptive Inexact Descent (MAID).
Read the full preprint here:
arxiv.org/abs/2412.12049
Why does this matter?
Insights into the behaviour of inexact stochastic gradients in bilevel problems, with practical assumptions and convergence results.
✅ Faster performance vs. adaptive deterministic bilevel methods.
✅ Better generalisation for imaging tasks.
In this work, we make a theoretical contribution by connecting stochastic approximate hypergradients in bilevel optimisation to the theory of stochastic nonconvex optimisation.
Under mild assumptions, we prove these hypergradients satisfy the Biased ABC assumption for SGD.
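For readers unfamiliar with it, the ABC assumption (in the sense of Khaled and Richtárik's analysis of SGD) bounds the second moment of a stochastic gradient estimator g(x). A sketch in illustrative notation (not necessarily the paper's exact statement) is:

```latex
% ABC-type second-moment bound for a stochastic gradient estimator g(x)
% of a smooth nonconvex objective f with infimum f^{\inf}:
\mathbb{E}\left[\|g(x)\|^2\right]
  \le 2A\left(f(x) - f^{\inf}\right) + B\,\|\nabla f(x)\|^2 + C,
\qquad A, B, C \ge 0.
% A biased variant additionally allows
%   \mathbb{E}[g(x)] = \nabla f(x) + b(x)
% with a controlled bias term b(x).
```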
Are you interested in bilevel learning for tasks like learning data-adaptive regularisers (e.g., Field of Experts) or optimising forward operators (e.g., undersampled MRI) in variational regularisation on large datasets?
Check out our latest preprint! 🧵
arxiv.org/abs/2412.12049
This is joint work with Lea Bogensperger, Matthias J. Ehrhardt, Thomas Pock, and Hok Shing Wong.
Below, see how the learned ICNN regulariser performs on a sparse-angle computed tomography problem. Our bilevel framework shows a significant improvement over the adversarial training-based methods previously introduced in the literature.
✨ Key Highlights:
• A-posteriori error bounds for inexact hypergradients computed by primal-dual style differentiation.
• Adaptive, convergent bilevel framework with primal-dual style differentiation.
• Application to learning data-adaptive regularisers (e.g., ICNNs)!
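For intuition on what a hypergradient is, here is a minimal toy sketch: the lower-level problem is a smooth Tikhonov-regularised least-squares fit, and the hypergradient of the upper-level loss with respect to the regularisation weight is computed via the implicit function theorem, then checked against finite differences. This is an illustrative example under simplified assumptions, not the paper's primal-dual style differentiation, and all names here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))   # forward operator (toy)
b = rng.standard_normal(8)        # measured data
y = rng.standard_normal(5)        # upper-level target

def lower_solution(theta):
    # x*(theta) = argmin_x 0.5*||A x - b||^2 + 0.5*theta*||x||^2
    H = A.T @ A + theta * np.eye(5)
    return np.linalg.solve(H, A.T @ b)

def upper_loss(theta):
    # Upper-level objective: distance of the reconstruction to the target
    x = lower_solution(theta)
    return 0.5 * np.sum((x - y) ** 2)

def hypergradient(theta):
    x = lower_solution(theta)
    H = A.T @ A + theta * np.eye(5)
    # Implicit function theorem: differentiating H x = A^T b in theta
    # gives x + H dx/dtheta = 0, i.e. dx/dtheta = -H^{-1} x.
    dx = np.linalg.solve(H, -x)
    return (x - y) @ dx

theta0 = 0.5
eps = 1e-6
fd = (upper_loss(theta0 + eps) - upper_loss(theta0 - eps)) / (2 * eps)
print(hypergradient(theta0), fd)  # the two values should agree closely
```

In practice the lower-level solve is only approximate, which makes the hypergradient inexact; the error bounds in the preprint quantify exactly this kind of inexactness for primal-dual iterations.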
Are Primal-Dual methods like PDHG your go-to for imaging tasks?
Interested in using them within a bilevel framework to learn data-adaptive regularisers like input-convex neural networks (ICNNs)?
Check out our latest preprint:
arxiv.org/abs/2412.06436