
Sadegh Salehi

@sadeghsalehi

Researching optimisation, machine learning, inverse problems, and computer vision

16 Followers · 49 Following · 9 Posts · Joined 14.12.2024

Latest posts by Sadegh Salehi @sadeghsalehi

This is joint work with Subhadip Mukherjee, Lindon Roberts, and Matthias J. Ehrhardt.

#MachineLearning #Optimisation #Imaging #Bilevel_Learning

17.12.2024 16:54 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Preview
Bilevel Learning with Inexact Stochastic Gradients Bilevel learning has gained prominence in machine learning, inverse problems, and imaging applications, including hyperparameter optimization, learning data-adaptive regularizers, and optimizing forwa...

Our numerical experiments show:

πŸ“ˆ Significant speed-ups and better performance in image denoising and deblurring compared to the Method of Adaptive Inexact Descent (MAID).

Read the full preprint here:

arxiv.org/abs/2412.12049

17.12.2024 16:54 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Why does this matter?

πŸ” Insights into the behaviour of inexact stochastic gradients in bilevel problems, with practical assumptions and convergence results.
βœ… Faster performance vs. adaptive deterministic bilevel methods.
βœ… Better generalisation for imaging tasks.

17.12.2024 16:54 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

In this work, we make a theoretical contribution by connecting stochastic approximate hypergradients in bilevel optimisation to the theory of stochastic nonconvex optimisation.

Under mild assumptions, we prove these hypergradients satisfy the Biased ABC assumption for SGD.

17.12.2024 16:54 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
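For context, the (unbiased) ABC condition from the stochastic nonconvex optimisation literature bounds the second moment of a stochastic gradient estimate g(x); a generic form is sketched below. The exact constants and the additional bias bound in the preprint's "Biased ABC" variant may differ.

```latex
% Expected-smoothness (ABC) bound on a stochastic gradient estimate g(x),
% for an objective f with infimum f^{\inf}; A, B, C >= 0 are constants.
\mathbb{E}\!\left[\|g(x)\|^2\right]
  \;\le\; 2A\,\bigl(f(x) - f^{\inf}\bigr) \;+\; B\,\|\nabla f(x)\|^2 \;+\; C
```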

Are you interested in bilevel learning for tasks like learning data-adaptive regularisers (e.g., Field of Experts) or optimising forward operators (e.g., undersampled MRI) in variational regularisation on large datasets?

Check out our latest preprint! 🧡

arxiv.org/abs/2412.12049

17.12.2024 16:54 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
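For readers new to the topic, bilevel learning problems of the kind mentioned in this thread typically take the following form: upper-level parameters θ (hyperparameters, regulariser weights, or a forward operator) are trained through the minimiser of a lower-level variational problem. This is a generic sketch, not necessarily the preprint's exact formulation:

```latex
\min_{\theta}\; \frac{1}{m}\sum_{i=1}^{m} \ell\bigl(\hat{x}_i(\theta)\bigr)
\quad\text{s.t.}\quad
\hat{x}_i(\theta) \in \arg\min_{x}\;
  \tfrac{1}{2}\,\|A_\theta x - y_i\|^2 + R_\theta(x)
```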

This is joint work with Lea Bogensperger, Matthias J. Ehrhardt, Thomas Pock, and Hok Shing Wong.

14.12.2024 20:56 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Post image

πŸ“Š Below, see how the learned ICNN regulariser performs on a sparse-angle computed tomography problem. Our bilevel framework shows significant improvement compared to adversarial training-based methods previously introduced in the literature.

14.12.2024 20:56 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

✨ Key Highlights:
β€’A-posteriori error bounds for inexact hypergradients computed by primal-dual style differentiation.
β€’Adaptive, convergent bilevel framework with primal-dual style differentiation.
β€’Application to learning data-adaptive regularisers (e.g., ICNNs)!

14.12.2024 20:56 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
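An ICNN guarantees convexity in its input by constraining hidden-to-hidden weights to be nonnegative and using convex, nondecreasing activations. A minimal sketch of the idea follows; layer sizes, initialisation, and class name are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

class ICNN:
    """Minimal input-convex neural network sketch.

    The output is convex in the input x because the hidden-to-hidden
    weights Wz are nonnegative and ReLU is convex and nondecreasing,
    so each layer preserves convexity by induction.
    """

    def __init__(self, dim_in=4, widths=(8, 8, 1)):
        self.Wx, self.Wz, self.b = [], [], []
        prev = 0
        for w in widths:
            # input skip-connection weights may have any sign (affine in x)
            self.Wx.append(rng.standard_normal((w, dim_in)) * 0.5)
            # nonnegative hidden-to-hidden weights preserve convexity
            self.Wz.append(np.abs(rng.standard_normal((w, prev))) * 0.5
                           if prev else None)
            self.b.append(np.zeros(w))
            prev = w

    def __call__(self, x):
        z = None
        for Wx, Wz, b in zip(self.Wx, self.Wz, self.b):
            pre = Wx @ x + b + (Wz @ z if Wz is not None else 0.0)
            z = np.maximum(pre, 0.0)  # ReLU: convex and nondecreasing
        return z[0]
```

A quick numerical check of convexity along a segment, f(½(a+c)) ≤ ½(f(a)+f(c)), holds exactly for this construction.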
Preview
An Adaptively Inexact Method for Bilevel Learning Using Primal-Dual Style Differentiation We consider a bilevel learning framework for learning linear operators. In this framework, the learnable parameters are optimized via a loss function that also depends on the minimizer of a convex opt...

πŸ” Are Primal-Dual methods like PDHG your go-to for imaging tasks?
πŸ’‘ Interested in using them within a bilevel framework to learn data-adaptive regularisers like input-convex neural networks (ICNNs)?

Check out our latest preprint:

arxiv.org/abs/2412.06436

14.12.2024 20:56 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
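The PDHG (Chambolle–Pock) method mentioned above alternates a dual ascent step, a primal descent step, and an over-relaxation step. As a self-contained illustration, here is a generic PDHG solver for 1-D total-variation denoising; this is a standard sketch of the lower-level solver only, not the bilevel differentiation scheme from the preprint:

```python
import numpy as np

def pdhg_tv_denoise(b, lam, n_iter=500):
    """Minimise 0.5*||x - b||^2 + lam*||D x||_1 with PDHG,
    where D is the 1-D forward-difference operator."""
    n = len(b)
    D = lambda x: x[1:] - x[:-1]                    # forward differences
    Dt = lambda y: np.concatenate(                  # adjoint of D
        ([-y[0]], y[:-1] - y[1:], [y[-1]]))
    # step sizes: tau * sigma * ||D||^2 <= 1, using ||D||^2 <= 4
    tau = sigma = 0.49
    x = b.copy()
    x_bar = x.copy()
    y = np.zeros(n - 1)
    for _ in range(n_iter):
        # dual step: prox of (lam*||.||_1)^* = projection onto [-lam, lam]
        y = np.clip(y + sigma * D(x_bar), -lam, lam)
        # primal step: prox of 0.5*||. - b||^2
        x_new = (x - tau * Dt(y) + tau * b) / (1.0 + tau)
        # over-relaxation
        x_bar = 2.0 * x_new - x
        x = x_new
    return x
```

On a noisy piecewise-constant signal, the iterate drives the composite objective below its value at the noisy input, which is an easy sanity check of convergence.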