Echo-dash: Keeping ecologists in the loop with an open source, online ecoacoustic dashboard for interactive exploration of spatiotemporal soundscape data
doi.org/10.32942/X2T...
🚨 Paper now published! 🚨
The Effects of External Cue Overlap and Internal Goals on Selective Memory Retrieval.
Grateful for thorough reviews that made it stronger. Out now in #EJN: doi.org/10.1111/ejn..... w/ @alexamorcom.bsky.social @MattPlummer @ivorsimpson.bsky.social. Updated summary 🧵👇
For those interested in a PhD using probabilistic machine learning for inverse problems, medical imaging, computer vision or ecological modelling, drop me an email!
Please share with your contacts! Deadline is 19th February.
Email me and/or DSAI_administration@sussex.ac.uk if you have any questions about the process. @sussexai.bsky.social @drtnowotny.bsky.social @wijdr.bsky.social anyone else on here yet?!
We've just announced this year's call for applications to our Sussex AI PhD studentships at the University of Sussex! You can find all the details at www.sussex.ac.uk/study/fees-f.... This year we've included a few suggested project directions to give some inspiration: tinyurl.com/2e3mxbch.
We even had slushies afterwards...
So it turns out that laser tag is an appropriate/popular social activity for an AI research group! Congratulations to the Connect lab of @drtnowotny.bsky.social on their victory in the inaugural AI research laser zone cup! Thanks also to the Buckley lab for coming joint second with us.
Amazing news, congratulations Maria!
One of several opportunities to come and do a PhD in our lab! Happy to discuss supervision in all things MRI analysis
For those who want to dig into this area, it's worth reading a bit more about tools for understanding learning in ML. I'd definitely recommend starting with @simonprinceai.bsky.social's blogs on Gradient Flow and the Neural Tangent Kernel (NTK) (links from udlbook.github.io/udlbook/).
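To make the NTK a little more concrete, here's a minimal sketch (my own toy example, not taken from those blogs): the empirical NTK of a small network is just the Gram matrix of per-example parameter Jacobians, K(x1, x2) = J(x1) J(x2)^T, which jax.jacobian can compute directly.

```python
import jax
import jax.numpy as jnp
from jax.flatten_util import ravel_pytree

def init_params(key, sizes=(1, 64, 1)):
    # Simple MLP: list of (weights, biases) per layer
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (m, n)) / jnp.sqrt(m), jnp.zeros(n))
            for k, m, n in zip(keys, sizes[:-1], sizes[1:])]

def mlp(params, x):
    for W, b in params[:-1]:
        x = jnp.tanh(x @ W + b)
    W, b = params[-1]
    return (x @ W + b).squeeze(-1)  # scalar output per example

def empirical_ntk(params, x1, x2):
    flat, unravel = ravel_pytree(params)
    # Jacobian of outputs w.r.t. the flattened parameter vector: shape (n, P)
    jac = lambda x: jax.jacobian(lambda p: mlp(unravel(p), x))(flat)
    return jac(x1) @ jac(x2).T  # (n1, n2) kernel matrix

key = jax.random.PRNGKey(0)
x = jnp.linspace(-1.0, 1.0, 8)[:, None]
K = empirical_ntk(init_params(key), x, x)
print(K.shape)  # (8, 8)
```

In the infinite-width limit this kernel stays fixed over training, which is what makes the NTK such a useful lens on gradient-flow dynamics; at finite width, watching it drift is itself informative.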
One of the authors wrote a very nice thread on this, so I'll point to that rather than try and explain the methodology myself! bsky.app/profile/alic...
They found that a divergence in the model's effective complexity when run on the test set, rather than the training set, seemed to be linked to generalisation. My interpretation is that the model can rely on memorisation for training examples, but needs to interpolate on test examples.
In our new "Advanced Methods for Machine Learning" module, this week's seminar dug into an upcoming NeurIPS paper (arxiv.org/abs/2411.00247) that provides a new tool for analysing changes in effective model complexity when predicting on the training vs. test set over the course of training.
Over the last year or so I've been reading a lot more papers digging into why deep learning is effective.
It is counterintuitive for students learning about ML that phenomena like double descent and grokking are still not fully explained, despite us having full access to the model weights and training dynamics!
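For anyone who wants to see double descent first-hand, here's a toy sketch (a standard random-features construction of my own, not the paper's setup): test error peaks near the interpolation threshold, where the number of random features matches the number of training points, then falls again as the model keeps growing.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, d = 100, 2000, 20
w_true = rng.normal(size=d)  # hypothetical teacher weights

def make_data(n):
    X = rng.normal(size=(n, d))
    y = np.sin(X @ w_true / np.sqrt(d)) + 0.1 * rng.normal(size=n)
    return X, y

Xtr, ytr = make_data(n_train)
Xte, yte = make_data(n_test)

for width in (10, 50, 90, 100, 110, 200, 1000):
    W = rng.normal(size=(d, width)) / np.sqrt(d)  # fixed random ReLU features
    feats = lambda X: np.maximum(X @ W, 0.0)
    # lstsq returns the minimum-norm solution once the system is underdetermined
    beta, *_ = np.linalg.lstsq(feats(Xtr), ytr, rcond=None)
    mse = np.mean((feats(Xte) @ beta - yte) ** 2)
    print(f"width={width:5d}  test MSE={mse:.3f}")
```

The minimum-norm solution is what tames the variance past the interpolation peak; exact numbers will vary with the noise level and seed, but the hump around width ≈ n_train is robust.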