Hope you enjoy the new survey! Much more to come out of my lab soon, so watch this space!
You might remember that a few months ago we released a v1 version of the same survey. The structure is quite similar, but a truly insane number of papers have come out since. I think we looked at and added 50-100 new papers to our survey.
kennethmarino.com/computeruse/...
Collaboration with my student Md Farhan Ishmam and colleague @anamarasovic.bsky.social.
We give a high-level view of what Computer Use means, what we mean by "agents," and do a survey of datasets and methods in Computer Use. Farhan made the really beautiful agent diagram.
Been less than a year since I started my lab at @utah.edu and we already have a ton of new stuff that I can't wait to talk about soon.
I'll start today by sharing that our updated Computer Use Survey blog has been accepted to ICLR Blogposts 2026.
iclr-blogposts.github.io/2026/blog/20...
Not too late to apply for graduate admission at Utah for Fall of 2026. cs.utah.edu/graduate/prosp…
Apps due Dec 15
I'm looking for strong students interested in VLM agents with applications in Computer Use and Robotics. Please apply and mention me in your application if you're interested!
We hope this survey is useful and fun for the community! We couldn't include everything, but tried to at least give a good overview of the field. Happy to hear feedback and if you think we messed something up, feel free to DM or email me.
There's a lot of great stuff in here we think! We cite over 100 papers and websites. One thing I am very happy about is how easy it is to follow links in our survey to the bibliography which then links to the papers directly.
Then we talk about the LLM-Agent approaches and try to explain and make some sense of the many components that make up an LLM-based Computer Use Agent.
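To make that concrete, here is a rough sketch of the observe-think-act loop that LLM-based Computer Use agents typically run. All the names below (take_screenshot, llm_propose_action, execute) are placeholders I made up for illustration; this is not the interface of any particular system in the survey.

```python
# Illustrative sketch only: a minimal observe-think-act loop for an
# LLM-based Computer Use agent. Every function here is a stub/placeholder.

def take_screenshot() -> str:
    """Stub: grab an observation of the screen (pixels or an accessibility tree)."""
    return "<screen state>"

def llm_propose_action(goal: str, observation: str, history: list[str]) -> str:
    """Stub: prompt a language model with the goal, current screen, and past actions."""
    return "click(submit_button)"

def execute(action: str) -> bool:
    """Stub: perform the GUI action in the environment; return True when the task is done."""
    return action == "click(submit_button)"

def run_agent(goal: str, max_steps: int = 10) -> None:
    history: list[str] = []
    for _ in range(max_steps):
        observation = take_screenshot()                           # observe
        action = llm_propose_action(goal, observation, history)   # plan / decide
        history.append(action)                                    # simple memory
        if execute(action):                                       # act, stop when done
            break

run_agent("book a flight from SLC to Vienna")
```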
We then spend a lot of time looking at the different earlier (Pre-LLM) approaches to the problem, including the RL-from-scratch period and even the very earliest planning-based approaches.
We try to categorize all the environments and datasets in common use and let users click/filter and browse through each of the datasets.
First, we try to ground our survey, say what we even mean by "Computer Use" and define some key terms, grounded in the classical agent-environment framework.
You can view the survey here: kennethmarino.com/computeruse/...
We tried to make it as interactive and fun as possible, including a retro DOS theme to go along with the subject.
Credit to Claude for helping me create the website :)
Super excited that the Computer Use survey I've been working on w/ @anamarasovic.bsky.social for a while now is ready! Originally we were planning on a more traditional survey paper but as more surveys came out we decided on an interactive website survey.
Arriving at #ACL2025 #ACL2025NLP in a few hours!
See you at the welcome reception & catch me at the poster session on Tuesday morning (July 29), where Jesse will present our work introducing new tasks for supporting legal brief writing: arxiv.org/abs/2506.06619
I can't find it but my favorite was when someone asked ChatGPT to set an alarm for them and it pretended to set one and the person missed their important meeting
Also, this is my first paper (hopefully of many) with my @utah.edu colleagues! Feel very welcomed so far and really excited about the things we'll be able to do together. And we just had another great hiring year with several new colleagues, so expect lots of exciting stuff soon!
Read Fateme's full thread, but what I find interesting about the paper is that LLMs are already pretty good at summarization but are still quite bad at finding relevant cases. With many retrieval benchmarks becoming saturated, I think this is an exciting place for new work!
Really excited about this!
As backstory, Jesse Woo started this project when I taught an ML Datasets class at Columbia.
Then we joined up with @anamarasovic.bsky.social and @fatemehc.bsky.social and really kicked it into high gear. Would not have happened without the full team!
Join us on June 11, 9am to discuss all things fine-grained!
We are looking forward to a series of talks on semantic granularity, covering topics such as machine teaching, interpretability and much more!
Room 104 E
Schedule & details: sites.google.com/view/fgvc12
@cvprconference.bsky.social #CVPR25
We are so excited to have this amazing line-up of speakers!!
Randall Balestriero, Kai Han, Mia Chiquier, Kenneth Marino (@kennethmarino.bsky.social), Elisa Ricci, Thomas Fel (@thomasfel.bsky.social)
We just dropped a new paper studying LLMs on the "Blicket Test" to ask the question: do language models explore like adults or like children? We also show how to get them to act more like children (i.e. more like scientists). All credit to Anthony and team, this came together super well!
Really glad you like the paper! Anthony and team did a great job on this.
Are you tired of your static fixed benchmarks? Feel like your data is in a rut? You want to change something but you just feel stuck? Try ReCogLab!
Really proud of this work and of my fantastic colleagues at Google DeepMind who put in so much hard work.
See you all in Singapore!
You don't know me, man. Get off your high horse. Blocking you now
I literally do none of those things. I don't work in any of these areas. I think you need to step back and ask why you're fighting random researchers who don't decide these things instead of the people you actually seem mad at
?????
I post about AI papers, what on Earth are you talking about?
People who actually believe in the promise of AI should be the most upset about the over-claiming, over-hyping, and overt secrecy and unwillingness to expose work to scrutiny that have come to characterize much of the "feel the AGI" crowd.
This field is so old that there was famously a report calling it overhyped, the Lighthill Report in 1973, which caused funding to plummet. We've already been through at least a few hype cycles.
This is why open source and publishing are important. Maybe OpenAI didn't do anything sus with held-out splits. But if code and models are never released and the experiments and methods are not published or described in sufficient detail, we can't reproduce it or scrutinize any of these decisions.
Just read a fantastic web agent paper. Game changer!
* Treats it as an RL problem
* Trains rather than just prompting
* Beats closed models
* Releases code and model so other people can build off of their work
Many great ideas in this paper too; definitely give it a read
arxiv.org/pdf/2411.02337
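If the RL framing in the bullets above is new to you, here is a generic, hypothetical sketch of what "treating web interaction as an RL problem" means: pages are states, clicks and keystrokes are actions, and completing the task yields reward. The names are invented for illustration; this is not the linked paper's actual method or code.

```python
# Generic sketch of web interaction framed as RL; all names are invented
# and this is not the training recipe from the linked paper.
import random

def reset_env() -> str:
    """Stub: start a fresh browser episode and return the initial page state."""
    return "page:start"

def step_env(state: str, action: str) -> tuple[str, float, bool]:
    """Stub: apply a click/type action; return (next_state, reward, done)."""
    done = action == "click:submit"
    return "page:next", (1.0 if done else 0.0), done

def policy(state: str) -> str:
    """Stub: the trained policy (e.g. a fine-tuned LLM) picks the next action."""
    return random.choice(["click:submit", "type:query", "scroll:down"])

def collect_episode(max_steps: int = 20) -> float:
    """Roll out one episode; in training, the return would drive a policy update."""
    state, total_return = reset_env(), 0.0
    for _ in range(max_steps):
        action = policy(state)
        state, reward, done = step_env(state, action)
        total_return += reward
        if done:
            break
    return total_return

print(collect_episode())
```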