On my way to WACV 2026! Looking forward to seeing you there! @wacvconference.bsky.social
Just imagine how many good ideas were never thought of because people just tested LLM suggestions …
Yeah, got it. But it still feels random. Maybe it will work, maybe not, but how is this advancing our knowledge of anything? What about improvements that people came up with because they had a new idea about how the world works (or doesn't) and wanted to test it? … This is what we currently lose.
On another note, this also means that AI doomsday is called off, because AI is currently so busy processing its own slop that we don't have to worry about the singularity in the near future 🤣
I'm not sure if I buy this story. Currently, as an AC, it feels like just organizing LLM output from papers to reviews and back… without a lot of good ideas in sight. LLMs just regress everything to the mean, which doesn't help if you're looking for novelty/originality.
📢 Call for Papers is OPEN!
Submission DL: March 9, 2026 (AoE)
Notification: March 20, 2026
Tracks:
- Archival: 8 pages, CVPR format
- Non-Archival: 4-page abstracts (CVPR main conf papers welcome!)
Submit: openreview.net/group?id=thecvf.com/CVPR/2026/Workshop/MMFM5
mmfm-workshop.github.io
The 5th edition of the MMFM Workshop is coming to @CVPR 2026!
"What is Next in Multimodal Foundation Models?", exploring the frontiers of vision, language, and beyond.
June 2026 | Denver, CO
Details in thread 👇
Co-organized with Krishnakumar Balasubramanian (UC Davis), Rogerio Schmidt Feris (MIT-IBM Watson AI Lab), Benjamin Hoover (Georgia Tech, IBM Research), Hilde Kuehne (@hildekuehne.bsky.social), and Zhaoyang Shi (Harvard).
CDS Silver Professor Julia Kempe (@kempelab.bsky.social) is co-organizing this year's ICLR 2026 Workshop on New Frontiers in Associative Memory.
The workshop is accepting submissions related to associative memory until Feb 14.
nfam2026.amemory.net
Excited to share our new paper: M2SVid: End-to-End Inpainting and Refinement for Monocular-to-Stereo Video Conversion, accepted at 3DV 2026! 🎬
🌐 Project: m2svid.github.io
📄 Paper: arxiv.org/abs/2505.16565
Joint work with Goutam Bhat, Prune Truong, @hildekuehne.bsky.social, and Federico Tombari 🧵👇
Congratulations to Jacob Chalk, who passed his PhD viva @compscibristol.bsky.social on
"Leveraging Multimodal Data for Egocentric Video Understanding" with no corrections!
📄 Publications in ICASSP'23, CVPR'24, CVPR'25, 3DV'25, TPAMI'25
jacobchalk.github.io
🙏 Examiners: @hildekuehne.bsky.social, @andrewowens.bsky.social & Wei-Hong Li
I think they try their best, but it's not a trivial business… I can actually understand arguments for both sides, but OR integration has progressed so significantly that I don't think there will be any going back … would still like OR to fix some more issues though… 😫
Back to CMT? 🙈 …
Ok, there is no real solution … I think we just have to weather the hype and hope that at some point all the people who just want to make money go back to finance, crypto, I-don't-care-what … As soon as people realize that there are no jobs anymore, many will move on…
Great news from the group of Hilde Kühne, Professor of Multimodal Learning at the Tübingen AI Center: the team had two papers accepted at #ICCV2025, a strong result at one of the leading international conferences in computer vision.
Wordle 1.602 3/6
⬛⬛⬛🟩⬛
🟨⬛⬛🟨⬛
🟩🟩🟩🟩🟩
second day in a row … 🥳
That's the pile of posters after 3 days of conference…
🔹 Job alert: Coordinator at LUMI AI Factory Hub
📍 Espoo 🇫🇮
🔗 Apply by Nov 19th
https://www.aalto.fi/en/open-positions/coordinator-lumi-ai-factory-hub
Goodbye @iccv.bsky.social! 🫶🌺
How can #AI help us understand & protect our planet? 🌍
Join us for the AI for Earth & Climate Sciences Workshop, part of the ELLIS UnConference (in Copenhagen 🇩🇰 on Dec 2), co-located with #EurIPS.
📅 Submit workshop contributions by Oct 24, 2025
🔗 All info: eurips.cc/ellis
Track-On2: Enhancing Online Point Tracking with Memory
By Gรถrkay Aydemir, Weidi Xie, @fguney.bsky.social
TLDR: explicit memory for point tracking that holds decoder features, similar to what MUSt3R does for reconstruction.
arxiv.org/abs/2509.19115
Join us TODAY for the 3rd Perception Test Challenge perception-test-challenge.github.io @iccv.bsky.social
Ballroom B, Full day
Amazing lineup of speakers: Ali Farhadi, @alisongopnik.bsky.social, Philipp Krähenbühl, @phillipisola.bsky.social
TODAY! Artificial Social Intelligence Workshop @iccv.bsky.social
Room 317B, Full day
Social reasoning, multimodality, and embodiment!
Speakers: Evonne Ng, @tianminshu.bsky.social, @hyunwoo-kim.bsky.social, @diyiyang.bsky.social, @hokulabs.bsky.social, @michael-j-black.bsky.social
The #ICCV2025 workshop open-access proceedings are up:
openaccess.thecvf.com/ICCV2025_wor...
Enjoy the conference everyone!
HUGE thank you to our #ICCV2025 sponsors for their incredible support!
Happening TODAY at #ICCV2025: the 1st workshop on Geometry-Free Novel View Synthesis & Controllable Video Models! (Or simply: "3D Vision in the era of Video Models")
Better grab a seat early 🙂
📣 Starts 8:30 a.m.
📍 323 C
geofreenvs.github.io
And the answer is 0.9999 for the win.
A lower confidence value always gives you worse results at more or less the same runtime.
More in our RANSAC Tutorial at #ICCV2025
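The reason cranking the confidence up to 0.9999 barely costs anything: the number of RANSAC iterations grows only logarithmically in the confidence. A minimal sketch of the standard stopping criterion N = log(1−p)/log(1−wˢ) (function name and example numbers are mine, not from the tutorial):

```python
import math

def ransac_iterations(p: float, w: float, s: int) -> int:
    """Iterations needed so that, with confidence p, at least one
    minimal sample of size s consists entirely of inliers,
    given inlier ratio w."""
    return math.ceil(math.log(1 - p) / math.log(1 - w**s))

# Example: 50% inliers, 4-point minimal samples (e.g. homography).
# Going from confidence 0.99 to 0.9999 only ~doubles the work:
print(ransac_iterations(0.99, 0.5, 4))    # 72
print(ransac_iterations(0.9999, 0.5, 4))  # 143
```

So the extra two nines of confidence roughly double the iteration count, which is usually negligible next to the cost of the rest of the pipeline.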
Join us tomorrow for the Joint Workshop on Marine Vision at #ICCV2025 (@iccv.bsky.social), from 9-6 in Room 318B!
Check out the full program here: vap.aau.dk/marinevision/
@elliot-eu.bsky.social … can we have an ELLIOT starter pack?
PhD Position open, through the ELLIS PhD program (ellis.eu/news/ellis-p...), at the Computer Vision Centre (www.cvc.uab.es), Barcelona! Linked to #ELLIOT project (www.elliot-ai.eu) a major European initiative to develop #open, #trustworthy and #multimodal #FoundationModels. @elliot-eu.bsky.social
As Jitendra Malik says, "Robotics is too important to leave it to the robotics people."