Vaguepostmaxxing
Hand-annotation is underrated for matching benchmarks
Big if true 👀
Nice squad! Good luck.
Classic sneaky enrollment. No barrage of emails threatening desk reject thus far; it might still be ahead of us :)
I second this. Feels great.
Yes exactly, ViT-scale training of around 300M-600M params is the ballpark here
Happy accidents like forgetting to turn off a run and then taking a vacation never happen with an LR schedule (I guess they never happen on our clusters either, but hypothetically :D)
Just chugging along at a safe LR for a long time feels good in spirit. DINOv3 style (without the degradation in feature quality, don't mind that part...)
Shoutout Västerås btw (from your picture). Pretty middling Swedish town XD
As the LR decays, I empirically find the gradients almost always go up, which risks derailing your training loss. That almost never happens to me with a constant LR, especially with big networks. Also, without a schedule you can resume at any time and leave training running for as long as you want :)
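Not anyone's actual training code, just a minimal sketch of that "constant LR, resume anytime" workflow, assuming a PyTorch-style loop; the toy model, stand-in batch/loss, and the `ckpt.pt` path are all hypothetical:

```python
import os
import torch
import torch.nn as nn

CKPT = "ckpt.pt"  # hypothetical checkpoint path

model = nn.Linear(128, 10)  # toy stand-in for a real ViT
# Constant LR: no scheduler object at all, so a resumed run behaves
# exactly like one that was never interrupted.
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)

step = 0
if os.path.exists(CKPT):
    # Resume from wherever training last stopped.
    state = torch.load(CKPT)
    model.load_state_dict(state["model"])
    opt.load_state_dict(state["opt"])
    step = state["step"]

while True:  # leave it running as long as you want
    x = torch.randn(32, 128)         # stand-in batch
    loss = model(x).pow(2).mean()    # stand-in loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    step += 1
    if step % 1000 == 0:
        torch.save({"model": model.state_dict(),
                    "opt": opt.state_dict(),
                    "step": step}, CKPT)
```

The point being: there is no scheduler state to restore or get wrong, so stopping and resuming is indistinguishable from never stopping at all.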
Biggest psyop: Learning rate decay. @parskatt.bsky.social showed me the truth but I refused to accept it...
Hehe I am a zoomer, only digital :)
Malmooo
I don't like the NeurIPS template. Feels like you have to do hacky stuff to fit your figures / tables.
I am starting to like the ECCV template. It feels like you can include more tables / figures, so I can fit a lot more within the page limit. Might be bad for readers :)
East coast US also going to sleep without decisions out 🤔
ICLR: Radio silence and then a full reset due to the OpenReview leak... :)
Big!
"Patience you must have my young researcher"...
Malmoooo, let's go!
RoCo = Robust Correspondences
He died for our sins
Interesting :), thanks for sharing
1337 > 1024
Sounds nice!
Congratulations! Well deserved!
Hehe yeah, I did not interpret the email as meaning our submissions would be redacted FBI-style :D
RIP those who stayed up for an early submission number... :)