Happening in half an hour at #NeurIPS2024!📢
Come by our poster in West Ballroom #5706 about the @mlcommons.org Croissant metadata format.
Paper: arxiv.org/pdf/2403.19546
Croissant format: github.com/mlcommons/cr...
Releasing SmolVLM, a small 2-billion-parameter vision-language model (VLM) built for on-device/in-browser inference with images and videos.
It outperforms all comparable models at similar GPU RAM usage and token throughput.
Blog post: huggingface.co/blog/smolvlm
Meet Tülu 3, a set of state-of-the-art instruct models with fully open data, eval code, and training algorithms.
We invented new methods for fine-tuning language models with RL and built upon best practices to scale synthetic instruction and preference data.
Demo, GitHub, paper, and models 👇
I've found starter packs on NLP, vision, graphics, etc. But personally, I'd love to connect with and hear from researchers working on vision-language. So, let me know if you'd like to join this starter pack — I'd be happy to add you!
go.bsky.app/TENRRBb
Hey all! I started a second starter pack with people who didn't make the first one, please let me know if you'd like to be added:
go.bsky.app/JgneRQk
Can you add me pls, thanks!
After going to NAACL, ACL and #EMNLP2024 this year, here are a few tips I’ve picked up about attending #NLP conferences.
Would love to hear any other tips if you have them!
This proved very popular on another (more evil) social media platform, so sharing here also 🙂
My 10 tips:
The NLP labs starter pack is here! go.bsky.app/LKGekew Let us know if you want to be added!
Trying to bring ML/NLP/et al. people from ETH Zürich together. Ping me to add you. 🙂
bsky.app/starter-pack...