The open source community is unstoppable: 4M total downloads for DeepSeek models on @hf.co, with 3.2M coming from the 600+ models created by the community. That's 30% more than yesterday!
Generate RAG data with the Synthetic Data Generator to improve your RAG system!
1️⃣ Generate from your documents, dataset, or dataset description.
2️⃣ Configure it.
3️⃣ Generate the synthetic dataset.
4️⃣ Fine-tune the retrieval and reranking models.
5️⃣ Build a RAG pipeline.
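Step 4 above boils down to shaping the synthetic output into training pairs for a retriever. A minimal pure-Python sketch of that data preparation; the row schema (`question`, `context` fields) is an assumption for illustration, not the tool's documented output format:

```python
# Hypothetical sketch: shape synthetic RAG rows into retriever training pairs.
# The input field names ("question", "context") are assumptions for
# illustration, not the Synthetic Data Generator's exact schema.

def to_retrieval_pairs(rows):
    """Turn synthetic (question, context) rows into (anchor, positive) pairs."""
    pairs = []
    for row in rows:
        question = row.get("question", "").strip()
        context = row.get("context", "").strip()
        if question and context:  # skip incomplete rows
            pairs.append({"anchor": question, "positive": context})
    return pairs

synthetic_rows = [
    {"question": "What is RAG?", "context": "RAG combines retrieval with generation."},
    {"question": "", "context": "Orphan context without a question."},
]
pairs = to_retrieval_pairs(synthetic_rows)
print(len(pairs))  # 1
```

Pairs in this shape can then feed a standard contrastive fine-tuning setup for the retrieval and reranking models.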
Screenshot of the Introduction to Argilla in Chapter 10 of the Hugging Face NLP course
New chapter in the Hugging Face NLP course! 🤗
We've added a new chapter about the very basics of Argilla to the Hugging Face NLP course. Learn how to set up an Argilla instance, load & annotate datasets, and export them to the Hub.
Any feedback for improvements is welcome!
Screenshot of this text: Total annotations submitted: 50,035 Languages with annotations: 115 Total contributors: 419
50,000+ annotations reached! The FineWeb2-C community is helping build better language models, one annotation at a time.
Current stats:
- 115 languages represented
- 419 amazing contributors
- 24 languages with complete datasets
But we're not done yet! 🧵
You could try to generate one with this tool:
huggingface.co/spaces/argil...
High-quality data for fine-tuning language models for free and at the click of a button!
Prompt, then wait for your dataset to be pushed to Argilla or the Hub
Evaluate, review and fine-tune a model.
Blog:
Was 2024 the year of datasets? Is 2025 the year for community-built datasets?
It's exciting to see the progress of many languages in FineWeb-C:
- Total annotations submitted: 41,577
- Languages with annotations: 106
- Total contributors: 363
Progress bars showing remaining annotations needed for 15 languages in FineWeb-C dataset, ranging from 6 to 593 annotations needed
The finish line is near! We're building FineWeb-Edu for many languages and need your help 🤗
Many FineWeb-C languages are close to 1,000 annotations!
Assamese is 99.4% done, French needs 64 more annotations, and Tamil needs 216.
Please help us reach the goal: huggingface.co/spaces/data-...
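The percentages above follow from the sprint's per-language target of 1,000 annotations. A quick check of the arithmetic (the 1,000 goal is taken from the posts; the helper is just for illustration):

```python
GOAL = 1_000  # per-language annotation target mentioned in the sprint posts

def progress(done, goal=GOAL):
    """Return (percent complete, annotations still needed)."""
    return round(100 * done / goal, 1), goal - done

print(progress(994))  # (99.4, 6)  -> Assamese: 6 annotations to go
print(progress(936))  # (93.6, 64) -> French: 64 annotations to go
```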
Release notes:
github.com/argilla-io/a...
🔥 Ending 2024: a full data annotation journey on the Hugging Face Hub, from raw data to training-ready datasets!
With Argilla 2.6.0, push your data to the Hub from the UI.
Let's make 2025 the year anyone can build more transparent and accountable AI, with no coding or modeling skills needed.
Argilla v2.6.0 is here!
Let me show you how EASY it is to export your annotated datasets from Argilla to the Hugging Face Hub. 🤩
Take a look at this quick demo!
More info about the release at github.com/argilla-io/a...
#AI #MachineLearning #OpenSource #DataScience #HuggingFace #Argilla
🔥 We got great feedback on this: "Synthetic Data Generator"
A no-code tool to create datasets with LLMs, letting ANYONE build datasets and models in minutes, without writing a single line of code.
Blog: https://buff.ly/4gybyoT
GitHub: https://buff.ly/49IDSmd
Space: https://buff.ly/3Y1S99z
Well, around 10 percent of the initial goal is complete, and so far it's been quite a one-man-army effort. We're still on the hunt for more people to join and contribute to this open-source initiative.
@hf.co
data-is-better-together-fineweb-c.hf.space/share-your-p...
The sprint for crowdsourced annotations with Argilla is in full swing over at data-is-better-together-fineweb-c.hf.space
I've just contributed 100 examples to this dataset:
data-is-better-together-fineweb-c.hf.space/share-your-p...
Big thanks to @dvilasuero.hf.co, @nataliaelv.hf.co and team ๐
I've been building a small library for working with prompt templates on the @huggingface.bsky.social Hub: `pip install prompt-templates`. Motivation:
The community currently shares prompt templates in a wide variety of formats: in datasets, in model cards, as strings in .py files, as .txt/... 🧵
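The problem the library targets can be seen with a toy stand-in: one canonical, checkable template object instead of strings scattered across files. This sketch illustrates the concept only; it is not the `prompt-templates` library's actual API:

```python
# Toy illustration of a shareable prompt template: a single object with
# documented variables, instead of raw strings in .py/.txt files.
# NOTE: this is NOT the prompt-templates library API, just the idea.
from string import Template

class PromptTemplate:
    def __init__(self, template, variables):
        self.template = Template(template)
        self.variables = variables  # documented, checkable inputs

    def populate(self, **kwargs):
        missing = set(self.variables) - set(kwargs)
        if missing:
            raise ValueError(f"missing variables: {sorted(missing)}")
        return self.template.substitute(**kwargs)

summarize = PromptTemplate(
    "Summarize the following text in $n_words words:\n$text",
    variables=["n_words", "text"],
)
print(summarize.populate(n_words=10, text="Argilla is a data annotation tool."))
```

Storing templates as structured objects like this makes them versionable and shareable on the Hub, which is the library's motivation.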
Desperate to contribute to the development of Scots language AI. I've just contributed 16 examples to this dataset:
data-is-better-together-fineweb-c.hf.space/share-your-p...
I've just contributed 156 examples to the FineWeb 2 Spanish dataset:
data-is-better-together-fineweb-c.hf.space/share-your-p...
If you want to contribute, sign in with @hf.co and find your language
Join this Space, search for your language, and start contributing:
huggingface.co/spaces/data-...
Don't know how to start, or want to discuss? Join:
huggingface.co/spaces/Huggi...
Help shape the future of multilingual Open Source AI!
Join the FineWeb 2 Community Annotation Sprint to create an open training dataset with full transparency and human validation in many languages.
Review datasets in your language and help identify the best sources for training.
✨ Argilla 2.5.0 is live and it comes with webhook listener support to supercharge your workflows!
#AI #MachineLearning #Webhooks #TechUpdate
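Webhook payloads are commonly authenticated with an HMAC signature before a listener acts on them. A generic stdlib sketch of that pattern; the header/digest scheme shown is a common convention assumed for illustration, not Argilla's specific webhook contract (see the release notes for that):

```python
# Generic webhook-verification pattern (stdlib only). The hex-digest
# SHA-256 scheme is a common convention, assumed here for illustration;
# check the Argilla release notes for its actual webhook contract.
import hashlib
import hmac

def verify_webhook(secret: bytes, payload: bytes, signature_hex: str) -> bool:
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)  # constant-time compare

secret = b"shared-secret"
payload = b'{"event": "record.updated"}'
sig = hmac.new(secret, payload, hashlib.sha256).hexdigest()
print(verify_webhook(secret, payload, sig))    # True
print(verify_webhook(secret, payload, "bad"))  # False
```

The constant-time comparison matters: naive `==` on signatures can leak timing information.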
๐ Open Image Preferences is an Apache 2.0 licensed dataset for text-to-image generation by the @hf.co community. This dataset contains 10K text-to-image preference pairs across image generation categories, using different model families and prompt complexities.
Blog: huggingface.co/blog/image-p...
Open Image Preferences released!
- Open-source dataset for text-to-image
- 10K samples manually evaluated by the HF community.
- Binarized format for SFT, DPO, or ORPO.
It comes with a nice blog post explaining the steps to pre-process and generate the data, along with the results.
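"Binarized" here means each preference judgment is reduced to a chosen/rejected pair, the shape DPO- and ORPO-style training expects. A minimal sketch of that transformation; the field names are assumptions for illustration, not the dataset's exact schema:

```python
# Minimal sketch of binarizing an image-preference row for DPO-style training.
# Input/output field names are assumptions, not the dataset's exact schema.

def binarize(row):
    """Map a rated pair to {prompt, chosen, rejected}."""
    a, b = row["image_a"], row["image_b"]
    chosen, rejected = (a, b) if row["preferred"] == "a" else (b, a)
    return {"prompt": row["prompt"], "chosen": chosen, "rejected": rejected}

row = {"prompt": "a red fox in snow", "image_a": "img_001",
       "image_b": "img_002", "preferred": "b"}
print(binarize(row))
# {'prompt': 'a red fox in snow', 'chosen': 'img_002', 'rejected': 'img_001'}
```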
Added!!
I'd love to, yes!!
Thanks, Pasquale. I remember you recommended the MMLU Redux paper when we started this project. I've been in charge of the human annotation / Argilla part and unfortunately didn't find the time to check this curation process.
Could you share a pointer to the curated version so we can see what can be done?
Open dataset: huggingface.co/datasets/Coh...
Paper: arxiv.org/pdf/2412.03304
Announcing Global-MMLU: an improved, open MMLU dataset with evaluation coverage across 42 languages.
The result of months of work with the goal of advancing Multilingual LLM evaluation.
Built together with the community and amazing collaborators at Cohere4AI, MILA, MIT, and many more.
We're about to launch the biggest collaboration effort since Open Assistant.
Let's get the highest quality data for open foundation models with all the nuances & diversity of each language, all with data provenance and transparency
Join us as language lead:
docs.google.com/forms/d/10XI...
Screenshot of a dashboard showing the number of languages with a lead and languages without a lead
Next week we're launching a collaborative annotation effort to build a big multilingual dataset, so you can have high-quality data in your language.
We are really close to getting leads for 100 languages! Can you help us cover the remaining 200?