looking forward to seeing how this compares to Claude Code... if ads are good and non-intrusive, it might be the way forward
access to sota coding agents without paying a dime (ad-based model)
Choose 20 books that have stayed with you or influenced you. One book per day for 20 days, in no particular order. No explanations, no reviews, just covers.
3/20
#BookChallenge
#Books
#BookSky
#20daybookchallenge
Choose 20 books that have stayed with you or influenced you. One book per day for 20 days, in no particular order. No explanations, no reviews, just covers.
I'm joining in with the book challenge :)
#BookChallenge
#Books
#BookSky
#20daybookchallenge
2/20
Choose 20 books that have stayed with you or influenced you. One book per day for 20 days, in no particular order. No explanations, no reviews, just covers.
I'm joining in with the book challenge :)
#BookChallenge
#Books
#BookSky
#20daybookchallenge
1/20
Thanks for the great suggestion! Seems to work quite well. github.com/thomasht86/a...
Now I just have to streamline the workflow a bit.
uv ❤️
Hf link?
@github.com when will you add this to Android/iOS?
@simonwillison.net ?
Is there a tool that lets me create a GitHub PR (private repo) with voice from my phone, or do I have to make one?
Yes
Are there limits to what you can learn in a closed system? Do we need human feedback in training? Is scale all we need? Should we play language games? What even is "recursive self-improvement"?
Thoughts about this and more here:
arxiv.org/abs/2411.16905
A librarian who previously worked at the British Library created a relatively small dataset of bsky posts, hundreds of times smaller than those used by previous researchers, to help folks create toxicity filters and the like.
So people bullied him & posted death threats.
He took it down.
Nice one, folks.
Excited to try this! Smaller and better
Interesting! Thanks for sharing
Releasing SmolVLM, a small 2-billion-parameter Vision+Language Model (VLM) built for on-device/in-browser inference with images/videos.
Outperforms all models at similar GPU RAM usage and token throughput.
Blog post: huggingface.co/blog/smolvlm
Don't joke about this
with this growth, how long before someone from @bsky.app reaches out to consider moving from OpenSearch to Vespa?
Just adding a comment and some emojis to boost Simon in my feed
Gonna try this out! Thanks
And a lot of LLM bias might be much more subtle than this. Think LLM-assisted judgments/scoring.