"everyone hates llms"
hundreds of millions (and growing) are using llms regularly. techcrunch.com/2026/02/27/c...
thread of claims continues...
"everyone hates llms"
hundreds of millions (and growing) are using llms regularly. techcrunch.com/2026/02/27/c...
thread of claims continues...
Nothing to see, just very powerful pattern matching. www-cs-faculty.stanford.edu/~knuth/paper...
Would be nice to have terminal previews to test TUI apps.
Looks really nice. So basically, instead of having a discrete GPU + VRAM, it computes with the iGPU + RAM, kind of like unified memory on Macs (I was thinking about getting a Mac Studio with 128GB+ RAM some day, but this looks a lot more affordable :))
It takes so much memory though, I can barely get 32k context with q4 of this model on 24GB VRAM card.
Polarising open-source into being pro-AI or anti-AI is going to be so good for FOSS in general...
It was given this challenge to prove its AI coding methods viable: a task that's unlikely to have many examples in the AI's training data. It of course accepted the challenge, because it knows nothing about what this means.
That way it's a fair fight.
Gleam v1.14.0 is out now! Merry Christmas everyone! 🎁
gleam.run/news/the-hap...
I’m no expert, but isn’t the multilayer perceptron network part of the transformer architecture?
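It is: each transformer layer contains a position-wise feed-forward (MLP) block after the attention sub-layer. A minimal NumPy sketch of just that block (dimensions and weights here are illustrative, not from any real model):

```python
import numpy as np

def gelu(x):
    # tanh approximation of GELU, a common activation in transformer MLPs
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

def mlp_block(x, w1, b1, w2, b2):
    # position-wise feed-forward: expand to d_ff, apply nonlinearity,
    # project back to d_model; applied independently at every position
    return gelu(x @ w1 + b1) @ w2 + b2

rng = np.random.default_rng(0)
d_model, d_ff, seq = 8, 32, 4          # toy sizes for illustration
x = rng.standard_normal((seq, d_model))
w1 = rng.standard_normal((d_model, d_ff)) * 0.1
b1 = np.zeros(d_ff)
w2 = rng.standard_normal((d_ff, d_model)) * 0.1
b2 = np.zeros(d_model)

out = mlp_block(x, w1, b1, w2, b2)
print(out.shape)  # (4, 8): same shape as the input, as required inside a layer
```

In full models this block typically holds the majority of the parameters, which is why "it's just an MLP" and "it's a transformer" aren't really opposing claims.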
It takes 0.04 liters of water to make a single AI image. Meanwhile, it takes 5 liters to make a single piece of paper, and presumably even more when you add a pencil to it.
Using AI to make art is literally better for the environment than pencil and paper. www.theatlantic.com/technology/a...
I shared a short, succinct summary of my strongest arguments that using chatbots is definitely not bad for the environment, which you can share with skeptical people in your life: andymasley.substack.com/p/a-short-su...
AI voicing sounds like a game changer for game modding.
I have a hunch that current LLMs might make it easier to launch a brand new programming language, provided you can describe it in a few thousand tokens and ship it with a compiler and linter that coding agents can use simonwillison.net/2025/Nov/7/l...
A program adjusting its weights by looking at something is also arguably not infringement, but a fair use.
"Gen AI is when ML has any output"? So the only acceptable ML is the one that does nothing?
New data on the corporate ROI from generative AI from a large-scale tracking survey by my colleagues at Wharton.
They found that 75% already have a positive return on investment from AI, and less than 5% a negative one. Also, 46% of business leaders use AI daily. knowledge.wharton.upenn.edu/special-repo...
You’re (probably) measuring application performance wrong.
Humans have a strong bias for throughput.
"I can handle X requests per second."
Real capacity engineers use response-time curves.
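The point above can be made concrete: the mean (the number throughput thinking optimizes for) hides the tail of the response-time distribution. A small sketch with made-up latency samples and a simple nearest-rank percentile helper (not a library API):

```python
import statistics

# hypothetical latency samples in milliseconds, under load
latencies_ms = [12, 14, 15, 15, 16, 18, 20, 25, 40, 180]

def percentile(samples, p):
    # nearest-rank percentile: tiny illustrative helper
    s = sorted(samples)
    k = max(0, min(len(s) - 1, round(p / 100 * len(s)) - 1))
    return s[k]

mean = statistics.mean(latencies_ms)      # 35.5 ms, looks acceptable
p50 = percentile(latencies_ms, 50)        # 16 ms, the typical request
p99 = percentile(latencies_ms, 99)        # 180 ms, the user who complains
print(mean, p50, p99)
```

A response-time curve (latency percentiles plotted against offered load) shows where the knee is; "X requests per second" alone says nothing about what the slowest users experience at that rate.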
I hope that, as we move past the first wave of AI criticism ("it doesn't work, it's all hype"), we get a new wave rooted in the acknowledgement that yes, these systems are very powerful and quite useful, and focused on a deep exploration of when AI uses are uplifting and when they are detrimental.
First world problems
I’d also argue that being anti-AI is gatekeeping: for example, for people whose first language isn’t English or who aren’t great at writing, LLMs can help them share their knowledge with the world.
AI/LLMs are a major accessibility technology. Being against this technology is to be against advancing human accessibility, and I think there’s a case to be made that it’s borderline ableist.
And with VAT, B2B transactions are effectively exempt: it’s only paid once, by the end consumer, so it has little effect on the overall economy. Tariffs, meanwhile, act as a turnover tax, compounding each time a good crosses a border.
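The compounding difference is easy to show with toy numbers (20% rate, three border crossings; purely illustrative, no real tariff schedule assumed):

```python
# Compare a 20% VAT (levied once, on the final sale) with a 20% tariff
# applied each time the good crosses a border.
base_price = 100.0
rate = 0.20
crossings = 3

# VAT: businesses reclaim it along the chain; only the end consumer pays it.
vat_final = base_price * (1 + rate)

# Tariff as a turnover tax: it compounds at every border crossing.
tariff_final = base_price * (1 + rate) ** crossings

print(round(vat_final, 2))     # 120.0
print(round(tariff_final, 2))  # 172.8
```

Same nominal rate, but the tariff-laden good ends up over 70% above the base price after three crossings, which is the compounding effect the post describes.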
if you're curious about the architecture and mechanics of LLMs, this site has a really excellent explorable interactive visualization. it helps build intuition for how massive these models are, what 'interpretability' means, and the complexity involved here
bbycroft.net/llm
Saying that we already know everything about LLMs because we know how they work on the lowest level is like saying that we know everything about mathematics by just defining axioms, or that we have computed everything just by inventing CPU.
good first issue label
hello tanglers! this october, we want to get more people into making their first ever open source contribution! to help them out, we've got the 🟣 'good-first-issue' label!
head to tangled.org/goodfirstissues to see a list of issues across different repos that you can contribute to! 🤓
That’s a very interesting part #rustlang
screenshot of repository settings on @tangled.sh/core. the "pipelines" tab is selected, and the spindle is configured to be spindle.tangled.sh. everybody is now welcome to join in the fun!
buckle up tanglers, we've been working hard! first up:
✨CI is now generally available!✨
head to your repo settings and set your workflow runner to the hosted instance.
🚀 we have also got support for secrets! you can now deploy your site on CI!
read the blog for more: blog.tangled.sh/ci
1/5
Graphic with a headline "Gleam in production" and a quote: “Adopting a new language is always a gamble, but Gleam has paid off. The belt-and-braces approach to safety and fault-tolerance has given us a system that just works, reliably, day in and day out, without constant babysitting and maintenance.” - Edward Kelly, Director of Technology at Strand. Navy background, blue and green text, pink lucy starfish in bottom right corner
See how Gleam gets the job done in production ⭐ Full case study coming soon:
“For a team like ours, with many other priorities and projects we need to work on, the confidence that Gleam gives us is worth its weight in gold.” - Edward Kelly, Director of Technology at Strand.
For us, Gleam is not just an alternative to JS, it’s a whole different way to write frontend and backend. The whole team can contribute, and even junior developers love it. Gleam is the best choice we made.
Guillaume Hivert, VP of Engineering at Steerlab, shares how being a Gleam business has been the best choice they have made!