New blog post: "GNU and the AI reimplementations" antirez.com/news/162
Whoever thinks "clean room" is needed to reimplement software and release it under a new license does NOT understand copyright. Clean room is a trick to make litigation simpler; it is not mandated by law: rewrites are allowed. The new code just must not copy protected expression. Linus was Unix-aware.
Every morning there are two news items: the release of some major upgrade to some AI, and the continued lack of a DeepSeek v4 release.
Implementing a clean room Z80 / ZX Spectrum emulator with Claude Code: antirez.com/news/160
Users report that when asked "What's your name" via the Anthropic API, Sonnet 4.6 replies "DeepSeek" with high frequency. Labs consistently cross-fine-tune on chains of thought and use other models as RL signal. Also pretraining, a *key* step, is done mostly on public data. Anthropic disappoints.
The post covers code & methodology, and the anti-contamination steps taken.
A clean room emulator of the Z80, the Spectrum, and CP/M, written by Claude Code. Very skeptical about the compiler experiment (more complex, for sure) done by Anthropic, because of the fundamental flaw of not providing the agent with the specifications / papers.
You see, I'm ok with AI usage in many ways. Yet I can't understand why people who have no deficiency in written expression use it for writing. Emails. Blog posts. Comments. Why? We only lose something this way. We want your voice.
Today I had to fight with GPT 5.3 to defend my position on the complexity of a specific command of the new Redis type I'm adding (released soon, I hope). It had a great point about the worst case, but the typical case was as I claimed. We reached an agreement mentioning both... :D
Besides, Amodei has - in my opinion - a personal role in China's wake-up call about GPUs. It was unavoidable, but certain words even speed up the process, perhaps.
You know what happens with the Nvidia ban for the Chinese market? 1.5 billion technologically advanced and capable humans said "maybe we can use our own GPUs". You know what's going to happen with Anthropic-style AI usage bans, right?
After 19 years, version 2 of the Picol interpreter is out. It features [expr] in ~40 lines of code, floats, globals, can run mandelbrot.tcl, and so forth. The code is now more functional and more readable at the same time. 654 lines of code now.
github.com/antirez/picol
On the stress induced by automatic programming (English audio and subs available): youtu.be/id9QG-mQSOo?...
But in general, there isn't much special about Transformers: if you have data, GPUs, and a reinforcement learning pipeline that works, you can build a frontier model. Everybody attempting it seriously is making one. I don't believe it's a technology you can "lock in" in the long run.
I'm finding it very hackery, but this depends *a lot* on the way you use it, to be honest. Converging to a flat lack of understanding is very easy. Btw, so far Chinese models are preventing an AI oligarchy of "the few", thanks to Kimi 2.5, GLM 5, and soon DeepSeek 4: if this stops, we are fucked.
Of all human feelings, envy is the one I despise the most.
@timkellogg.me is right. I'm feeling this a lot. It's like suddenly you can fly, so you need to go somewhere even if you don't need to go anywhere. Very easy to burn out this way.
Claim: OpenAI is leaving a lot of money on the table (and a lot of Codex users) by not having a plan between $20 and $200.
Flux2.c is now Iris.c (the Greek goddess Iris, messenger of the gods and personification of the rainbow), and adds support for the zImage Turbo model as well. Naming projects after company product names is a bad long-term idea.
github.com/antirez/iris.c
Btw ambient noise does not help the encoder to do stellar work. Much better when there isn't too much noise.
Yep, it's quite incredible. There aren't noticeable "distortions", but if you really know the speaker you could tell something is a bit different. It captures ambient noise quite well, too. Never tested it on music so far. It should NOT work.
Imagine creating the UX for HBO Max and deciding to make the screen almost black for 10 seconds each time somebody moves the cursor or touches the TV remote, or each time the show starts. How broken is product design in 2026?
Note that I totally get that Claude Code is far better *practically* at many things that don't require being such an advanced thinker, but: 1. I'm more interested in very hard problems. 2. I believe it is simpler for Codex to learn Claude's smartness than for Claude to learn Codex's intelligence.
WTF, the Qwen3-TTS encoder/decoder compresses wav files about 100 times... Compression is now GPU-bound, no longer algo-bound, for the most part. The same is happening for images and videos as well, just not practical for now because of speed.
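A back-of-envelope check of what a ~100x ratio means in bytes (the 16 kHz, 16-bit mono sample rate is my assumption here, not something stated in the post):

```python
# Raw PCM: 16 kHz, 16-bit (2 bytes) mono -- an assumed, typical speech format.
raw_bytes_per_sec = 16000 * 2           # 32,000 B/s of uncompressed wav audio
compressed_bytes_per_sec = raw_bytes_per_sec / 100  # ~100x, per the post
print(raw_bytes_per_sec, compressed_bytes_per_sec)  # 32000 320.0
```

That is, roughly 320 bytes per second of speech surviving the neural codec, which is why the bottleneck becomes running the model, not the bitstream.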
The $20 Codex plan is worth more than the $200 Claude Code plan.
Much faster now.
Take a look at how simple the inference pipeline is here, on the encoder side: github.com/antirez/qwen...
The way these transcription models work, with audio -> FFT -> MEL -> Conv2D -> self attention fed to what is, basically, a decoder-only LLM (autoregressive over the emitted tokens AND the audio embeddings generated by the encoder), is one of the MOST fascinating things in AI, IMHO.
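The front of that pipeline (audio -> FFT -> MEL) can be sketched in a few lines of numpy. This is a minimal, illustrative version under assumed parameters (16 kHz audio, 400-sample windows, 160-sample hop, 80 mel bands, the common choices for speech encoders), not the actual code of any specific model:

```python
import numpy as np

def mel_filterbank(n_mels, n_fft, sr):
    """Triangular filters mapping linear FFT bins to mel-spaced bands."""
    hz_to_mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel_to_hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        for k in range(l, c):           # rising edge of the triangle
            fb[i, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):           # falling edge
            fb[i, k] = (r - k) / max(r - c, 1)
    return fb

def log_mel_spectrogram(wave, sr=16000, n_fft=400, hop=160, n_mels=80):
    """Frame + window the signal, take magnitude FFT, project onto mel bands."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(wave) - n_fft) // hop
    frames = np.stack([wave[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    mag = np.abs(np.fft.rfft(frames, n=n_fft, axis=1))   # (frames, n_fft//2+1)
    mel = mag @ mel_filterbank(n_mels, n_fft, sr).T      # (frames, n_mels)
    return np.log(np.maximum(mel, 1e-10))

# One second of a 440 Hz tone at 16 kHz:
sr = 16000
t = np.arange(sr) / sr
feats = log_mel_spectrogram(np.sin(2 * np.pi * 440 * t))
print(feats.shape)  # (98, 80): one ~10 ms frame per row
```

The resulting (frames x mel-bands) matrix is what the Conv2D + self-attention encoder consumes; the decoder-only LLM then attends to those embeddings while emitting text tokens autoregressively.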