8/8 Anyways, it's a nice paper and a refreshing read in the era of LLMs: arxiv.org/abs/2510.04871
From the Hierarchical Reasoning Model (HRM) to a new Tiny Recursive Model (TRM).
A few months ago, the HRM made big waves in the AI research community by showing strong performance on the ARC challenge despite its small 27M-parameter size. (That's about 22x smaller than the smallest Qwen model, Qwen3 0.6B.)
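The "about 22x" above is just the ratio of the rounded parameter counts; a quick sanity check, assuming ~27M for HRM and ~600M for Qwen3 0.6B:

```python
# Rounded parameter counts as quoted in the post
hrm_params = 27e6            # Hierarchical Reasoning Model: ~27M
qwen3_small_params = 0.6e9   # Qwen3 0.6B: ~600M

ratio = qwen3_small_params / hrm_params
print(f"Qwen3 0.6B is ~{ratio:.0f}x larger than HRM")  # ~22x
```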
Updated & turned my Big LLM Architecture Comparison article into a video lecture.
The 11 LLM archs covered in this video:
1. DeepSeek V3/R1
2. OLMo 2
3. Gemma 3
4. Mistral Small 3.1
5. Llama 4
6. Qwen3
7. SmolLM3
8. Kimi K2
9. GPT-OSS
10. Grok 2.5
11. GLM-4.5/4.6
www.youtube.com/watch?v=rNlU...
A short talk on the main architecture components of LLMs this year + a look beyond the transformer architecture: www.youtube.com/watch?v=lONy...
I just saw the Kimi K2 Thinking release!
Kimi K2 is based on the DeepSeek V3/R1 architecture, and here's a side-by-side comparison.
In short, Kimi K2 is a slightly scaled DeepSeek V3/R1. And the gains are in the data and training recipes. Hopefully, we will see some details on those soon, too.
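The "slightly scaled" claim can be made concrete with a few config knobs. These figures are my recollection of the publicly released model configs, not numbers from the post, so treat them as illustrative assumptions:

```python
# Approximate architecture figures from the public configs
# (from memory; illustrative, not authoritative).
deepseek_v3 = {
    "total_params": "671B",
    "active_params": "37B",
    "routed_experts": 256,
    "active_experts_per_token": 8,
    "attention_heads": 128,
}
kimi_k2 = {
    "total_params": "1T",
    "active_params": "32B",
    "routed_experts": 384,        # more experts...
    "active_experts_per_token": 8,
    "attention_heads": 64,        # ...but fewer attention heads
}

# Same DeepSeek-style MoE skeleton; mostly the expert count is scaled up.
changed = {k for k in deepseek_v3 if deepseek_v3[k] != kimi_k2[k]}
print(sorted(changed))
```

The point is that the sparsity pattern (8 active experts per token) is unchanged; the scaling is in total expert count and a trimmed attention budget.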
My new field guide to alternatives to standard LLMs:
Gated DeltaNet hybrids (Qwen3-Next, Kimi Linear), text diffusion, code world models, and small reasoning transformers.
🔗 magazine.sebastianraschka.com/p/beyond-sta...
Poetry as a vector for breaking AI constraints—verse prompts jailbreak LLMs up to 18x more effectively than prose (90%+ in some models). A fascinating case study in how aesthetic structures interact with computational control systems.
arxiv.org/abs/2511.15304
I think Claude Code has achieved AGI
The Latent Role of Open Models in the AI Economy - papers.ssrn.com/sol3/papers.... move to open would "generate an estimated $24.8 billion in additional consumer savings across 2025" (v @azeem.bsky.social) #openness
Sycophantic AI agrees 50% more than humans even with harmful conversations: arxiv.org/abs/2510.01395
Without embracing Weber, purely observationally from fairly immersive travel, I also notice the valorisation of clock time and regimentation is higher in protestant than Catholic cultures as a whole. 9-5 exists in Switzerland and Spain, but it is weighted very differently. Certainly a nuanced theme!
There is a huge difference, I think, between a 9-5 dictated by an individual or organisation and one dictated by the seasons, the markets, the children, the organic necessities of life. In the UK, for instance, waking up at 11am is intrinsically if vaguely immoral, a bit shameful. In Greenland it's fine.
When I think about it, it's not primarily about the keeping of time but about an orientation and valorisation of time. Maasai women in particular had quite a regimented day, and clock time was occasionally used and certainly compatible. But it was event-oriented rather than command-oriented, more elastic.
Makes sense, I appreciate the reply. Anecdotally I find even now the rhythms and ethics of time are less regimented and more organic the further from industrialised society. I found that in my time among indigenous communities, and Global South, consistently in 4 continents and a dozen countries.
AI for scientific discovery is a social problem, and just adding more compute per Sam Altman may not actually cure cancer.
arxiv.org/html/2509.06...
$380 million - the amount the Senate wants to allocate just for AI and automation for the department of defense
Do you think they'll earn that back, or is that just a sunk cost for us to pay the government to track our data?
AI and automation could eliminate nearly 100 million jobs in the U.S. in the next decade, a report set to be released by the Senate on Monday finds. #climatechange #climatecrisis #globalwarming
www.axios.com/2025/10/06/a...
I also think education will be the frontier where this happens first, and that in the (quite) long term we will develop similar post-pathologising approaches to diversity in cognitive, emotional and life presentations and experiences, and a lot of suffering will vanish + a lot of capacity released.
This is great. For about 20 years I've been predicting that we would first multiply diagnostic categories and diagnosed people until teachers were confronted with a diversity of challenges vast and frequent enough that we would arrive at universal design, and diagnoses would become less relevant.
I'm with @ethanlandes.bsky.social's read. Capitalism existed before but academic publishing was not remotely as monetisable as now. Perverse incentives of publish or perish were lower and narrower. We didn't have industrial scale citation farms, or a comparable pay-to-publish addressable market.
I enjoyed this nuanced, humble and thought-provoking paper on what I would call (surely not the first instance of) AI displacement anxiety. This focuses on philosophy but broadens the frame toward the end. I think I might start a list on displacement anxiety; pieces like this feel like canaries in the coal mine.
Challenge Poverty Week
Day 1: Social Security - Building a Foundation for a Better Life
As part of Challenge Poverty Week, we are drawing attention to the vital financial support available to families through Social Security Scotland.
Visit the website for more info:
www.socialsecurity.gov.scot
Here are some examples:
github.com/Leo-nova/Pro...
www.reddit.com/r/MyBoyfrien...
They are quite naive and would be more effective with a code scaffold and RAG-style enhancements, but it would take maybe 50-100 times the work to get similar, and much less adaptive, results via ChatScript.
I should add that for more sophisticated personas with memories and more, there are entire and rather odd communities dedicated to creating them with LLMs, convincingly enough to consider them boyfriends or girlfriends and, in more extreme cases, marry them. They exchange tips and tricks on this on Reddit and elsewhere.
Yes. It might take a few tries to hone, but "reply as X, with such-and-such attitudes, age, etc." would get you pretty far. Almost certainly further than trying to replicate such a personality through the fun but extremely laborious and difficult rule and conversation crafting of ChatScript.
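As a minimal sketch of the prompting approach described above, in the chat-message format most LLM APIs use (the persona details and helper name here are made up for illustration, not from the thread), the whole "character" reduces to a single system message:

```python
# Minimal persona prompt in the common system/user chat-message format.
# The persona itself is invented for illustration.
def build_persona_messages(user_text: str) -> list[dict]:
    system_prompt = (
        "Reply as 'X': a dry-witted, 60-year-old retired sea captain. "
        "Keep answers short, use nautical metaphors, never break character."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_text},
    ]

messages = build_persona_messages("What's the weather like?")
print(messages[0]["role"])  # system
```

Compare this with ChatScript, where the same persona would be hand-built from topic files full of pattern-matching rules, one per anticipated conversational turn.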
I worked with ChatScript in depth from 2018, briefly collaborating with Bruce Wilcox, who created ChatScript for his Loebner Prize-winning bots. Back then, ML conversational agents were too crude compared to ChatScript ones. Today it would take minimal prompting for an AI chatbot to outperform any Loebner bot.