Nimo πŸ³οΈβ€πŸŒˆ's Avatar

Nimo πŸ³οΈβ€πŸŒˆ

@nimobeeren.com

he/him Building cool things with or without AI (mostly with) β€” πŸ§ͺπŸŽΉπŸ’»πŸŒΈπŸ“·πŸ”§πŸ“” 🌐 nimobeeren.com πŸ“ Eindhoven

1,645
Followers
315
Following
671
Posts
23.04.2023
Joined

Latest posts by Nimo πŸ³οΈβ€πŸŒˆ @nimobeeren.com

From the iphone community on Reddit: My hacked iPhone running iPadOS! And running a Mac-like experience on the external monitor! It can multitask + run iPad apps. Apple doesn't allow this as it would ...

I’ve seen posts of people installing iPadOS on iPhones, and it works pretty much as you describe: you just plug it into a monitor and have a desktop computer.

Example:

www.reddit.com/r/iphone/s/G...

10.03.2026 21:40 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

I think the model labs are doing this to some degree. Claude definitely tries to maintain a high information density in convos. It leans heavily on terms and comes up with new terms to refer to things. Maybe less from a token efficiency perspective, but useful for readability I think.

10.03.2026 17:06 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

I love the mention of subtly conflicting values! Having all the values aligned feels too easy and unlikely to be inspiring.

09.03.2026 19:03 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

What is it?

08.03.2026 16:48 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

The only failure mode I’ve experienced is having the wrong audio input device selected πŸ˜…

08.03.2026 08:20 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

I use Apple voice memos on my phone which hasn’t failed me yet! It also has transcription built in which contains some mistakes but LLMs can often infer from context.

On my laptop I use Superwhisper which also saves transcription and raw audio so you can always transcribe it again if needed.

08.03.2026 08:20 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Hahaha, you mean the pink overlays?

05.03.2026 22:04 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Back on that virtual try-on shit ✨

05.03.2026 20:31 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

This looks cool! I’ll be there 🌊

02.03.2026 11:39 πŸ‘ 3 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Interesting to see that Rust is still harder for LLMs than other languages, and this matters on hard tasks! It’s not automatically the best choice when you need something fast and reliable.

25.02.2026 07:42 πŸ‘ 6 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Yes, the performance gains are mostly Vite’s accomplishment. But even if this was a regular refactor (which I’m pretty sure it isn’t), that’s a lot of API to cover.

24.02.2026 21:53 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

I don’t have a deep understanding of what Next.js does, but the message of β€œwe reimplemented the entire API surface of this heavily iterated 10 y/o project and made it 4x faster in one week” is just absolutely wild

24.02.2026 21:53 πŸ‘ 5 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Wow, I didn’t know this exists! Can’t keep up with this stuff πŸ˜…

22.02.2026 22:04 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

I love that this exists!

22.02.2026 21:50 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

From subjective experience, Sonnet now feels just as slow as Opus. Maybe not in raw tokens, but in outcomes.

I’d love to see model labs start measuring this more.

22.02.2026 21:45 πŸ‘ 10 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0

It’s interesting to see the variety of ways you interpret this, but you’ve responded three times to this post now, FYI!

22.02.2026 17:51 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

does this ever help you?

22.02.2026 16:52 πŸ‘ 1 πŸ” 0 πŸ’¬ 3 πŸ“Œ 0

I'm afraid I'm going to be reposting this every day until the far side of the singularity.

Work on the things you want to see more of!

Work on flourishing!

Work on things that are freeing!

21.02.2026 00:39 πŸ‘ 27 πŸ” 8 πŸ’¬ 0 πŸ“Œ 1

Maybe we should rename β€œman pages” to β€œhu pages” then 🀭

20.02.2026 14:08 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Wdym? Humans do too, right?

20.02.2026 13:34 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

And that documentation can and often should be different depending on target audience: human or AI.

Humans need a story and intrigue, AI will grind through dry docs all day. But if it works for humans, it will almost certainly work for AIs.

20.02.2026 11:48 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

This is so true and I'm happy it turned out this way. Humans + AI both benefit.

The reason why it's MORE important now is that LLMs forget everything while humans learn from trial and error. We need to compress all that learning into documentation.

20.02.2026 11:48 πŸ‘ 22 πŸ” 0 πŸ’¬ 2 πŸ“Œ 0
GDPval-AA Leaderboard | Artificial Analysis Compare AI model performance on GDPval-AA Leaderboard. GDPval-AA is Artificial Analysis' evaluation framework for OpenAI's GDPval dataset. It tests AI models on real-world tasks across 44 occupations ...

Source: artificialanalysis.ai/evaluations/...

18.02.2026 07:21 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Looks like Sonnet 4.6 is much less token efficient, which brings it close to Opus cost level. A bit disappointing!

18.02.2026 07:18 πŸ‘ 13 πŸ” 0 πŸ’¬ 1 πŸ“Œ 1

This is so awesome. The fact that I can turn my expert coding agent into a really good teacher by dropping some files in a folder is just wild.

17.02.2026 22:24 πŸ‘ 11 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Introducing Sonnet 4.6 Claude Sonnet 4.6 is a full upgrade of the model’s skills across coding, computer use, long-reasoning, agent planning, knowledge work, and design.

www.anthropic.com/news/claude-....

17.02.2026 21:47 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Sonnet 4.6 offers strong performance at any thinking effort, even with extended thinking off. As part of your migration from Sonnet 4.5, we recommend exploring across the spectrum to find the ideal balance of speed and reliable performance, depending on what you’re building.

Ooh, I like this direction. I’ve been experimenting with keeping thinking disabled for straightforward tasks, mainly for speed.

17.02.2026 21:46 πŸ‘ 4 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

I really wish model labs would start posting benchmarks on speed and token efficiency. Can’t we just measure the time/tokens to complete existing benchmarks?

Especially interested to see if smarter models can be faster on certain tasks by needing fewer tool calls.

17.02.2026 21:38 πŸ‘ 11 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

How can I see the lexicon for this? I’d like to use it to learn about the app’s features

17.02.2026 15:48 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Let’s make some virtuous software today

17.02.2026 07:43 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0