Effective prompt-coding is just Football Manager for white collar management
A perfect xG system would be "what's the highest probability moment each play achieved?". That may be at the time of the shot, or moments before the ball hits your butt.
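The "highest probability moment" idea above can be sketched in a few lines. This is purely illustrative: the per-frame probabilities are invented numbers standing in for whatever a real model would output, and the play names are made up.

```python
# Sketch of the "peak-probability" xG idea: score each play by the
# maximum scoring probability it ever reached, not only the
# probability at the moment of the shot.
# The frame probabilities below are invented, illustrative numbers.

plays = {
    "play_1": [0.02, 0.08, 0.31, 0.12],  # the chance peaked before the shot
    "play_2": [0.01, 0.04, 0.05, 0.44],  # the shot itself was the peak
}

def peak_xg(frame_probs):
    """Return the highest scoring probability the play ever reached."""
    return max(frame_probs)

peak = {name: peak_xg(probs) for name, probs in plays.items()}
# play_1 -> 0.31, play_2 -> 0.44
```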
Pronunciation note: you're gonna say "Greetoosh", but it's more like Cheetos with two O's. "Gree-toos".
Bar charts should always start at zero...
Recommended retail price: bartering system.
time to have a big glass of red wine and post a series of overly niche hot takes
It'll break often, and users will be confused. A lot of care goes into professional software for edge cases and usability. If you design for your own specific workflow, most edge cases don't need addressing, and you know how it works instinctively. It's a completely different mindset.
I'm not sure about that. I'm inclined towards another trend: that software sales will just decline. You'll get good software (Claude Code) with good code, and then mostly buy tokens to make bespoke software that isn't good but works for your needs. Maybe this is just semantics, but I think it matters.
Good code is good code. It's just that in many (most?) cases "good code" isn't really needed - especially when you're not selling the software.
You should follow @simonwillison.net. He makes data tools for journalists (Datasette) and is one of the all-around best thinkers on the use of LLMs. I think he has talks on YouTube on that exact topic.
BTW, it works as a trilogy, so I don't feel like I wasted time.
The protagonist. I was also initially addicted, but after the third book I was out. I'd say the second book is the best.
It gets worse...
a classic forum interface for bluesky
a classic forum interface for hacker news
a classic forum interface for the new york times
coding assistance lets you fast-track important projects that improve your life, such as reformatting every single site you read into the old vbulletin 3.x default template
Kind of funny that they didn't believe in Seguro for the longest time. Now I'm sure they'll party like they always knew he'd be elected 🤣
In the Portuguese presidential election, a very clear victory for the centre-left candidate Seguro against the radical-right Ventura.
Better than I thought
I'd actually like that. He's too out of touch for club level, but for NT he's probably fine. I feel like everyone's waiting for Ronaldo to retire to hire Mou.
Mourinho commentated on a match on TV during Euro 2004, and the midfield was his Porto lineup: Costinha, Maniche and Deco. He was so happy every time they made a tactical foul and avoided a yellow card. Felt like a proud dad.
The zoomed version without the referee doesn't make it look better... The problem is the guy who followed Amad to the corner of the box, leaving Dalot and Mbeumo completely alone.
Hoping for at least 65%. I like the fact that no one on the centre-right supported Ventura in the second round, but I get the feeling that MAGA money will flow to these fuckers soon, so it's important that voters also reject him.
Players are detected automatically using an object detection model, but the pitch overlay requires manual tweaking. Basically made this just to show how much space Mbeumo had on that goal...
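The "manual tweaking" half of the pipeline above can be sketched as a hand-tuned homography: pick four image points that correspond to known pitch coordinates, solve for the 3x3 transform, and project each detected player's feet onto the pitch. Everything here is an assumption for illustration — the pixel coordinates and box dimensions are invented, and the detector itself (some YOLO-style model, per the post) is not shown.

```python
import numpy as np

# Map image pixels to pitch metres via a homography fit to four
# manually tweaked point correspondences (the hand-tuning the post mentions).
# Player bounding boxes are assumed to come from a separate detection model.

def homography_from_points(src, dst):
    """Solve for the 3x3 homography H mapping 4 src points to 4 dst points."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)  # fix h33 = 1

def to_pitch(H, image_pt):
    """Project an image point (e.g. a detected player's feet) to pitch coords."""
    x, y = image_pt
    u, v, w = H @ np.array([x, y, 1.0])
    return (u / w, v / w)

# Hand-tweaked correspondences: penalty-box corners in pixels (invented)
# matched to their pitch positions in metres (box is 40.3 m x 16.5 m).
src = [(100, 400), (700, 400), (650, 550), (150, 550)]
dst = [(0.0, 0.0), (40.3, 0.0), (40.3, 16.5), (0.0, 16.5)]

H = homography_from_points(src, dst)
print(to_pitch(H, (400, 475)))  # a detected player's feet, mid-box
```

If the overlay looks off, you nudge the `src` pixels until the projected box lines up with the broadcast image — which is exactly the manual tweaking described.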
What a few hours of vibecoding can do these days - and I'm not using Claude Code, just Google's Antigravity with mostly its cheaper Flash model.
I'm old enough to have spent afternoons on Stack Overflow instead.
I can hear matplotlib's coordinate system crying...
Probably a bit late, but I'll try anyway:
Samu Aghehowa
FC Porto
Radar for this season vs last.
That's kind of scary...
Not his shadiest this window...
Of course, OpenAI knows how fair these assumptions are, and we don't. But I'm convinced that they know the LLM arms race is unsustainable (if they stop training new models, open source catches up), and that's why they *need* AGI to make the business work. In other words: It's a bubble.
This is really interesting. With reasonable assumptions, the margin on GPT-5 inference wasn't enough to offset GPT-5 R&D/training.