"IdK, ClAuDe SeEmS ALiGneD to mE!"
(intended in a spirit of fun; my actual take is like, "I think we are missing the concepts we need in order to talk usefully about this")
finally again shows a net decrease in body fat compared with 2020! I'm still 25lbs heavier than I was then, but 20 of those are lean mass! (2/2)
I got my first DEXA scan in January of 2020. Between then and March 2020, I lost 2.5 body-fat percentage points, but then I moved to the middle of nowhere with MIRI for COVID, and gained 12 percentage points between then and February 2023, which was my next scan date. My DEXA scan from today (1/2)
I realize that I had set the weight 20% too high. I felt dumb but also it's cool to get a little confirmation that I'm not just hitting "failure" via confirmation bias! (2/2)
Has Scott Alexander already done a kabbalah-style bit about how MAGA is a multilingual recursive acronym? I.e. it's obviously cognate with PIE *meg- (meaning "great"; root of magnificence, magnitude, maharishi, major) and also the G stands for "great"
TIL: the thrift store near my house sells paperbacks for $1 and hard covers for $2. And if you don't mind the fact that they've organized the books by spine color, you can find excellent books. In fact I found one paperback that I ordered on Amazon yesterday for 10x the price.
is it just me or does the current generation of big models produce more typos than the previous generation? Here's one I noticed today from Opus 4.5 (non-drowsy -> non-drowning):
I've noticed at least two from GPT 5 and 5.1 as well, though I didn't think to screenshot them.
Today in "asking models about their preferences". I find this one a bit uncomfortable, tbh.
because they're ~the only literally 100% cellulose-based underwear you can buy (elastic bands are mostly made of spandex/elastane) and I was curious. I think I will buy at least a few more pairs of these, and maybe start wearing them by default. (3/3)
prototyping it (before even accounting for time spent). (2/2)
"social health", as a concept kind of like "mental health", seems pretty interesting to me
They minted the last penny today! Thus ends the era of the US government handing out both required solid pieces of a voltaic cell (zinc core, copper plating) in one (not very) convenient package.
onions
* Chicken
* Eggs
* Pork / ham as a treat
* Blackberries and raspberries
* Rhubarb
* Lettuce
* Homemade bread
* Corn
* Peas (2/2)
:o 5M people have watched this (pretty good, imo!) video that includes discussion of some work we did at Palisade youtube.com/watch?v=f9HwA5IR-s…
Sad fact: the only way to have a proper medianworld is if you're the only occupant.
I mean, medianworlds are self-contradictory as a spec (unless single-occupant): e.g. you would also have median-intensity gender-expression.
Like, in my medianworld, am I at the geometric median elev/lat/long? Or median cartesian location (so probably in a mine somewhere)? Can't really be both.
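A toy illustration of the "can't be both" point (hypothetical residents as 2-D points; Weiszfeld's iteration for the geometric median): the coordinate-wise median and the geometric median of the same population generally land in different places.

```python
import math

# Three hypothetical residents as (x, y) locations.
pts = [(0.0, 0.0), (0.0, 10.0), (10.0, 0.0)]

def marginal_median(points):
    # Median of each coordinate taken separately.
    xs = sorted(p[0] for p in points)
    ys = sorted(p[1] for p in points)
    m = len(points) // 2
    return (xs[m], ys[m])

def geometric_median(points, iters=200):
    # Weiszfeld's iteration: minimizes the summed distance to the points.
    x = sum(p[0] for p in points) / len(points)
    y = sum(p[1] for p in points) / len(points)
    for _ in range(iters):
        nx = ny = den = 0.0
        for px, py in points:
            d = math.hypot(x - px, y - py)
            if d < 1e-12:  # landed exactly on a data point
                return (px, py)
            nx += px / d
            ny += py / d
            den += 1.0 / d
        x, y = nx / den, ny / den
    return (x, y)

print(marginal_median(pts))   # the corner (0.0, 0.0)
print(geometric_median(pts))  # an interior point: a different spot entirely
```

Same three residents, two "median locations" in different places; pile on elevation, gender expression, etc. and the constraints only get worse.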
If a lion could talk, the cross-entropy would be really high
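The non-joke version, as a toy sketch (both distributions made up): cross-entropy is how many bits a model q burns per sample from the true distribution p, so a human-language model scoring lion output would burn a lot.

```python
import math

def cross_entropy(p, q):
    # H(p, q) = -sum_i p_i * log2(q_i), in bits; terms with p_i = 0 drop out.
    return -sum(pi * math.log2(qi) for pi, qi in zip(p, q) if pi > 0)

# Made-up 4-token distributions: what the lion says vs. what a
# human-language model predicts it will say.
lion = [0.7, 0.2, 0.1, 0.0]
human_lm = [0.01, 0.04, 0.15, 0.8]

print(cross_entropy(lion, lion))      # the lion's own entropy, ~1.16 bits
print(cross_entropy(lion, human_lm))  # several bits higher under the mismatched model
```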
I'm in Delaware on a train. Unsurprisingly, no humans visible from the train windows; only distant cars that I assume are autonomous.
control", but I'd bet at 3:1 that they still use my session geolocation history for advertising, and will happily hand it over to law enforcement if asked. So it feels kind of like everybody has access to it except me. (4/4)
background.
Honestly this is probably good practice for learning from humans as well. (3/3)
LLMs are a huge boost for learning about [fields that are well understood and have lots written about them already], at least if you're me.
Previous attempts to learn category theory went much slower per hour spent than the current one, since insofar as I had tutoring, it was built out of (1/3)
<Anthropic, to the r/anthropic subreddit>: You're absolutely right to point that out! I see the problem now. It's a subtle bug in the token selection algorithm.
auto-flagged because of the pastebin link, or what. It's not like I was getting a lot of value from my reddit account anyway; I was only commenting once a month or so, but it feels really bad. (2/2)
Wow, Reddit seems to have suspended my 15-year-old account today, after I posted a negative review in a thread about a merchant who I think probably defrauded me. The review included a link to a pastebin of an email chain; I wonder if the merchant flagged that review, or if it was (1/2)
one for coffee and one for water, such that the liquids were distributed approximately the same as if each was just a less-dense liquid completely filling the cup. (2/2)
Something kind of cool I had somehow not fully noticed: You can split space into multiple intertwined and infinitely repeating regions (each fully connected to itself and not connected to the others). e.g. mathcurve.com/surfaces.gb/schw…
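A minimal sketch of the fact (assuming the Schwarz P level set cos x + cos y + cos z = 0 as the concrete surface; the truncated link may show a different one): classify grid points in one periodic cell by the sign of the level-set function. The two sign classes are the two intertwined labyrinths, each filling about half of space.

```python
import math

def schwarz_p(x, y, z):
    # Level-set function whose zero set approximates the triply periodic
    # Schwarz P surface; used here as a standard space-splitting example.
    return math.cos(x) + math.cos(y) + math.cos(z)

# Sample one 2*pi-periodic cell on a coarse grid and classify by sign.
n = 32
step = 2 * math.pi / n
pos = neg = 0
for i in range(n):
    for j in range(n):
        for k in range(n):
            f = schwarz_p(i * step, j * step, k * step)
            if f > 1e-9:
                pos += 1
            elif f < -1e-9:
                neg += 1

total = n ** 3
# The f > 0 and f < 0 regions are congruent (f flips sign under a
# half-period shift), so each fills roughly half the cell.
print(pos / total, neg / total)
```

(Checking that each region is actually connected takes a flood fill over the grid; this only checks the volume split.)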
So you could, e.g., have a sippy cup with 2 spouts, (1/2)
just the structure". Feeling very glad I bought the printer instead of wasting a ton of effort and time waiting for third party services. Seems likely to pay for itself faster than I expected. (4/4)
Looks like the way you'd do it on purpose is with transform: scaleY(-a); in CSS (given some value for a); maybe yet another example of AI labs' propensity to flip sign bits
the resulting sculpture, or to determine the ideal placement and supports for the keys and other stuff, but it feels like progress. (2/2)
definitely worth the trade-off for me vs all other shells I've tried (I've used bash, zsh, and fish for a year or more each, and tried others like nushell and elvish for at least a day); I'd recommend it if you often wish your shell had sane data types or powerful math baked in. (7/7)