Another attempt, results quite similar to yours: gemini.google.com/share/df8d29...
Possibly related: I found that LLMs are surprisingly bad at tracking who knows what at which point.
I agree, this is an easy problem to run into. But so far, I've also found it easy to use AI to dig my way out again. Prompts like "How do subsystems A and B interact?" and "What happens if [some special case] occurs?", with follow-ups, are really helpful for getting back on track.
I'm mostly using Claude Code, so that Claude can directly edit my TeX files, store and run Python scripts, etc. For simpler questions, I sometimes use the web interface.
Very thoughtful and interesting! Thanks for posting this.
Am I using Claude Code correctly?
And the discrete circle, also known as the square, has circumference 2*(discrete pi)*radius.
Which LLM is this?
I used the "llm" command line tool in a loop, with some postprocessing. The underlying model is Claude Sonnet 4.5.
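The postprocessing isn't spelled out above; here is a minimal sketch of one plausible version, assuming the per-run outputs have been collected into a list of strings (the sample names are made up for illustration):

```python
from collections import Counter

def tally_responses(responses):
    """Count how often each (stripped) response appears, most common first."""
    return Counter(r.strip() for r in responses).most_common()

# Hypothetical outputs from repeated runs of the same prompt.
runs = ["Marcus Webb", "Elena Vasquez", "Marcus Webb", "Chen Wei"]
print(tally_responses(runs))
# → [('Marcus Webb', 2), ('Elena Vasquez', 1), ('Chen Wei', 1)]
```

A tally like this makes it easy to see how often the model repeats itself across runs.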
Ah, ignore me, I found the other post ...
Do you have a specific "static model" in mind? I guess one would assume a random number of publications. But then, i.i.d. random number of citations? Or random ages and random citation counts depending on this age?
I find it surprising how AI always comes up with the same set of names. I've had Claude generate names for some pet project, and I also got "Marcus Webb", "Elena Vasquez", and "Chen" with various first names. It also seems to love "Okafor".
The alt text seems to claim that "each corner piece has three faces of a single colour". The top-right corner in the picture does not have this property! But of course already one such corner, like the yellow one, gets in the way of "solving" this like a standard Rubik's cube ...
I think the best strategy here would be to compare the changed version to the original *without* using AI. If it's TeX or Markdown files, maybe "diff" can help? This way you can still have AI do the work, but you can also be confident that it didn't add "the lecturer is stupid" to your notes.
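If plain "diff" output is too terse, Python's standard `difflib` gives the same kind of check; a minimal sketch, with made-up file names and contents:

```python
import difflib

def show_changes(original, edited):
    """Return a unified diff between two versions of a text, line by line."""
    return list(difflib.unified_diff(
        original.splitlines(), edited.splitlines(),
        fromfile="original.tex", tofile="edited.tex", lineterm=""))

before = "The lemma follows from Theorem 2.\nQED."
after = "The lemma follows immediately from Theorem 2.\nQED."
for line in show_changes(before, after):
    print(line)
```

Any line the AI silently added or changed shows up with a leading `+`, so unwanted edits can't sneak in unnoticed.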
First time that I have used AI to help with a "real" proof: www.seehuhn.de/blog/maths-w...
It was only today that I learned that "altogether" has only one "l"!
10%
Guess the country!
But what is a "PDS"?
Happy Easter, everybody!
Finally, I'm not sure whether Zdzislaw Brzezniak is a celebrity (he has in the past given talks in our probability seminar), but he has four "z" in his name. www.york.ac.uk/maths/people...
w.wiki/Gbyw
If you press the play button on the left, you get 30 movie actors with at least three "z" in their names, sorted by how many languages their Wikipedia page has been translated to. Say hello to Zazie Beetz, Janusz Zakrzeński, Jerzy Radziwilowicz, and their friends!
I believe the correct use of LLMs here would be to ask the LLM to write you a Wikidata query to find the people. Hallucination-proof, not affected by the counting problems that afflict LLMs, and it probably finds more names.
Also, for comparison, here is what Claude thinks about celebrities with "Z" in their names.
Not the answer to your question, but are you familiar with this excellent paper? arxiv.org/pdf/1003.6064 . To me, the highlight is footnote 3.
Dame Judi Dench walk (wade?) earlier today.
An easy visual puzzle (ǝsnO ɹǝʌᴉɹ ǝɥʇ uᴉ sƃuᴉןᴉɐɹ pǝƃɹǝɯqns ʎןןɐᴉʇɹɐԀ).
True. But how do we know that the human brain just isn't one of these, too? The only arguments I've heard for the brain being qualitatively different all sounded quite similar to "I just know it".