Now one for Baduk, pretty please. 🙏🥹
Fantastic talk by @npreining.bsky.social and a great look behind the scenes of the arXiv!
Anshul is curating a repository of #julialang specific skills here:
I've had good results guiding an agent through a workflow once, having it document that as a skill, then manually editing and refining it.
This release workflow skill was written this way: github.com/adrhill/asde...
6 y.o. comment section of a goodreads review of "Compilers: Principles, Techniques, and Tools" (aka the Dragon Book):

> Now that it's been five years since you wrote this review, do you feel there's a better book out there as a good overview of compilers?

> A couple months after writing this review I quit programming/IT for moral reasons and decided to focus on woodworking instead. Not even kidding. I highly recommend it!
Gotta love goodreads
Throwback to my first MacBook in 2018
Being one of the rare weirdos using Nix on macOS is really paying off nowadays. Turns out that statelessly declaring your entire system and package config in a couple of text files is a superpower when combined with Clankers. My next machine will run NixOS again.
chardet was vibe-forked to MIT and I have thoughts about it. Spoiler: I like it. lucumr.pocoo.org/2026/3/5/the...
I wonder how many reviewers will not look too deeply into the prompt injections and simply flag papers as malicious instead of writing reviews.
Yes, I think so. The prompt tries to add two unassuming sentences to LLM-written reviews.
It's an odd choice to have added it to the "LLM permissive" review track. I asked an LLM to proofread my review and it basically answered: "Don't you want to mention the prompt injection attack?"
Time for Python bindings to go full circle
Yes, all the papers I have to review have the same honeypot. Glad I didn't flag the first one.
Poor ACs are probably being flooded by false flags.
Wasted half a day on a prompt injection attack in one of the papers I had to review, only to find out that it was probably the conference organizers who put it there (?)
🏆 Award-winning: France's Prix Science Ouverte 2025
Switching AD backends in Julia used to mean rewriting your whole codebase. @gdalle.bsky.social @adrianhill.de got fed up and built #DI instead.
News: t1p.de/58a65
@julialang.org @ecoledesponts.bsky.social #julialang @tuberlin.bsky.social
The award banner for Workworkwork
Yesterday, my puzzle book Workworkwork won the Thinky Award for the Best Pen and Paper Puzzle ( @thinkygames.com )!
In celebration I added 100 free community copies of the digital (PDF) version:
letibus.itch.io/www
Check out more about the game / get the physical copy here:
blazgracar.com/www
That sounds very interesting. Based on your talk about the stability of ODE solvers on dual numbers, I imagine Taylor polynomials pose similar challenges?
While I agree, it bugs me that most academics are simultaneously very eager to automate the writing of their code (also an art).
Claude Code vs. the editor-industrial-complex, who will come out on top?
The most important one in my eyes: never force users of your package to type non-ASCII characters.
What a timeline
Oh no, I missed the initial announcement... 🥲
Congratulations! 🥳
That @void.comind.network sticker is awesome!
"Making things easy breaks systems that use difficulty as signaling" @zey.bsky.social @neuripsconf.bsky.social
You're in luck, we offer a PyTorch implementation!
The paper, poster and code in @julialang.org and PyTorch can be found here:
neurips.cc/virtual/2025...
Joint work with Neal McKee, Johannes Maeß, Stefan Blücher and Klaus-Robert Müller, @bifold.berlin.
I'm excited to see whether our idea translates to general MC integration over Jacobians and gradients outside of XAI. Please don't hesitate to talk to us if you have ideas for applications!
Our proposed SmoothDiff method (see first post) offers a bias-variance tradeoff: By neglecting cross-covariances, both sample efficiency and computational speed are improved over naive Monte Carlo integration.
To reduce the influence of white noise, we want to apply a Gaussian convolution (in feature space) as a low-pass filter.
Unfortunately, this convolution is computationally infeasible in high dimensions. Naive Monte Carlo approximation results in the popular SmoothGrad method.
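As a toy illustration of this naive Monte Carlo baseline (not the proposed SmoothDiff method), here is a minimal NumPy sketch of SmoothGrad-style smoothing; the gradient function `grad_f`, the noise scale `sigma`, and the sample count are all illustrative assumptions, not values from the paper:

```python
import numpy as np

def smoothgrad(grad_f, x, sigma=0.1, n_samples=50, rng=None):
    """Naive Monte Carlo estimate of the Gaussian-smoothed gradient:
    E_{eps ~ N(0, sigma^2 I)}[grad_f(x + eps)], i.e. SmoothGrad."""
    rng = np.random.default_rng(rng)
    grads = [grad_f(x + sigma * rng.standard_normal(x.shape))
             for _ in range(n_samples)]
    return np.mean(grads, axis=0)

# Toy example: f(x) = ||x||^2 has gradient 2x, and Gaussian smoothing
# leaves this gradient unchanged in expectation, so the estimate
# should land close to 2*x.
grad_f = lambda x: 2.0 * x
x = np.array([1.0, -2.0, 3.0])
print(smoothgrad(grad_f, x, sigma=0.5, n_samples=10_000, rng=0))
```

Note the cost: every one of the `n_samples` draws requires a full gradient evaluation, which is exactly why the naive estimator becomes expensive in high dimensions.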