Very cool work!
PaSh finally got a PyPI package (pypi.org/project/pash/)!
Install with `pip install pash` and start parallelizing your shell scripts today!
If you (or anyone you know) would like to do your PhD in a vibrant city with great weather, next to the sea and the mountains, make sure to apply to UCLA and mark my name as a potential advisor :)
I am hiring 1-2 PhD students this cycle at UCLA. My group works on problems at the intersection of systems and PL, with some recent topics of interest being the correctness of black-box side-effectful programs, Python script optimization, and systems for cloud computing!
Stephen (@stephenmell.bsky.social) is also looking for postdoc opportunities, so reach out to him if you work at the intersection of PL, ML, and systems and are looking to hire! He is amazing!
You can find a lot more information on our work, proofs, details, additional evaluation, and more in our paper that Stephen (@stephenmell.bsky.social) presented at OOPSLA a few weeks ago. You can also read it here: dl.acm.org/doi/pdf/10.1...
We establish correctness by proving that the semantics is confluent: even if different execution steps happen under our opportunistic evaluation, they all lead to the same final program state.
But why is that correct, you may think to yourself? Thanks for asking, I wouldn't have thought to tell you!
We have proved that our approach is ✨ correct ✨, meaning that the behavior of the program does not change when executed with opportunistic evaluation.
We then use everyone's favorite technique (Church ⛪ encodings) to desugar the control flow of a high-level language down to our calculus, enabling streaming and parallelization for programs that involve LLMs and other ML models.
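To make the Church-encoding idea concrete, here is a minimal Python sketch of desugaring an `if` into function application; the names `TRUE`, `FALSE`, and `church_if` are illustrative only and are not the paper's actual calculus.

```python
# Church booleans: a boolean is a function that picks one of two branches.
TRUE = lambda t: lambda f: t
FALSE = lambda t: lambda f: f

def church_if(cond, then_thunk, else_thunk):
    # Desugared `if`: the condition itself selects which thunk to force,
    # so control flow becomes ordinary function application.
    return cond(then_thunk)(else_thunk)()

print(church_if(TRUE, lambda: "then branch", lambda: "else branch"))
```

Once control flow is plain application, the same evaluation machinery that handles external calls can schedule branches too.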
Our key insight is an evaluation strategy that is neither totally strict nor totally lazy, and that can automatically exploit parallelization and streaming. We build this evaluation strategy on top of a core calculus with first-class support for external calls (like ones to an LLM).
To address this, we built a novel opportunistic evaluation system that automatically parallelizes independent calls and streams their results! Our system reduces the execution time of this script from 51s to 9s, and the first output arrives on the terminal after only 1.3s!
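For intuition, here is a rough Python sketch of what opportunistic evaluation buys: independent calls run concurrently instead of back-to-back, and results stream out as each one completes. `llm_call` is a hypothetical stand-in for a real model call; this is a sketch, not our actual system.

```python
import asyncio

async def llm_call(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call.
    await asyncio.sleep(0.01)
    return f"summary({prompt})"

async def main():
    docs = ["a", "b", "c"]
    # Independent calls are launched concurrently rather than sequentially...
    tasks = [asyncio.create_task(llm_call(d)) for d in docs]
    # ...and each result streams to the consumer as soon as it is ready.
    for finished in asyncio.as_completed(tasks):
        print(await finished)

asyncio.run(main())
```

With N independent calls of similar latency, wall-clock time drops from roughly N times the call latency to about one call latency, which is the effect behind the 51s-to-9s number above.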
In the example script in the image above, all calls to the LLM are completely sequentialized and there is no streaming. This script takes about 51 seconds to execute, and it took more than 7 seconds for output to start appearing in the terminal.
Are you making calls to LLMs and ML models in your code? Is your code awfully sequential™? Is your code unable to stream data?
Look no further! Stephen (@stephenmell.bsky.social), Osbert, Steve, and I came up with a solution for you! Read further if you suffer from the above issues ⬇️
The worldโs largest federated conference in logic & automated reasoning, FLoC 2026 (www.floc26.org), will be in Lisbon!
Thousands will attend, an excellent chance for companies to showcase their presence. Interested in sponsoring? Details: www.floc26.org/sponsors/FLo...
There is indeed some alignment between these two threads, I think! Happy to chat whenever :)
In this paper we don't yet build such guardrails but primarily talk about what these guardrails should/could look like! We also focus on guardrails at the systems and PL level, instead of improving the agents. The paper is short so please read it and share your thoughts with us :)
People use LLM agents to interact with (access/modify) their system and external services. We (@bimalraj.bsky.social, @samkumar.bsky.social, and Sai) think this can be a bit dangerous, so we wrote a paper (arxiv.org/pdf/2506.13028) on system guardrails for such agent-driven interactions!
An article just came out covering our latest work on semantics-driven static analysis for shell script correctness (www.infoworld.com/article/3977...). There will also be a presentation at HotOS on Thursday (sigops.org/s/conference...) for anyone who is there!
PLMW@PLDI'25 is now accepting applications: pldi25.sigplan.org/home/PLMW-pl...
Deadline: April 10, 2025
PLMW is an excellent place to learn about exciting PL research, from the ground up, and to find your PL friends!
Please apply!
I am looking to recruit a PhD student (fully funded at UK home tuition rate) to work on automated testing and verification of machine learning compilers and runtimes! Deadline: 30th April. Please spread the word! Details here: www.doc.ic.ac.uk/~afd/PhD-Adv...
The data includes statistical information about the shapes and sizes of real-world Durable Functions applications. We hope that it will provide new insights to the community and indicate directions for future work!
An extended version of our Netherite paper with folks from Microsoft Research was just published to the VLDB Journal (link.springer.com/article/10.1...). The key addition is Section 8, which contains real usage data for Durable Functions. See more below โฌ๏ธ
Text from the student's paper review: "Could have named it MuStash instead of MuCache"
A student in my paper reading class at UCLA identified a serious missed opportunity in our MuCache paper:
Hey all! Who's at POPL? If you're here and looking to do a PhD at Princeton, let's chat!
Consider following the โจnewโจ and official PLDI account on Bluesky @sigplan-pldi.bsky.social
SoCal Programming Languages and Systems is back and will be @ucsd_cse in February!
Submit your abstracts!
socalpls.github.io
@ranjitjhala.bsky.social @manu.sridharan.net @cristalopes.bsky.social
Reposts appreciated!
This is joint work with @akisg.bsky.social , Paul Loh, Shuyue Wang, Justin Qiu, Max Demoulin, and Benjamin Lee.
Stay tuned for a preprint!
Finally, Akis ( @akisg.bsky.social ) is applying for PhDs this cycle so snatch him while you have a chance!!
The name means city gate, representing the controller rejecting requests (en.wikipedia.org/wiki/Raj%C5%...). There is also a "debatable" connection of our system with a Kurosawa movie from the 50s (en.wikipedia.org/wiki/Rashomo...), but finding it is left as an exercise for the reader.
Plot showing the tail latency and goodput of Rajomon against several state of the art systems.
The expressiveness of our market-based scheme can provide significant benefits for services that support a variety of input requests and deeper service graphs: Rajomon reduces tail latency by 78% and increases goodput by 45% over SOTA under high load!