So let us show you just how *universal* #PET-MAD-1.5 can be. This is a movie of a parallel tempering simulation, with replicas from 300K to 3000K, of what we call a "Mendeleev cluster" - one atom each of every element from 1 to 102.
New #preprint is out! Investigating the many flavors of last-layer #UQ, Moritz and Matthias propose a practitioners' guide on "how to train a shallow ensemble". TL;DR: for good calibration use the NLL loss, include forces, and optimize the backbone, fine-tuning for speed! arxiv.org/html/2602.15...
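As a toy illustration of the NLL recipe (this is not code from the preprint; all numbers and the four-head setup are invented for the example), a shallow ensemble makes one prediction per last-layer head, and the Gaussian negative log-likelihood scores both the mean prediction and the spread:

```python
import math

# Invented predictions from 4 last-layer heads for a single target.
ensemble_preds = [0.9, 1.1, 1.0, 1.2]
target = 1.05

# Ensemble mean and (population) variance across heads:
# the variance doubles as the uncertainty estimate.
mean = sum(ensemble_preds) / len(ensemble_preds)
var = sum((p - mean) ** 2 for p in ensemble_preds) / len(ensemble_preds)

# Gaussian negative log-likelihood: penalizes both a wrong mean and a
# miscalibrated variance, which is why it helps calibration.
nll = 0.5 * (math.log(2 * math.pi * var) + (target - mean) ** 2 / var)
print(f"mean={mean:.3f} var={var:.4f} nll={nll:.3f}")
```

Minimizing this loss jointly tunes accuracy (through the mean term) and calibration (through the variance term), which is the intuition behind the preprint's recommendation.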
No Install. No Setup. Just Chemical Shift Predictions.
At shiftml.materialscloud.io we host the latest ShiftML3 on the web, where you can predict chemical shifts of organic crystals for free!
What a cool applet - running a universal MLIP directly from your web browser!
PET-MAD is here! It has been for a while for those who read the #arXiv, but now you get it beautifully typeset by @natcomms.nature.com Take home: unconstrained architecture + good train set choices give you a fast, accurate and stable universal MLIP that just works™ www.nature.com/articles/s41...
error plots for the PET-MAD-DOS model on different datasets
Anticipating Wei Bin's talk at #psik2025 (noon@roomA), a new #preprint using PET and the MAD dataset to train a universal #ml model for the density of states, giving band gaps for solids, clusters, surfaces and molecules with MAE ~200 meV. Go to the talk, or check out arxiv.org/html/2508.17...!
Read the associated publication: pubs.acs.org/doi/abs/10.1...
Many thanks to our collaborators at the Laboratory of Magnetic Resonance (LRM) at EPFL:
Jacob Brian Holmes, Ruben Rodriguez-Madrid, Florian Viscosi and Lyndon Emsley
Funding:
Swiss National Science Foundation, NCCR MARVEL, ERC Horizon
First, we generate a pool of candidate structures and then select the one whose calculated chemical shieldings best match the experimental measurements. Because shielding calculations quickly become the computational bottleneck, machine-learning (ML) models can substantially reduce the cost.
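A minimal sketch of that selection step (candidate names and all shift values are invented for illustration; a real workflow compares full sets of computed shieldings against the measured spectrum):

```python
import math

# Hypothetical experimental isotropic shifts (ppm) and, for each
# candidate crystal structure in the pool, the shifts computed for it.
experimental = [1.2, 4.8, 7.1]
candidates = {
    "candidate_A": [1.0, 5.0, 7.3],
    "candidate_B": [2.5, 3.9, 6.0],
    "candidate_C": [1.3, 4.7, 7.0],
}

def rmsd(a, b):
    """Root-mean-square deviation between two lists of shifts."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

# Select the structure whose computed shifts best match experiment.
best = min(candidates, key=lambda name: rmsd(candidates[name], experimental))
print(best)
```

In practice the pool comes from crystal structure prediction, and an ML shielding model replaces the expensive DFT step when scoring each candidate.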
Solid-state NMR spectroscopy is increasingly used for structure determination of organic solids. NMR-based structure determination differs from conventional diffraction methods:
We're introducing ShiftML3, a new ShiftML model for chemical shielding predictions in organic solids.
* ShiftML3 predicts full chemical shielding tensors
* DFT accuracy for 1H, 13C, and 15N
* ASE integration
* GPU support
Code: github.com/lab-cosmo/Sh...
Install from PyPI: pip install shiftml
metatensor logo
metatomic logo
#machinelearning for #compchem goodies from our team incoming! After years of work it's time to share. Go check arxiv.org/abs/2508.15704 and/or metatensor.org to learn about #metatensor and #metatomic. What they are, what they do, why you should use them for all of your atomistic ML projects.
DFT-accurate, with built-in uncertainty quantification, providing chemical shielding anisotropy - ShiftML3.0 has it all! Building on a successful @nccr-marvel.bsky.social-funded collaboration with LRM, it just landed on the arXiv arxiv.org/html/2506.13... and on PyPI pypi.org/project/shif...
A schematic of the functioning of a ML/QM hybrid framework
When you combine #machinelearning and #compchem, you need to start worrying about the QM details within your ML architecture. We use our indirect Hamiltonian framework and pySCFAD to explore the enormous design space arxiv.org/abs/2504.01187
Polar plot showing the errors of several machine-learning potentials on different test sets. Smaller is better here!
Plots showing the evaluation time per atom for several machine-learning potentials as a function of the number of atoms in a simulation. Smaller is better
PET-MAD has just landed! What if I told you that you can match & improve the accuracy of other "universal" #machinelearning potentials training on fewer than 100k atomic structures? And be *faster* with an unconstrained architecture that is conservative with tiny symmetry breaking? Sounds like magic!
Header of the webpage showing the title ("Atomistic Water model for MD") and the authors (Philip Loche, Marcel Langer, Michele Ceriotti)
Happy to share a new #cookbook recipe that showcases several new software developments in the lab, using the good ol' QTIP4P/f water model as an example. atomistic-cookbook.org/examples/wat.... TL;DR - you can now build torch-based interatomic potentials, export them and use them wherever you like!
Feeling a bit lonely here ...