My team just open-sourced our internal implementation of the OpenPBR BSDF from our path tracer: github.com/adobe/openpb...
Real-time Rendering with a Neural Irradiance Volume
Delivering fast inference (~1 ms/frame), a tiny memory footprint (1-5 MB), and 10x quality gains over probe-based methods at equivalent memory budgets, bringing neural rendering within reach of consumer hardware without RT or denoising.
arnocoomans.be/eg2026
It's being worked on! Details are still being shaken out.
Prerequisites have started landing in the spec and in Dawn (Chrome). E.g. the latest Chrome release now supports textures and samplers in let bindings (variables).
Some type of buffer reinterpreting scheme is also being debated.
So... back to the drawing board? Maybe copy what Doom DA did and cluster lights in world space rather than using any sort of RIS-based scheme.
No matter what your RIS output is, you have to write it back to the cache. You can't conditionally do that and stay correct.
So scrapping that and trying to think of other schemes, I'm basically just reinventing ReGIR (except I don't want to do ReGIR since it requires many visibility tests).
Update on bsky.app/profile/jms5...:
Turns out my code was bugged, and I never actually wrote to the cache...
Fixing that reveals that the whole scheme was wrong anyways - conditionally replacing cache cells with RIS outputs does not give an unbiased/correct output.
Only thing I'd be worried about is making shader writing _too_ accessible - I don't want people making a million pipeline variations instead of 20 uber-ish shaders.
Thank you - this is very interesting!
I know our users would love if we did this with Rust, so that people don't have to learn WGSL :P
But the basic idea of a general shader skeleton, and then allow users to compose pieces to modify inputs/outputs between stages of the skeleton seems good.
Anyone have examples of Material APIs they like in an engine, or links to good talks on this topic?
Right now Bevy materials let you reuse host code (texture/buffer bindings), but customizing the default material is essentially "go to github and copy paste the entire shader". Not very modular.
Unleash your creativity with GPU Zen 4!
Dive into cutting-edge techniques from industry experts to tackle complex graphics challenges and elevate your projects.
Available now on Amazon!
Standard Edition: a.co/d/0cvhJuSU
Deluxe Edition: a.co/d/07t0MZGz
#GPUZen #GameDev
Btw code (minus visibility weighing and alias table building) is open source if you want to try improving it github.com/JMS55/bevy/b...
Maybe cache the result of the 32 sample RIS in the cell and use that, so we're not doing the expensive RIS per-pixel?
Message me if you have ideas!
For GI, I don't really have a good idea of where to go next. Perhaps just ReGIR after all? I know Tom Clabault got good results tomclabault.github.io/blog/2025/re..., but it's fairly expensive.
For ReSTIR DI, I next want to try feeding the traditional froxel-based light clustering into the initial RIS. Could help with many-lights.
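For context, froxel clustering bins lights into screen-space tiles in x/y and exponential slices in view-space depth, so per-pixel you can pull the froxel's light list as RIS candidates instead of uniform-random lights. A hedged Rust sketch of the indexing only (the grid dimensions, near/far planes, and function name are made-up illustration, not Bevy's actual clustering code):

```rust
// Map a pixel + view-space depth to a froxel (frustum voxel) index.
// Screen-space tiles in x/y, exponential slices in z, as in common
// clustered/froxel light binning. All constants here are illustrative.
fn froxel_index(
    px: u32, py: u32, view_z: f32,
    screen: (u32, u32), grid: (u32, u32, u32),
    z_near: f32, z_far: f32,
) -> u32 {
    let (gx, gy, gz) = grid;
    let x = (px * gx / screen.0).min(gx - 1);
    let y = (py * gy / screen.1).min(gy - 1);
    // Exponential depth slicing: each octave of depth gets an equal
    // number of slices, so nearby geometry gets finer clustering.
    let t = (view_z / z_near).ln() / (z_far / z_near).ln();
    let z = ((t * gz as f32) as u32).min(gz - 1);
    x + gx * (y + gy * z)
}
```

The per-froxel light list built by the clustering pass would then replace the "8 random lights" in the initial RIS, which is where the many-lights win comes from.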
So yeah, total bust. I was hoping to get a good sampling distribution I could feed into ReSTIR DI, switch the irradiance caching part of the world cache to not store DI, and do NEE per-pixel for GI instead of immediately terminating in the cache, which would greatly reduce light leaks.
Final result... the cache (left) has higher variance than just brute-force 32-sample RIS (right). Complete failure :/
5a. If in cache, update weight and estimated visibility
5b. If not in cache, is visible, and contribution larger than one of the 8 lights already in the cache, replace it
6. Generate an alias table of the 8 lights and their final weights to accelerate per-pixel sampling in the rest of the renderer
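Step 6 can be done with Walker's alias method, which makes each subsequent sample O(1): one random bucket pick plus one comparison. A minimal CPU-side Rust sketch (the real version would run in WGSL; function names are mine, not Solari's):

```rust
// Build an alias table over the cached lights' weights so sampling
// costs one table lookup plus one comparison. Illustrative CPU sketch.
fn build_alias_table(weights: &[f64]) -> (Vec<f64>, Vec<usize>) {
    let n = weights.len();
    let total: f64 = weights.iter().sum();
    // Scale weights so the average bucket holds probability exactly 1.
    let mut prob: Vec<f64> = weights.iter().map(|w| w * n as f64 / total).collect();
    let mut alias = vec![0usize; n];
    let mut small: Vec<usize> = (0..n).filter(|&i| prob[i] < 1.0).collect();
    let mut large: Vec<usize> = (0..n).filter(|&i| prob[i] >= 1.0).collect();
    // Pair each under-full bucket with an over-full one.
    while !small.is_empty() && !large.is_empty() {
        let s = small.pop().unwrap();
        let l = large.pop().unwrap();
        alias[s] = l;
        // The over-full bucket donates (1 - prob[s]) of its mass to s.
        prob[l] = (prob[l] + prob[s]) - 1.0;
        if prob[l] < 1.0 { small.push(l) } else { large.push(l) }
    }
    // Numerical leftovers: any remaining buckets are exactly full.
    for i in small.into_iter().chain(large) { prob[i] = 1.0; }
    (prob, alias)
}

// Draw one sample using two uniform random numbers in [0, 1).
fn sample_alias(prob: &[f64], alias: &[usize], u1: f64, u2: f64) -> usize {
    let i = ((u1 * prob.len() as f64) as usize).min(prob.len() - 1);
    if u2 < prob[i] { i } else { alias[i] }
}
```

With only 8 entries a linear CDF scan would also be cheap, but the alias table keeps sampling branchless and constant-time, which matters on the GPU.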
1. Do RIS over 8 random lights
2. Separately, do RIS over the up to 8 cached lights
3. Combine the reservoirs of 1. and 2., capping the random reservoir's weight to 25% of the other reservoir's weight
4. Test visibility of the final sample
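Steps 1-3 boil down to weighted reservoir sampling plus a capped merge. A minimal CPU-side Rust sketch of that part (the actual implementation is WGSL; the `Reservoir` struct, the rng closure, and the method names here are mine):

```rust
// One RIS reservoir: keeps a single sample plus the running weight sum.
struct Reservoir {
    sample: usize, // index of the currently selected light
    w_sum: f64,    // sum of all RIS weights seen so far
}

impl Reservoir {
    fn new() -> Self {
        Reservoir { sample: usize::MAX, w_sum: 0.0 }
    }

    // Standard weighted reservoir update: keep the candidate with
    // probability weight / w_sum.
    fn update(&mut self, candidate: usize, weight: f64, rng: &mut impl FnMut() -> f64) {
        self.w_sum += weight;
        if self.w_sum > 0.0 && rng() < weight / self.w_sum {
            self.sample = candidate;
        }
    }

    // Step 3: fold the random-candidate reservoir into this one, capping
    // its weight at 25% of this reservoir's weight so the cached lights
    // keep most of the sampling mass.
    fn merge_capped(&mut self, other: &Reservoir, rng: &mut impl FnMut() -> f64) {
        let capped = other.w_sum.min(0.25 * self.w_sum);
        self.update(other.sample, capped, rng);
    }
}
```

The 25% cap is what keeps the cached lights dominant while still letting fresh random candidates work their way in over time.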
With just a few lights, it's not a big deal. With 441 lights like this scene, pure random sampling becomes more of a problem.
My idea is that I would extend my existing spatial world cache structure to also store the 8 best lights it has found so far, in addition to cached irradiance (DI+GI).
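One possible data layout for such a cell, as a hedged Rust sketch (all field, type, and method names here are my own illustration; the real cache lives in GPU storage buffers, not host-side structs):

```rust
const CACHED_LIGHTS: usize = 8;

// One entry in a cell's best-lights list.
#[derive(Clone, Copy)]
struct CachedLight {
    light_id: u32,             // index into the scene's light list
    weight: f32,               // running contribution estimate
    estimated_visibility: f32, // fraction of recent visibility tests passed
}

// One spatial world-cache cell: the existing cached irradiance (DI+GI)
// plus the 8 best lights found so far.
struct WorldCacheCell {
    irradiance: [f32; 3],
    lights: [CachedLight; CACHED_LIGHTS],
}

impl WorldCacheCell {
    fn new() -> Self {
        let empty = CachedLight { light_id: u32::MAX, weight: 0.0, estimated_visibility: 0.0 };
        WorldCacheCell { irradiance: [0.0; 3], lights: [empty; CACHED_LIGHTS] }
    }

    // If the candidate beats the weakest cached light, replace it.
    fn try_insert(&mut self, candidate: CachedLight) {
        let weakest = self
            .lights
            .iter_mut()
            .min_by(|a, b| a.weight.partial_cmp(&b.weight).unwrap())
            .unwrap();
        if candidate.weight > weakest.weight {
            *weakest = candidate;
        }
    }
}
```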
Several different parts of Solari (realtime pathtracing) need to sample direct lighting:
* Diffuse GI (which is already cached in world space)
* Specular GI pathtracing (when it can't terminate in the world cache)
* Per-pixel ReSTIR direct lighting, which would benefit from a better initial candidate distribution than uniform random
I've been experimenting with caching the best lights in world space to improve NEE sampling. Inspired by ReGIR, MegaLights, and www.yiningkarlli.com/projects/cac....
Thank you, kram looks pretty good! I think it's only missing BC6H, but I can add a second library to handle one format, I don't mind that too much.
Agreed on there not really being a great all-in-one option.
For the Bevy game engine, I've been evaluating different options, and there doesn't seem to be one single tool that can support every platform/SDR + HDR/supercompression/etc. github.com/bevyengine/b...
Do you have any suggestions for what would be good to use?
Really cool to see that AVIF and runtime GPU compression work so well!
Spark is not freely available though, right? (which is totally understandable - it's a ton of work!)
I've put together an updated version of the Sponza scene with uncompressed PNG and compressed AVIF textures. I wrote about the process and compared the results against KTX.
www.ludicon.com/castano/blog...
#webgpu #web3d #sparkjs
I'm finally writing up how Nanite Tessellation works. The first few blog posts are up. More will be coming.
graphicrants.blogspot.com/2026/02/nani...
If you're interested in taking things further, I have an open source WGSL PT that uses HW RT as part of the Bevy game engine, and I think your experience would be super valuable. Feel free to reach out if you want to chat!
jms55.github.io/tags/raytrac...
Awesome job! I always love seeing more people experimenting with realtime pathtracing.
New post on how to do real-time diffuse global illumination using surfels and #WebGPU: juretriglav.si/surfel-based... #threejs