New blog post is finally up: (Ab)using Shader Execution Reordering.
A bit of outside the box usage of SER (for better or worse).
debaetsd.github.io/posts/ser/
lol I never saw that repo, but it seems somebody reverse engineered our file format :)
Not sure if there is anything tech-in-depth remaining after the Unity acquisition (and the codecs were hardly publicly documented anyway).
your paper actually references some of the founders ([HPLV12])
nice results! This is almost identical to how we used to do virtual texturing with GraniteSDK, except it was a CPU-based custom codec (JPEG-inspired, with DCT/huffman/arithmetic/...) transcoding into DXT.
If you do it reversed (Belgium and NZ), these couple of weeks are actually the ones where you might be 'done for the day' before midnight :)
I wrote a blog post about mipmap level selection. pema.dev/2025/05/09/m...
New blog post! "Measuring acceleration structures", in which we will compare BVH costs on various GPU architectures and drivers and attempt to understand the details enough on AMD hardware to make sense of the numbers!
Reposts appreciated :)
zeux.io/2025/03/31/m...
We just posted a recording of the presentation I gave at DigiPro last year on Hyperion's many-lights sampling system. It's on the long side (30 min) but hopefully interesting if you like light transport! Check it out on Disney Animation's website:
www.disneyanimation.com/publications...
"objects that look the same"
These few words are carrying a lot of weight here :P this is either a 200-line Python script or a 2.5-million-LoC code base
my response to this either makes me a noob or a senior dev 🤷‍♂️
Me and my 700 chrome tabs agree that tabs are very useful. I just assumed people had multiple instances of the same program 🤷‍♂️ maybe so one of them can crash without taking out the other tabs, or maybe it is just a vfx thing where single ops like simulation/rendering/loading/… can take minutes
As cool as it looks, the main thought in my mind was "why"? Like, would you actually expect people to craft multiple meshes simultaneously in different tabs? Is this a real thing (and am I simply too far removed from typical day-to-day art workflows to see the practical value)?
depends on how you implement it? I use a DXC include handler in my home rendertoy to get includes and those get injected into my "asset dependency graph manager" (that is updated by change journals and/or importers etc). Works fine (though the current impl might not scale to real production complexity)
You can use a custom include handler in DXC that collects the files that are included. After that it is fairly simple to build a full dependency chain (just some hashmaps often work fine since the number of shader files tends to be rather low)
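To illustrate the "just some hashmaps" part: a minimal sketch of flattening per-file include lists into a full dependency chain. The file names and the pre-collected include table are made up for illustration; in practice the table would be filled in by the custom DXC include handler.

```python
# Hypothetical table as a custom include handler might record it:
# for each shader file, the files it directly #includes.
direct_includes = {
    "lighting.hlsl": ["brdf.hlsl", "common.hlsl"],
    "brdf.hlsl": ["common.hlsl"],
    "common.hlsl": [],
}

def full_dependencies(shader, table):
    """Flatten direct includes into the full transitive dependency set."""
    seen = set()
    stack = [shader]
    while stack:
        current = stack.pop()
        for dep in table.get(current, []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)  # also walk this file's own includes
    return seen

# lighting.hlsl transitively depends on both headers
print(sorted(full_dependencies("lighting.hlsl", direct_includes)))
# -> ['brdf.hlsl', 'common.hlsl']
```

With shader counts this low, the whole chain rebuild is cheap enough to rerun on every file change.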
It depends, but I would say it is often done in post. With a deep rendering/compositing pipeline you can get insanely HQ DoF, and it gives a lot of flexibility
I managed to arrive a day early at my interview for my very first job :) Don't let it get to your head though; at the end of the day, it is all about what you bring to the table