Élie Michel

@elie-michel

Research Scientist at #Adobe. PhD in Computer Graphics. Author of #LearnWebGPU C++. Creative Coding. Indie game. VFX. Opinions are my own. Writes in 🇫🇷 🇺🇸. https://portfolio.exppad.com https://twitter.com/exppad

438 Followers
227 Following
106 Posts
Joined 02.11.2023

Latest posts by Élie Michel @elie-michel

Real Time Multiscale Rendering of Dense Dynamic Stackings Dense dynamic aggregates of similar elements are frequent in natural phenomena and challenging to render under full real time constraints. The optimal representation to render them changes drastically...

Very nice approach! Using textured primitives is definitely a good idea to decorrelate the visibility information from the texture signal :)

(Plus, I love impostors/billboards, they remind me of my first long paper perso.telecom-paristech.fr/boubek/paper... -- shameless plug)

04.06.2025 07:02 👍 1 🔁 0 💬 0 📌 0

It's not easy for sure, a chicken-and-egg problem, which is why our message is at least as much for reviewers as it is for authors! Nobody is to blame individually, it's just something we should collectively discuss.

03.06.2025 19:41 👍 2 🔁 0 💬 0 📌 0

I agree, but part of the problem is that what the "average PC in 5-6 years" looks like may indirectly depend on what we do in research. If we only test on this hardware, it will indeed naturally become the hardware people use. :)

03.06.2025 15:59 👍 1 🔁 0 💬 1 📌 0
MOTIVATION
Graphics Processing Units (GPUs) are at the core of Computer Graphics research. These chips are critical for rendering images, processing geometric data, and training machine learning models. Yet, the production and disposal of GPUs emit CO2 and result in toxic e-waste [1].

METHOD
We surveyed 888 papers presented at SIGGRAPH (the premier conference for computer graphics research) from 2018 to 2024, and systematically gathered the GPU models cited in the text.

We then contextualized the hardware reported in the papers with publicly available data on consumer hardware [2, 3].
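To illustrate the kind of systematic gathering described above, here is a minimal sketch; the regex, function name, and example text are hypothetical illustrations, not the survey's actual pipeline:

```python
import re
from collections import Counter

# Hypothetical pattern for common consumer GPU names (RTX/GTX/Titan lines);
# a real survey would need a broader, curated list of models.
GPU_PATTERN = re.compile(
    r"\b(?:GeForce\s+)?(?:RTX|GTX|Titan)\s?\d{3,4}(?:\s?Ti)?\b",
    re.IGNORECASE,
)

def extract_gpu_models(text: str) -> Counter:
    """Count GPU model strings mentioned in a paper's text."""
    return Counter(m.group(0).strip() for m in GPU_PATTERN.finditer(text))

# Illustrative input, not a real paper excerpt.
counts = extract_gpu_models(
    "We render on an RTX 3090; training used a GTX 1080 Ti and another RTX 3090."
)
print(counts)  # Counter({'RTX 3090': 2, 'GTX 1080 Ti': 1})
```

Aggregating such counts per publication year is then enough to compare the reported hardware against the consumer-hardware surveys [2, 3].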

REFERENCES
[1] CRAWFORD, KATE. The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press, 2021.
[2] STEAM. Steam Hardware Survey. https://store.steampowered.com/hwsurvey
[3] BLENDER. Blender Open Data. https://opendata.blender.org


Ever wondered how badly we're all addicted to buying new GPUs in graphics labs?

Come see our talk at #SIGGRAPH2025 to discuss how we can collectively move "Towards a sustainable use of GPUs in Graphics Research"

with @elie-michel.bsky.social, @axelparis.bsky.social, Octave Crespel and Felix Hähnlein

01.06.2025 02:12 👍 50 🔁 20 💬 2 📌 1
Left: WGSL, GLSL and SPIR-V are possible inputs of naga. Right: MSL, GLSL, HLSL, SPIR-V and WGSL are possible outputs of naga.

Note that 'naga' is the equivalent tool developed by Firefox. It can easily be installed using cargo (the Rust package manager): github.com/gfx-rs/wgpu/...

01.06.2025 10:43 👍 4 🔁 1 💬 0 📌 0
Left: WGSL and SPIR-V are possible inputs of tint. Right: MSL, GLSL, HLSL, SPIR-V and WGSL are possible outputs of tint.

'tint' is the shader compiler developed by Chrome to implement #WebGPU. It has a nice command line interface, but so far there is no official build out...

Wait no more! I share here precompiled binaries of tint CLI: github.com/eliemichel/d...

01.06.2025 10:43 👍 13 🔁 2 💬 1 📌 0

Important notes:
🔹 This rewrite is *WIP*; refer to the main section (w/o "next" in the URL) for further chapters.
🔹 This only works with Dawn for now because it is closer to what the v1.0 of WebGPU will be.
🔹 The accompanying code "stepXXX-next" is not up to date yet.

29.05.2025 09:53 👍 2 🔁 0 💬 0 📌 0
Screenshot of the Hello Triangle chapter

I've just realized something. It makes much more sense to have the "hello triangle" pointing upside down when learning #WebGPU!

👉 The ongoing "Next" rewrite of my guide reached the Hello Triangle chapter 🥳 eliemichel.github.io/LearnWebGPU/...

29.05.2025 09:47 👍 6 🔁 0 💬 1 📌 0
Wilhem receiving the award on stage

πŸ…Honored to have been awarded at #Eurographics25 for our paper on #LipschitzPruning to speed-up SDF rendering!

πŸ‘‰ The paper's page: wbrbr.org/publications...

Congrats to @wbrbr.bsky.social, M. Sanchez, @axelparis.bsky.social, T. Lambert, @tamyboubekeur.bsky.social, M. Paulin and T. Thonat!

19.05.2025 09:54 👍 25 🔁 4 💬 0 📌 0

Nice writeup from @mattkeeter.com inspired by our recent work on #LipschitzPruning!

15.05.2025 11:09 👍 4 🔁 0 💬 0 📌 0

PS: I'll be in #Eurographics next week, feel free to get in touch!

10.05.2025 22:23 👍 0 🔁 0 💬 0 📌 0
Screenshot of https://eliemichel.github.io/LearnWebGPU

Screenshot of my custom version of RenderDoc

Screenshot of the README of Slang x WebGPU

New update post about the 🚧 Ongoing work! 🚧 in my LearnWebGPU C++ guide!

On Patreon: www.patreon.com/posts/ongoin...
On Discord: discord.gg/2Tar4Kt564

Outline:
🔹 The LearnWebGPU guide
🔹 WebGPU-distribution
🔹 RenderDoc
🔹 WebGPU-C++
🔹 WebGPU spec
🔹 Dawn
🔹 wgpu-native
🔹 GLFW and SDL
🔹 Slang x WebGPU

10.05.2025 22:21 👍 19 🔁 4 💬 1 📌 0
Left: an input CSG tree and a much smaller pruned tree computed using our method.
Right: a rendered scene showing the number of active nodes per cell. Our method reduces the active nodes to less than 20 from the initial 6023 nodes of the input tree.

I am proud to announce our Eurographics 2025 paper "Lipschitz Pruning: Hierarchical Simplification of Primitive-Based SDFs"! With Mathieu Sanchez (joint first author), @axelparis.bluesky.social, @elie-michel.bsky.social, Thibaud Lambert, @tamyboubekeur.bsky.social, Mathias Paulin and ThΓ©o Thonat.

07.05.2025 13:37 👍 45 🔁 18 💬 2 📌 1

Starting to track down the usage of #WebGPU resources during a frame in my custom #RenderDoc driver!

(Don't mind the usage field, it's a placeholder value for now)

04.05.2025 22:43 👍 5 🔁 0 💬 0 📌 0

Yeah I totally had this issue as well ^^ I can add warnings indeed!

02.05.2025 17:03 👍 2 🔁 0 💬 0 📌 0

I don't think there is such a toggle, but that would indeed be useful here!

02.05.2025 14:38 👍 1 🔁 0 💬 0 📌 0
WebGPU events in RenderDoc, with nesting of what happens between BeginRenderPass and RenderPassEnd

Starting to nest events in the #WebGPU driver for #RenderDoc. How do you think I should handle these "WriteBuffer" calls that occur while encoding a "RenderPass"?

Because chronologically they are submitted before the render pass, even though the API call occurs after.

02.05.2025 10:53 👍 3 🔁 0 💬 1 📌 0
Close-up on the captured WebGPU API calls

30.04.2025 07:41 👍 2 🔁 0 💬 0 📌 0
A screenshot of RenderDoc where one can see calls to the WebGPU API

Who would be interested in a version of #RenderDoc that captures and replays calls to the #WebGPU API (rather than calls to the underlying DirectX/Vulkan/Metal API)?

This is an early test that only lists the API calls, but it's already promising! Will share when usable.

30.04.2025 07:38 👍 29 🔁 3 💬 3 📌 0

It's very nice that you intend to talk about this with your students 🙏 Don't forget that fossil fuels are not the only issue: rare-earth materials (and the rate at which we renew hardware) and water consumption are huge problems too!

27.04.2025 11:38 👍 2 🔁 0 💬 1 📌 0
Google falling short of important climate target, cites electricity needs of AI Google, which has an ambitious plan to address climate change with cleaner operations, came nowhere close to its goals last year, according to the company’s annual Environmental Report Tuesday.

Google: apnews.com/article/clim... (Self report source: www.gstatic.com/gumdrop/sust... )

Microsoft: www.bloomberg.com/news/article... (Self report source: query.prod.cms.rt.microsoft.com/cms/api/am/b... )

Amazon self report: sustainability.aboutamazon.com/2023-sustain...

27.04.2025 11:33 👍 1 🔁 0 💬 1 📌 0

Very compelling evidence that something isn't going right is the divergence between compute providers' announced energy-reduction plans and reality.

Data comes from the providers' own annual reports (which they agreed to publish a couple of years back, transparency FTW), links ⬇️

27.04.2025 11:33 👍 1 🔁 0 💬 1 📌 0

This report from Wells Fargo is also very informative (and worrisome) and points to many interesting sources: www.wellsfargoadvisors.com/research-ana...

27.04.2025 11:24 👍 1 🔁 0 💬 1 📌 0
Global electricity demand from data centres could double towards 2026

The most commonly cited source about the growth of energy needs for compute is this IEA report: iea.blob.core.windows.net/assets/6b2fd... (screenshot of p. 31)

27.04.2025 11:22 👍 1 🔁 0 💬 1 📌 0
The Environmental Impact of Computer Graphics

@mvandepanne.bsky.social Here is the page of the workshop in question: eliemichel.github.io/Environmenta... !

27.04.2025 11:19 👍 3 🔁 0 💬 1 📌 0

I think it happens when one of the selected objects has a material whose diffuse color is not an image. I should try to clarify the error messages, could you share one file where it fails by any chance?

06.04.2025 13:41 👍 1 🔁 0 💬 0 📌 0

🥳 A huge congrats to all my co-authors! Marzia Riso (marzia-riso.github.io), @axelparis.bsky.social, @vdeschaintre.bsky.social, Mathieu Gaillard (www.mgaillard.fr), Fabio Pellacini (xelatihy.github.io) 8/8

03.12.2024 08:33 👍 2 🔁 0 💬 0 📌 0
Manipulation results on a cheese and a robot arm

Comparison with libfive solver

The results: Our approach provides more expressive manipulation capabilities than solutions that do not augment the implicit function to help track down a point's identity!

More info and results on the web page! 👉 eliemichel.github.io/SdfManipulat... 7/8

03.12.2024 08:33 👍 1 🔁 1 💬 1 📌 0

The solution: We #augment the output of the implicit function so that it also returns a vector that represents the #identity of the point. This then enables using the conveniently named Implicit Function Theorem en.wikipedia.org/wiki/Implici..., giving in the end a simple formula. 6/8
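For intuition, here is a sketch of that Implicit Function Theorem step (the standard derivation; the paper's exact formula may differ):

```latex
% A tracked point p(\theta) stays on the augmented level set:
%   f(p(\theta), \theta) = 0
% Differentiating w.r.t. the procedural parameters \theta:
\frac{\partial f}{\partial p}\,\frac{\mathrm{d}p}{\mathrm{d}\theta}
  + \frac{\partial f}{\partial \theta} = 0
\quad\Longrightarrow\quad
\frac{\mathrm{d}p}{\mathrm{d}\theta}
  = -\left(\frac{\partial f}{\partial p}\right)^{-1}
    \frac{\partial f}{\partial \theta}
```

Augmenting the output of f with an identity vector is what makes the Jacobian ∂f/∂p square, so that (where the theorem applies) it is invertible and the derivative has this simple closed form.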

03.12.2024 08:33 👍 0 🔁 0 💬 1 📌 0
We need to find a common ground (center) across multiple variations of the same shape (left and right). We call this the Co-parameterization space.

The challenge: The constrained point (the one we click and drag) a priori only exists in the #current version of the shape: it's only once we can track it down in other #variations that we may define its #derivative w.r.t. procedural parameters.

Not easy, when the entire shape is only #implicit! 5/8

03.12.2024 08:33 👍 0 🔁 0 💬 1 📌 0