I have just released a pre-print!
ddonatien.github.io/mopt-website/
Turning videos into 3D humans has never been easier or more accurate. MoCapade 3.0 improves everything over 2.0 while adding 3D camera tracking (and output) and multi-person capture. Try out the new state-of-the-art. Of course, there's more to come.
Surely not*
A good point could be made here about regulating the random dopamine providers that pop up everywhere (from Las Vegas to YouTube, Insta and TikTok). But why follow the route of stigmatisation based on a few anecdotal examples?
I don't know the context of writing this, but it seems very shallow and misguided. Is there any actual data behind this, or is it just rambling?
Sure, some people are dumb and gamble, but they are surely not the majority. Is it really the prospect of money that drives gambling?
In Korea, traffic lights stop the whole intersection at once, whereas in France it's only a few lanes.
I just shared my code for my latest paper, "GARField: Addressing the visual Sim-to-Real gap in garment manipulation with mesh-attached radiance fields"
Meet me Wed 11th, 10:30 at #ROBIO2024 for the oral presentation!
-> ddonatien.github.io/garfield-web...
#robotics #garment_manipulation
I guess they could also be classified as photogrammetry technology as it approximates 3D geometry from 2D images.
Super exciting work, Prof. Gupta. Regarding the real2sim approach, the paper states, "Digital twins are obtained directly from real-world videos [..] using photogrammetry methods such as Gaussian splatting and neural radiance fields". Do you use both? Only these two? Why not settle on just one?
Very insightful paper, thanks. In the same vein, I found this paper (I'm not related to the authors) an interesting read: arxiv.org/abs/2310.04411