The best time to plant a tree is 20 years ago.
The second best time is three hours before the paper deadline.
Godspeed Michael.
Fantastic video from @SciShow about our work that turns any shape into fair dice:
youtu.be/-gp7AbYD9NI?...
Get all the details on Hossein Baktash's page here: hbaktash.github.io
Hit me back in about 400k years*.
(*Approximate age of man-made fire.)
…point of a flow: it's obtained by minimizing the (square of the) "enclosed volume", plus a regularity term that prevents self-intersections.
So, when gradient flows are concatenated, the eversion follows a "U" in the energy landscape rather than a "∩".
I didn't look much into the history of midsurfaces for this eversion, but am curious to know what has been said by Gardner and others. This one is different in spirit from midsurfaces used for sphere eversion (like Kusner's halfway surface) in the sense that it's a stable rather than unstable…
Very happy to see that NVIDIA is still making demos.
Out of curiosity, did you consider (or try) GLB/GLTF? (Also supported by Finder viewer.)
Holy crap. What? Why? Who did that? Amazing.
"Fair dice" might make you think of perfect cubes with equal frequencies (say, 1/6 on all sides) 🎲
But "fair" really just means you get the frequencies you expect (say, 1/4, 1/4 & 1/2)
We can now design fair dice with any frequencies, and any shape! 🎉
hbaktash.github.io/projects/put...
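To make the notion of "fair" above concrete, here's a quick sanity-check sketch (not from the project itself, just an illustration): simulate a hypothetical three-outcome die with target frequencies 1/4, 1/4, 1/2 and confirm the empirical frequencies match.

```python
import random

# Hypothetical three-outcome "die" with target frequencies 1/4, 1/4, 1/2.
weights = [0.25, 0.25, 0.5]
n = 100_000
rolls = random.choices(range(3), weights=weights, k=n)

# Empirical frequencies approach the targets as n grows.
freqs = [rolls.count(i) / n for i in range(3)]
print(freqs)  # roughly [0.25, 0.25, 0.5]
```

A die realizing these frequencies is "fair" in exactly the sense described: you get the frequencies you expect, even though they're not all equal.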
Nicely produced clip by Matt Wein and Marylee Williams about our recent dice design project at @scsatcmu.bsky.social and @adobe.com
youtube.com/shorts/jD0ag...
🎲 🎥 🎉 💪
Tangent-point energy works for (2).
To incorporate (1) I might (strongly) penalize the distance from each data point p to the *closest* point on the curve. This encourages at least one point of the curve to pass through each data point, without pulling on the whole curve.
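A minimal sketch of the closest-point penalty described above, assuming the curve is represented by a dense polyline sample (the function name and weight are made up for illustration):

```python
import numpy as np

def closest_point_penalty(data_pts, curve_pts, weight=100.0):
    """Sum of squared distances from each data point to its *nearest*
    sample on the curve. Encourages the curve to pass through every
    data point without pulling on the rest of the curve."""
    # Pairwise distances, shape (num_data, num_curve).
    d = np.linalg.norm(data_pts[:, None, :] - curve_pts[None, :, :], axis=-1)
    return weight * np.sum(d.min(axis=1) ** 2)
```

If every data point lies on the sampled curve the penalty vanishes; in an optimization loop this term would simply be added to the tangent-point energy.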
Thanks for the thought-provoking example. 🙏
Reminds me of the Kahneman and Tversky experiments ("Steve is more likely to be a librarian than a farmer.") If LLMs are trained on human-generated text, it doesn't seem reasonable to expect them to be smarter than the average text-generating human. (Though they sometimes are anyway.)
On the other hand, I was too dumb to recognize the subtlety on first glance. So maybe the model is "just as bad as a human?"
So, in the absence of any priors or additional information, 1/3 is a reasonable-ish approximation. But I agree it would be far better if the model simply said "that's hard to answer because there are many ambiguous factors" (as I have).
This one's not so clear-cut: "baby" is an ambiguous age range, and a baby can be a twin or triplet, born in any order. Even a newborn could have younger step-siblings in rare cases.
We're also presuming it's a human baby, whereas other species have different life spans.
Not seeing it. What's wrong with this answer? (There are six possible permutations, but the other two siblings are interchangeable…)
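The counting argument above can be checked by brute force, under the assumption that all birth orders are equally likely (labels are made up for illustration):

```python
from itertools import permutations

# All birth orders of three children, listed oldest to youngest.
children = ["baby", "sib1", "sib2"]
orders = list(permutations(children))  # 6 orderings in total

# "baby" lands in the youngest slot in 2 of the 6 orders,
# since the two siblings are interchangeable -- hence 1/3.
baby_is_youngest = sum(o[-1] == "baby" for o in orders)
print(baby_is_youngest, len(orders))  # 2 6
```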
I adapted Unicodeit! (See the acknowledgment section on GitHub; also meant to mention that in the footer).
I had been using your website for years, but wanted something more integrated.
Thank you for contributing to open source. π
I got tired of mashing together tools to write long threads with fancy formatting and math symbols, so I wrote LaTweet!
It converts Markdown and LaTeX to Unicode that can be used in "tweets", and automatically splits long threads. Try it out!
keenancrane.github.io/LaTweet/
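For a taste of the kind of conversion involved, here's a sketch (not LaTweet's actual code) that maps ASCII letters into Unicode's Mathematical Bold letters, which sit at a fixed offset from U+1D400:

```python
def to_math_bold(s: str) -> str:
    """Map ASCII letters to Unicode 'Mathematical Bold' letters.
    (The bold ranges U+1D400-U+1D433 are contiguous; other styles,
    like italic, have holes that need special-casing.)"""
    out = []
    for c in s:
        if "A" <= c <= "Z":
            out.append(chr(0x1D400 + ord(c) - ord("A")))
        elif "a" <= c <= "z":
            out.append(chr(0x1D41A + ord(c) - ord("a")))
        else:
            out.append(c)  # digits, punctuation, etc. pass through
    return "".join(out)
```

Because these are ordinary Unicode codepoints rather than markup, the styled text survives in plain-text contexts like tweets.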
(More seriously: if the geometry of the apples was well-captured by the artist, and the color is unique to that geometry, I would be willing to bet the answer is "yes.")
If it began life as a drawing, is that question even well-posed?
Oh, you wrote a book on this stuff. I guess I didn't need to be quite so didactic in my response! ;-)
(But I take your point: it's hard to get all these different nuances across precisely in diagrams. That's why we also have mathematical notation to go along with the diagrams! :-) )
Well, f maps *any* point of the data space to the latent space, and g maps *any* point of the latent space to the data space. I.e.,
f : ℝⁿ → ℝᵏ,
g : ℝᵏ → ℝⁿ.
The point x is just one example. So it might in fact be misleading to imply that f gets applied only to x, or that g ends only at x̂.
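To make the domains concrete, here's a toy sketch with made-up linear maps (not any particular architecture), just to emphasize that f and g are defined on the whole spaces:

```python
import numpy as np

n, k = 8, 3  # illustrative data / latent dimensions
rng = np.random.default_rng(0)
W_enc = rng.standard_normal((k, n))
W_dec = rng.standard_normal((n, k))

def f(x):
    """Encoder: defined on *all* of R^n, not just one data point."""
    return W_enc @ x

def g(z):
    """Decoder: defined on *all* of R^k."""
    return W_dec @ z

x = rng.standard_normal(n)  # one particular data point
x_hat = g(f(x))             # its reconstruction in R^n
print(x_hat.shape)          # (8,)
```

Both maps happily accept any point of their domain; x and x̂ are just one input/output pair.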
P.S. I should also mention that these diagrams were significantly improved via feedback from many folks from here and elsewhere.
Hopefully they account for some of the gripes; if not, I'm ready for the next batch!
bsky.app/profile/keen...
Of course, there will be those who say that the representation diagram is "obvious," and "that's what everyone has in their head anyway."
If so… good for you! If not, I hope this alternative picture provides some useful insight as you hack in this space.
[End 🧵]
If you want to use or repurpose these diagrams, the source files (as PDF) can be found at
cs.cmu.edu/~kmcrane/Aut...
(Licensed under CC0 1.0 Universal)
Likewise, here's a simpler "implementation" diagram that still retains the most important idea of an *auto*-encoder, namely, that you're comparing the output against *itself*.
Personally, I find both of these diagrams a little bit crowded, so here's a simpler "representation" diagram, with fewer annotations (which might anyway be better explained in accompanying text).