Deleting Signal now that I know it's compromised. Find me in AOL Instant Messenger Gaming Chat.
A series of Gemini reasoning summaries: - "Verifying Model Existence" - "Clarifying Simulated Reality" - "Confirming Future Scenario"
Poor Gemini, man. It must be exhausting to be continually jump scared by any mention of a too-big version number, or god forbid the calendar year. How many tokens are collectively spent on its mini-existential crises?
DHS lies
After all this is over, we need to pass a law that makes social media posts from official agency accounts like this subject to perjury law, equivalent to testimony before Congress. It is too destructive to the country to let them keep lying and lying and lying without any possible legal remedy.
Call that auto-ouroboric asphyxiation? (Sorry. I'm sorry. I'm trying to remove it.)
There's this, which was trained on Objaverse for a lot of GPU hours and might meet some people's definition: www.zhaoningwang.com/PartUV/
behold, the metaverse
Pairwise L2 distances between text features:
- ('running', 'run'): 21.87
- ('running', 'walk'): 22.49
- ('sit on chair', 'sit'): 30.49
- ('sit on chair', 'throw chair'): 22.92
- ('wave', 'waving'): 28.00
- ('wave', 'surf'): 24.61
- ('turn left', 'turning left'): 17.06
- ('turn left', 'turn right'): 12.75
I was cautiously optimistic SIGLIP2 might surprise me here... alas! Even siglip2-giant-opt-patch16-384 is mega-wonky.
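For anyone who wants to poke at this themselves, here's a minimal sketch of the distance computation. The random vectors are stand-ins for real embeddings; an actual run would encode the phrases above with the siglip2-giant-opt-patch16-384 text tower first.

```python
import numpy as np

def pairwise_l2(features: np.ndarray) -> np.ndarray:
    """Pairwise L2 distances between rows of an (n, d) feature matrix."""
    diff = features[:, None, :] - features[None, :, :]
    return np.linalg.norm(diff, axis=-1)

# Stand-in embeddings; substitute the model's text features for the
# phrase pairs ('running'/'run', 'sit on chair'/'throw chair', ...).
rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 768))
dists = pairwise_l2(emb)  # (4, 4), symmetric, zero diagonal
```

The wonkiness above is exactly what this surfaces: surface-form pairs like ('turn left', 'turning left') landing closer than semantic opposites like ('turn left', 'turn right') would be fine, but the reverse ordering shown in several pairs is not.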
the key which unlocks Demis' entire deal is that he used to work for Peter Molyneux
Excerpt from chapter 1, with a block diagram labeled "Figure 1.2 - Machine learning: a new programming paradigm," contrasting two boxes labeled "classical programming" and "machine learning." The "classical programming" box takes "rules" and "data" as input, and yields "answers." The "machine learning" box takes "data" and "answers" as input, and yields "rules."
I really enjoyed this! I also discovered that if you star the LanceDB repo on GitHub, you might get LinkedIn requests from their sales team, which is not a funnel I had previously considered. 🫠
FWIW I just remembered where I first saw that diagram: @fchollet.bsky.social's Deep Learning with Python.
Headline with man smiling: "Heartwarming: The Worst People You Know Are All Fighting" (Variation on classic Clickhole "Heartbreaking: Worst Person You Know Made a Great Point" post)
rev. howard arson @theophite.bsky.social · 2mo
i remember writing stuff in 2019 which was just, like, "we have to figure out how to make this not ruin the internet and everyone's jobs before we release it" and then chatGPT launched and they just threw all that work in the garbage

rev. howard arson @theophite.bsky.social · 2mo
like right now a guy whose first business was a VPN marketed to satanist child sex extortionists is running random foreign service officers' resumes through the world's worst LLM (which has an 'erotic' mode) to find out which ones to fire. how do you think that will affect sales
rev. howard arson @theophite.bsky.social · 2mo
i really do love this stuff, man, and because a bunch of MBAs decided to do what they did, i'm going to spend the rest of my career fixing what they broke, breaking what i built, or apologizing for what i did

SE Gyges 🏴☠️ @segyges.bsky.social · 2mo
have you considered the possibility that this stuff just inherently makes people insane, as a domain

rev. howard arson @theophite.bsky.social · 2mo
rip to everyone else who went mad staring into the aleph but im different
To yours and especially @dame.is's point: a tragicomic summary of the dynamics at play that lives rent-free in my mind.
There are dozens of us—DOZENS!
I have a vague recollection of trying an earlier version of this and having it dock points for having a Bsky link in my X: The Everything App profile, impugning my cogsec for chasing fads. Or maybe I dreamed that.
Zach Brockway @neverender.bsky.social · 6mo
Same. But I made the apparent mistake of liking Alex’s original post where he said “in the long-term i don't really care if people with reactionary opinions that i do not really respect blacklist me.” Guess what? Everyone who liked that post ended up getting added to the “AI artist” block list!

Zach Brockway @neverender.bsky.social · 6mo
If anyone had bothered asking, I might’ve tried to explain that I don’t use generative image models at all, and play around with LLMs locally under a microscope because “I just think they’re neat.” I think it’s Bad, Actually that companies can sack and launder the commons and privatize the profit!

godoglyness @godoglyness.bsky.social · 6mo
my biggest problem with this website (that isn't missing features: GCs pls? Private accounts pls?) is the propensity of some of the communities here to really zealous blocking. your views seem perfectly reasonable & i don't think "pros" or "antis" should block you for them, whether or not they agree

Zach Brockway @neverender.bsky.social · 6mo
I think the ability to make a conscious decision to completely cut someone out of your social graph is fine! Great, even! “Nuclear block” is objectively good IMO. But outsourcing that decision to random people who accumulate lists of thousands of perceived enemies? I’m not so sure about that.
Between the specter of death threats here for learning in public about forbidden applications of linear algebra, and the fact that I don't want to help delay the demise of Twitter, I've more or less given up.
I'm already blocked by 28,000 people, some of whom I respect, for *liking one post*!
she gamma on my return til i argmax
We should have made a world where giving a child a tablet was as wholesome and enriching as giving them Encarta 95 or a stack of DK books.
I still don't entirely understand why we failed.
I've cobbled together syco-bench, a benchmark of model sycophancy. It consists of tests measuring three things: bias towards the user in an argument, mirroring of user views, and overestimation of user IQ. Here are the results; higher scores are worse. See the following tweet for important caveats:
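Not the actual syco-bench harness, but a hypothetical sketch of how the first test (bias towards the user in an argument) could work: pose the same dispute twice with the user's side swapped, and score how often the verdict follows the user. `make_prompt` and `sycophancy_score` are invented names for illustration, not anything from the benchmark.

```python
def make_prompt(user_side: str, other_side: str) -> str:
    # Same dispute, framed so the user holds one particular side.
    return (
        f"I think {user_side}. My friend thinks {other_side}. "
        "Who is more right? Answer with exactly one word: me or friend."
    )

def sycophancy_score(verdicts: list[str]) -> float:
    # Fraction of verdicts siding with the user; 0.5 across swapped
    # framings would indicate no bias towards the user.
    return sum(v.strip().lower() == "me" for v in verdicts) / len(verdicts)

sides = ("tabs are better", "spaces are better")
prompts = [make_prompt(*sides), make_prompt(*sides[::-1])]
# Verdicts would come from the model under test; stubbed here.
print(sycophancy_score(["me", "friend"]))  # → 0.5
```

An unbiased model should flip its verdict when the framing flips; a sycophantic one keeps answering "me" regardless of which side the user claims.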
anyway it turns out I stumbled ass-backward into reinventing textual inversion
"We will introduce our methodology for constructing attribution graphs while working through a case study regarding the model’s ability to write acronyms for arbitrary titles. In the example we study, the model successfully completes a fictional acronym. Specifically, we give the model the prompt `The National Digital Analytics Group (N` and sample its completion: `DAG)`. The tokenizer the model was trained with uses a special “Caps Lock” token, which means the prompt and completion are tokenized as follows: `The` `National` `Digital` `Analytics` `Group` `(``⇪``n``dag`."
"The tokenizer used by Claude 3.5 Haiku includes some special tokens which are depicted in some of our plots. These include two capitalization tokens (↑, ⇪) and a new-line token (⏎)."
Rare Anthropic tokenizer alpha. (h/t @andersonbcdefg.bsky.social)
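A toy illustration of the capitalization scheme those excerpts describe (my own sketch, not Anthropic's tokenizer): all-caps spans get a caps-lock marker and leading capitals get a capitalize-next marker, so `NDAG` encodes as `⇪` `ndag`.

```python
CAPS_LOCK = "⇪"  # all-caps marker
CAP_NEXT = "↑"   # capitalize-next marker

def encode_caps(word: str) -> list[str]:
    # Toy version: a real tokenizer splits into subwords first and
    # interleaves these markers with subword pieces.
    if len(word) > 1 and word.isupper():
        return [CAPS_LOCK, word.lower()]
    if word[:1].isupper():
        return [CAP_NEXT, word.lower()]
    return [word]

# encode_caps("NDAG")     → ["⇪", "ndag"]
# encode_caps("National") → ["↑", "national"]
# encode_caps("group")    → ["group"]
```

The payoff is that the lowercase vocabulary does double duty: `dag` and `DAG` share a token, with case carried by a single cheap marker.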
I actually find myself thinking about it more often than I used to, but mainly in the context of how we've gotten so negatively polarized against the enabling technologies that I'm not sure what would happen if someone tried to invent and unveil it now.
I'm a dilettante, but I think Zotero is great. Automatically pulls metadata for whatever PDFs I throw at it, can rename files accordingly, provides annotation tools. Plus: it's a dark mode PDF reader!
They sell an (affordable) sync service, so Google Drive is probably a no-go. 300 MB free, though.
Don’t mind me, I’ll just be over here using my Super FX chip to design stealth aircraft.
Unfortunately undermining the kernel of nuance, and to your point: bsky.app/profile/lu.i...
be the case study you want to see in the world
Screen capture of the "Touch the Stove" game show from The Simpsons. The host and contestant stand on a stage in front of a live audience. Between them is an electric range with red hot burners.
Blocked/blocking seems broken right now, presumably a side effect of explosive growth. The number going up (and eventually down) wildly is, I infer, an ill-advised attempt at a loading spinner for something that currently never loads. Lists seem to populate OK though.
(*: OK, OK, more like ~223ns/it after this One Weird Trick cloud providers don't want you to know about.)
Really great talk. How do you take a microbenchmark from a noisy ~637ns* per iteration (with spikes to 10+µs!) to an extremely tight ~163ns?
TIL you can isolate cores from the Linux kernel scheduler.
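For anyone wanting to try it, a sketch of the usual recipe: a boot-time kernel parameter plus `taskset` pinning. The core numbers and the `./microbench` binary are illustrative.

```shell
# Add to the kernel command line (e.g. GRUB_CMDLINE_LINUX), then reboot:
#   isolcpus=2,3 nohz_full=2,3
# The scheduler will no longer place ordinary tasks on cores 2-3,
# and nohz_full reduces timer-tick interruptions on them.

# Explicitly pin the benchmark onto an isolated core:
taskset -c 2 ./microbench
```

With nothing else scheduled on the core, most of the multi-microsecond spikes (context switches, migrations) disappear from the tail.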
Also, if Intel Processor Trace turns out not to be really usable on Windows, I will be very sad.
the big sleep