Come join us in a great department in a great city! I'm not on the search committee, but feel free to be in touch if you have any questions or anything.
REMINDER!! We hope to see you all ✨TODAY✨ for our social! Check out queerinai.com/icml-2025 for further details!
1/ Queer in AI is hosting a social at #ICML2025 in Vancouver on July 16, and you're invited! Let's network, enjoy food and drinks, and celebrate our community. Details below…
As things get ever worse in the UK and the US releases sloppy political "reviews" of bad science (magnified by the NYT/Atlantic/etc), a good reminder for anyone who has "concerns" about trans youth care.
(Canada seems okay federally for now, but AB/SK are going in v bad directions.)
Screenshot of a slide: AI Scientists still suffer from hallucinations. "We use an LSTM-based neural network (Goodfellow et al. 2016)" is highlighted, with the comment "Should be Hochreiter and Schmidhuber".
AI Scientist making the funniest possible citation mistake (Goodfellow instead of Schmidhuber)
Join us tomorrow for our official social at #ICLR2025 on Saturday, 04/26, 5:00 - 6:30 pm in the GHJ room ("hidden" between Hall 2 and 3)! We can't wait to see you all and to mingle! After the social ends, we will also set out for an unofficial bar tour of downtown Singapore.
Thrilled to announce that Joshua's paper won one of three Outstanding Paper awards at ICLR.
Come to the poster on Friday afternoon (#376 in Hall 3) or the talk on Saturday (4:30 in Hall 1), and while you're at it snag him for a postdoc!
Curious why your LLM behaves strangely after long SFT or DPO?
We offer a fresh perspective: consider doing a "force analysis" on your model's behavior.
Check out our #ICLR2025 Oral paper:
Learning Dynamics of LLM Finetuning!
(0/12)
From the intro to a paper I'm reading:
"Theorem 3 is proven in Section 4 and Theorem 4 is proven in Section 3."
If for some reason you're *not* in Vancouver for NeurIPS, you can always read the paper I guess:
arxiv.org/abs/2404.04286
What happens when models, like LLMs, learn from each other across multiple generations?
@joshuaren.bsky.social + collaborators extend a cogsci framework called Bayesian iterated learning to think about, and avoid, pitfalls.
#NeurIPS2024 poster #3305, Friday 11-2
(Also, snag him for a postdoc!)
This analysis of how attention scores work in graph transformers was pretty surprising to me, and enabled the rest of this paper (and hopefully more!). Come check out the poster soon!
As in Tennessee, this ban does not apply to cis children, because the drugs themselves are not a safety hazard. The only thing at risk is a society's discomfort with allowing young people to make their own decisions about their own bodies.
Graph Transformers (GTs) can handle long-range dependencies and resolve information bottlenecks, but they're computationally expensive. Our new model, Spexphormer, helps scale them to much larger graphs. Check it out at NeurIPS next week, or the preview here!
[1/13]
#NeurIPS2024
A schedule of the planned events at the Queer in AI workshop at NeurIPS 2024. It will be held in West Meeting Room 202-204 at the Vancouver Convention Centre on 11 December. Opening Talk at 10 AM. Speaker session on "Designing Technology for Gender Transition" at 10:30 AM. Panel on "Queer Creative Storytelling" at 11 AM. Panel on "AI and Data Governance" at 1 PM. Co-working Session on "Participatory AI and Data Governance 1" at 2 PM. Sponsor Networking Session at 3 PM. Co-working Session 2 at 3:30 PM. Closing talk on "Queer Producing in Latin America" at 4:30 PM.
We have an incredible day planned for you at our #QueerInAI workshop at #NeurIPS2024 in Vancouver on December 11! You can join us in person or virtually via live-stream! Find out more about our events and our lovely speakers at queerinai.com/neurips-2024!! ✨✨
en.wikipedia.org/wiki/Poisson... has an upper and lower bound on the probability but not an exact value…
Oh, guess I should add that this is from arxiv.org/abs/2409.06890
Screenshot from the linked paper, showing Proposition 4, which relates HSIC to the mutual information. The proof is based on Pinsker's inequality and the Bretagnolle-Huber bound, with a pointer to Canonne (2023).
I used the same note recently; it's a good note!
bsky.app/profile/quai...
Well, I don't know what else I was expecting.
The main reason I wrote arXiv-collector was to use biblatex cleanly, which can otherwise be a huge pain. If you don't use biblatex, either is fine; each has some features the other doesn't (though I prefer my way, obviously :p).
Periodic reminder: submitting on #arXiv is great! But the source becomes available: that's good per se, but remember to remove any LaTeX comments you don't want made public.
Thankfully, there are tools that do that (and more) easily and automatically for you! E.g., arXiv-collector:
github.com/djsutherland...
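As a toy illustration of what the comment-stripping step involves, here is a hypothetical Python sketch. This is not arXiv-collector's actual implementation (a real tool also has to special-case verbatim environments, collect dependencies, and so on); it just shows the basic idea of dropping unescaped `%` comments:

```python
import re

def strip_latex_comments(source: str) -> str:
    """Drop unescaped '%' comments from LaTeX source, line by line.

    Toy sketch only: it preserves escaped \\% but ignores verbatim
    environments, which a real tool must handle separately.
    """
    kept_lines = []
    for line in source.splitlines():
        # Consume characters that are either (a) not '\' or '%', or
        # (b) a backslash followed by any character (so \% survives);
        # everything from the first bare '%' onward is the comment.
        match = re.match(r"(?:[^\\%]|\\.)*", line)
        kept_lines.append(match.group(0))
    return "\n".join(kept_lines)

print(strip_latex_comments("A 50\\% raise % TODO: delete this rant before submitting"))
```

The `(?:[^\\%]|\\.)*` pattern always matches (possibly an empty prefix), so every line is handled uniformly and escaped percent signs pass through untouched.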