Automatic differentiation in forward mode computes derivatives by breaking functions down into elementary operations and propagating derivatives alongside the values. It's efficient for functions with fewer inputs than outputs and for Jacobian-vector products, and can be implemented with, for instance, dual numbers.
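A minimal sketch of the dual-number idea in plain Python (names and the example function are illustrative, not from any particular library): each value carries a derivative that is updated by the chain rule as operations are applied.

```python
# Forward-mode AD via dual numbers: a Dual carries (value, derivative)
# and every arithmetic op propagates both.
class Dual:
    def __init__(self, val, dot=0.0):
        self.val = val  # primal value
        self.dot = dot  # derivative w.r.t. the chosen input

    def _wrap(self, other):
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        other = self._wrap(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = self._wrap(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1  # f'(x) = 6x + 2

x = Dual(4.0, 1.0)  # seed dot = 1.0 to differentiate w.r.t. x
y = f(x)
print(y.val, y.dot)  # 57.0 26.0
```

Seeding `dot = 1.0` on one input gives the partial derivative with respect to that input in a single forward pass, which is why the mode suits few-input functions and Jacobian-vector products.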
13.12.2024 06:00
37 likes · 10 reposts · 2 replies · 0 quotes
A figure from the attached paper showing the difference in output between a benchmark model and one with the super weight removed. The benchmark model generates a reasonable answer; the one missing the weight generates complete gibberish.
#ai, #ml or #llm people here, what do you think about the "super weight" paper?
TLDR: deleting a single weight from a 7B model renders it completely incoherent, destroying its ability to generate legible text.
arxiv.org/pdf/2411.07191
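The ablation the paper performs can be illustrated in miniature (this is a toy sketch, not the paper's method: the network, shapes, and the choice of which weight to zero are all made up here, whereas the paper locates specific "super weights" inside a real 7B LLM):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer net standing in for the LLM; the paper zeroes one scalar
# in a projection matrix found by its detection procedure (not shown).
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 4))

def forward(x, W1, W2):
    h = np.tanh(W1.T @ x)  # hidden activations
    return W2.T @ h

x = rng.normal(size=8)
baseline = forward(x, W1, W2)

# "Super weight" ablation: zero a single scalar entry (here, simply the
# largest-magnitude one) and rerun the forward pass.
i, j = np.unravel_index(np.abs(W2).argmax(), W2.shape)
W2_ablated = W2.copy()
W2_ablated[i, j] = 0.0
ablated = forward(x, W1, W2_ablated)

print(np.linalg.norm(baseline - ablated))  # outputs diverge after a one-scalar edit
```

The surprising claim is not that outputs change at all (any edit does that), but that for one particular scalar the change is catastrophic for text quality.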
01.12.2024 07:05
33 likes · 7 reposts · 3 replies · 0 quotes
Add me please
28.11.2024 03:51
1 like · 0 reposts · 0 replies · 0 quotes
What are these starter packs? What are the requirements to get in?
28.11.2024 02:41
0 likes · 0 reposts · 1 reply · 0 quotes
I noticed a lot of starter packs skewed towards faculty/industry, so I made one of just NLP & ML students: go.bsky.app/vju2ux
Students do different research, go on the job market, and recruit other students. Ping me and I'll add you!
23.11.2024 19:54
176 likes · 54 reposts · 101 replies · 4 quotes