
Daniel Farina

@danfarina

547
Followers
2,503
Following
3,719
Posts
16.11.2023
Joined

Latest posts by Daniel Farina @danfarina

… looking over your work in real time.

11.03.2026 23:41 👍 0 🔁 0 💬 0 📌 0

I also enjoy and appreciate the sports and training metaphor: there is no substitute for practice. But in sports, you get outsized returns from coaching rather than simply struggling. LLMs are not quite accurate enough to be a coach, but they can help you “coach yourself” when you can’t afford someone

11.03.2026 23:41 👍 0 🔁 0 💬 1 📌 0

This is sometimes the best option when there is no human-quality-assured resource conjugated to the precise problem in front of you in that moment.

11.03.2026 23:37 👍 0 🔁 0 💬 1 📌 0

I have used it both to cut down large manuals and to rely on its memorized base as a form of lightweight, again “Wikipedia-ish,” information retrieval. But I have to wield it as a device, triangulating and doing my own work to check it.

11.03.2026 23:37 👍 0 🔁 0 💬 1 📌 0

I think this is false, except for the major practical problem of cheating, the dominant channel here. I think it is a wonderful resource for asking questions about intermediate steps in exercise problems or for generating problems.

11.03.2026 23:37 👍 0 🔁 0 💬 1 📌 0

I don’t love the Big Test dynamic, where three badly timed colds could significantly deteriorate your prospects in some fields, but we survived, I suppose.

11.03.2026 23:22 👍 0 🔁 0 💬 1 📌 0

You didn’t even have to be cheating to have that problem. Inefficient studying also caused it, something I learned from personal experience.

11.03.2026 23:20 👍 0 🔁 0 💬 1 📌 0

I suppose I’m more sanguine about this, except for the problem of cheating/game theory. I’m old enough that exams were on paper and carried most of the grade weight, and that was just the early 2000s. People who don’t develop adequate speed and precision will figure out their problem or wash out.

11.03.2026 23:20 👍 0 🔁 0 💬 1 📌 0

This presents a serious practical problem in the game-theory sense for students who insist on doing the homework at lower accuracy in order to learn. But the fact that we have Wikipedia-with-attention-heads is responsible in an indirect way, just as the Shahed drone is a descendant of mass-market smartphones.

11.03.2026 23:14 👍 0 🔁 0 💬 0 📌 0

I mean, isn’t that basically “huh, now we can conjugate the integration table with your homework and do your homework,” i.e. basically scalable, inexpensive, and in many cases accurate homework copying, for those who don’t take the training aspect seriously?

11.03.2026 23:11 👍 0 🔁 0 💬 2 📌 0

Is the causal channel you propose here that students lose their ability to write clearly, i.e. that the LLM is too forgiving of its input?

11.03.2026 23:09 👍 0 🔁 0 💬 1 📌 0

You can just as well have an LLM, backed up by a symbolic calculus program for accuracy, to generate problem sets to attack weak points. We just generally do not expect students to do that.

11.03.2026 23:06 👍 0 🔁 0 💬 0 📌 0

That may be true, and my remark was “not universally true.” In the above example, I am not an expert in matplotlib and am not seeking to become one, and my use of the LLM reflects that. The problem for students is a scalable resource for cheating, no different from copying homework.

11.03.2026 23:04 👍 0 🔁 0 💬 2 📌 0

<Fancy cat meme>

11.03.2026 22:47 👍 0 🔁 0 💬 0 📌 0

This is partially an economic argument of sorts: reviewing three plots of information in my domain, rather than swapping in charting-library design theory, is the mastery I seek overall. Put another way, the LLM is least able to substitute for the novel thing you know well and are working on.

11.03.2026 22:44 👍 0 🔁 0 💬 1 📌 0

There is real value in conjugating a memorized manual of matplotlib against the CSV I have right here, right now. Picking up the manual to remember how to put the legend on the right side rather than the left is not redeeming activity.

11.03.2026 22:35 👍 0 🔁 0 💬 1 📌 0

They have a lot memorized, in the same way English Wikipedia is 150GB of text and large models tend to be around a terabyte. This memorization is essential, I feel, even in the context of RAG. But notably, not petabytes, like a person. The comparison to an index is not mistaken.

11.03.2026 22:34 👍 0 🔁 0 💬 1 📌 0

And a plausible answer is: no smartphones. No massive economies of scale on GPS receivers, inertial sensors, MEMS devices, powerful processors, graphics chips, cameras, storage, batteries. The Shahed is the drone the smartphone wrought.

11.03.2026 21:04 👍 2 🔁 0 💬 0 📌 0

One of my ongoing (haha only serious) jokes is that the smartphone is the technology that will destroy humanity, not AI, usually in reference to extremely strange belief structures from mobile broadband.

But I wondered: why was the Shahed drone/missile not invented twenty years ago?

11.03.2026 21:04 👍 3 🔁 0 💬 1 📌 0

Yeah, I think it will look like that. I would like to see a genuinely Wikipedia-like training infrastructure and public responsibility. I think there’s a world where we can get there, an open training institution, not just open weights.

11.03.2026 20:40 👍 0 🔁 0 💬 0 📌 0

I compare LLMs to a non-consensual Wikipedia: they suck in our material without asking, versus our choosing to participate. How useful is a year-old Wikipedia to you? Actually, pretty useful… yet many inferences will degrade from stale information, unless you RAG in the new stuff to fix it.

11.03.2026 20:38 👍 1 🔁 0 💬 0 📌 0

Not that the latent weights will hit zero value for a long time. But we may see a regression in usefulness by not picking up the edge. That may be the economic equilibrium, though. Well, so be it, I guess; I prefer to do business in a unit-economics-positive environment.

11.03.2026 20:32 👍 1 🔁 0 💬 2 📌 0

I think we really do need updates, though. Thinking about my own use of the models, broad memorization is essential, not entirely unlike for a human. Otherwise I have to RAG in far too many background facts that are only semi-static, like Postgres 18 vs. 16.

11.03.2026 20:30 👍 1 🔁 0 💬 1 📌 0

It’s hard for me to believe that Netanyahu believed in the technical merit of forming an Iranian opposition through air strikes. But it solves a political problem today, and he’s reasonably confident he can figure out tomorrow’s political problem at that time.

11.03.2026 20:20 👍 0 🔁 0 💬 1 📌 0

Leaders routinely don’t do stuff the polity wants, especially if it’ll lead to a terrible governing outcome in the near term. But every leader is different, some are very prone to winging it when that future comes.

11.03.2026 20:08 👍 0 🔁 0 💬 1 📌 0

Similar idea: does an index in the back of a book letting you skip chapters “atrophy” you?

11.03.2026 20:04 👍 2 🔁 1 💬 1 📌 0

Not universally true. You can more quickly learn an adjacent area of expertise too. For example: you could plow through the manual for an entire charting library (but you don’t have the time) or check the rendered graphs and have the LLM more rapidly let you explore how to use a slice of it.

11.03.2026 20:04 👍 2 🔁 0 💬 1 📌 0

You think so? A war may be a populist move in Israel, so the probability is higher, but few statesmen are survivors as dogged and self-absorbed as Netanyahu.

11.03.2026 18:43 👍 0 🔁 0 💬 1 📌 0

My best guess from the polls is that this is a winning issue for Netanyahu: after things cooled off a bit in Gaza, people just started asking too many questions about all that inconvenient stuff from before. The guy is nothing if not a survivor, one day at a time.

11.03.2026 03:54 👍 0 🔁 0 💬 1 📌 0

Followed by, “was supporting the Islamic Republic in the 2020s effective altruism?” On the lesswrong of 2060.

11.03.2026 02:44 👍 4 🔁 0 💬 0 📌 0