Yeah, it brings you what I call the good old cognitive clean-up (a.k.a. freeing up your RAM)
"Identity Theft as Systemic Risk" freakonometrics.hypotheses.org/88724
Come to think of it, it's not very nice that the size of a GitHub repository isn't generally mentioned in the README.
Peer review needs to be fundamentally rethought to value real expertise and not just count a number of eyeballs. So much of the value we've gotten in the past is very automatable!
We really need to think about slowing down for the sake of better science.
I think the flood of submissions (something like 10x more) isn't producing 10x more meaningful progress for the community.
The system is all broken, my friend. I also wonder why they are not implementing stuff already. LLM reviewers are kind of a thing now, but as you said, I don't think it addresses the root of the problem.
Learn to multiply, divide, etc. by hand before using a calculator. Historically it's always been the same. They'll have plenty of time to get up to speed with LLMs once they enter a company and their boss goes "why aren't you using AI?" LOL
Topic 1 sounds interesting enough!
1. dissemination & awareness,
2. nurturing a community,
3. education in scientific praxis via working with peers
I don't see how that differs from an international conference. As seen in CVPR, ICLR, NeurIPS, etc., this "competition" mentality only leads to rushed manuscripts and BS reviews.
It was a domestic computer vision conference for which I had the chance to be AC. Given my previous experiences, I wanted to clarify the reviewers' role not as "rejecting bad papers" but "getting all papers in shape so they could be accepted to the conference".
The nice CVPR lottery. You get papers like this accepted and others rejected because they are not novel enough. What a circus.
Amen
I would put the burden on the reviewers to write actually useful and specific reviews, so authors can finally address what reviewers think is a flaw. Similarly, I would put the burden on the reviewers to accept/reject papers based on those specific points, for the sake of the authors' mental health.
Bravo!
Can you do something about the governments that are introducing AI into their military forces?
The future of peer review?
When an LLM-written paper is reviewed by an LLM.
"But perhaps this can be resolved by the realization that while cleverness and intelligence are somewhat correlated traits for humans, they are much more decoupled for AI tools (which are often optimized for cleverness), and viewing the current generation of such tools primarily as a stochastic generator of sometimes clever - and often useful - thoughts and outputs may be a more productive perspective when trying to use them to solve difficult problems."
"I doubt that anything resembling genuine AGI is within reach of current AI tools." – Terence Tao
mathstodon.xyz/@tao/1157223...
Why Scaling Is Not Enough

I believe in scaling laws and I believe scaling will improve performance, and models like Gemini are clearly good models. The problem with scaling is this: for linear improvements, we previously had exponential growth in GPUs, which canceled out the exponential resource requirements of scaling. This is no longer true. In other words, we previously invested roughly linear costs to get linear payoff, but now it has turned into exponential costs.

Frontier AI Versus Economic Diffusion

The US and China follow two different approaches to AI. The US follows the idea that there will be one winner who takes it all: the one that builds superintelligence wins. Even falling short of superintelligence or AGI, if you have the best model, almost all people will use your model and not the competition's model. The idea is: develop the biggest, baddest model and people will come. China's philosophy is different. They believe model capabilities do not matter as much...
"Why AGI Will Not Happen" by Tim Dettmers.
timdettmers.com/2025/12/10/w...
This essay is worth reading. It discusses the diminishing returns (and risks) of scaling, and the contrast between West and East: the "winner takes all" approach of building the biggest thing vs. a long-term focus on practicality.
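The cost argument in the excerpt ("linear costs for linear payoff has turned into exponential costs") can be sketched numerically. This is a toy illustration, not Dettmers' actual numbers: the power-law exponent 0.05 is an assumed value, chosen only to show the shape of the curve.

```python
# Toy power-law "scaling law": loss(C) = C ** -alpha.
# The exponent alpha = 0.05 is an illustrative assumption, not a fitted value.
# It shows why each fixed (linear) improvement in loss demands
# multiplicatively - i.e. exponentially - more compute.

def loss(compute: float, alpha: float = 0.05) -> float:
    """Toy scaling curve: loss falls as compute ** -alpha."""
    return compute ** -alpha

def compute_for_loss(target: float, alpha: float = 0.05) -> float:
    """Invert the power law: compute needed to reach a given loss."""
    return target ** (-1.0 / alpha)

# Each step lowers the loss by the same additive amount (a "linear payoff")...
targets = [0.50, 0.45, 0.40, 0.35]
costs = [compute_for_loss(t) for t in targets]

# ...but the compute cost multiplies by a growing factor at every step.
ratios = [later / earlier for earlier, later in zip(costs, costs[1:])]
assert all(r2 > r1 for r1, r2 in zip(ratios, ratios[1:]))  # cost growth accelerates
```

Under this toy curve, each equal improvement in loss costs roughly an order of magnitude more compute than the last, which is the regime where cheaper GPUs can no longer keep pace.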
These companies are not interested in small-scale solutions, since that would lower the infrastructure bar for more researchers (from both industry and academia) to enter the game. They are not creating AI. They are creating a product that can be monopolized.
I think there are many more reviewers that do not use LLMs that shouldn't be reviewers than reviewers that use LLMs. Did that make sense? An irresponsible reviewer will be irresponsible with LLMs, without LLMs, here or on the moon.
The Ilya Sutskever episode with Dwarkesh Patel is now available
www.youtube.com/watch?v=aR20...
Thanks for sharing, Prof. Bengio. I have a concern with calling everything "AI". Most systems targeted for regulation are LLMs, but we still say "AI" this and "AI" that, even when it's not machine learning. It's like calling cars, airplanes, etc. all "vehicles" and trying to regulate them all at once.
I've taken courses on how to design effective active learning for university curricula, but none of them targeted engineering students. I've always struggled to come up with innovative methods for learning complex and cumulative engineering and math concepts (probability theory, electromagnetism, ...).
Am I the only one who feels like "the LLM whisperer" when vibe-coding?
To me, ACing became a game of "find the liar" as well.
Transfer learning and self-supervised learning have changed how we build neural architectures. Reusing representations from one task saves time and data, but tuning for a new problem is never plug-and-play. Always check what actually transfers. #ItzikThoughtLoop
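The "reuse representations, tune only for the new task" idea is often tested with a linear probe: freeze the pretrained extractor and train just a small head on top. A minimal sketch, with the big caveat that the "pretrained" extractor here is a stand-in (a fixed random projection), not a real network:

```python
# Hedged sketch of a linear probe: the feature extractor is frozen (never
# updated); only a small logistic-regression head is trained on the new task.
# The extractor is a toy stand-in (fixed random projection), an assumption
# for illustration - not a real pretrained model.
import math
import random

random.seed(0)

DIM_IN, DIM_FEAT = 4, 3
# "Pretrained" weights stay frozen: we never update them below.
W_frozen = [[random.uniform(-1, 1) for _ in range(DIM_IN)] for _ in range(DIM_FEAT)]

def features(x):
    """Frozen feature extractor: representations are reused, never retrained."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W_frozen]

# Tiny synthetic "new task" whose signal happens to live in the frozen
# features - i.e. a case where the representation actually transfers.
data = []
for _ in range(40):
    x = [random.uniform(-1, 1) for _ in range(DIM_IN)]
    y = 1.0 if features(x)[0] > 0 else 0.0
    data.append((x, y))

# Trainable linear head on top of the frozen features (SGD on logistic loss).
w_head, b_head, lr = [0.0] * DIM_FEAT, 0.0, 0.5
for _ in range(200):
    for x, y in data:
        f = features(x)
        z = sum(wi * fi for wi, fi in zip(w_head, f)) + b_head
        p = 1.0 / (1.0 + math.exp(-z))       # sigmoid
        g = p - y                             # logistic-loss gradient
        w_head = [wi - lr * g * fi for wi, fi in zip(w_head, f)]
        b_head -= lr * g

# If the frozen features carry the task's signal, the head alone fits it.
correct = sum(
    ((sum(wi * fi for wi, fi in zip(w_head, features(x))) + b_head) > 0) == (y == 1.0)
    for x, y in data
)
accuracy = correct / len(data)
```

The probe's accuracy is exactly the "check what actually transfers" test: when the frozen representation lacks the new task's signal, no amount of head training recovers it.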
Salmon is THE superfood.
Since it resonated with the audience, I'll recap my main argument against AGI here. "General intelligence" is like phlogiston, or the aether. It's an outmoded scientific concept that does not refer to anything real. Any explanatory work it did can be done better by a richer scientific frame. 1/3
How AI Taught Itself to See
Self-supervised learning is fascinating! How can AI learn from images alone, without labels?
In this video, we'll build the method from first principles and uncover the key ideas behind CLIP, MAE, SimCLR, and DINO (v1–v3).
Video link: youtu.be/oGTasd3cliM
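Of the methods named above, SimCLR's contrastive objective is perhaps the most compact answer to "how do you learn without labels": two augmented views of the same image are pulled together, views of different images pushed apart. A minimal sketch of its NT-Xent loss on hand-made toy embeddings (the vectors below are assumptions for illustration, not outputs of a real encoder):

```python
# Hedged sketch of the SimCLR-style NT-Xent (contrastive) loss on toy
# embeddings. No labels are used - only the knowledge of which two
# embeddings are views of the same image (the "positive pair").
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def nt_xent(view_a, view_b, temperature=0.5):
    """Average NT-Xent loss over positive pairs (view_a[i], view_b[i])."""
    z = view_a + view_b                       # all 2N embeddings in one batch
    n = len(view_a)
    losses = []
    for i in range(len(z)):
        pos = (i + n) % len(z)                # the other view of the same image
        sims = [math.exp(cosine(z[i], z[j]) / temperature)
                for j in range(len(z)) if j != i]
        pos_sim = math.exp(cosine(z[i], z[pos]) / temperature)
        losses.append(-math.log(pos_sim / sum(sims)))
    return sum(losses) / len(losses)

# Toy check: when paired views agree, the loss is lower than when they don't.
a = [[1.0, 0.0], [0.0, 1.0]]
good_b = [[0.9, 0.1], [0.1, 0.9]]   # views aligned with their partners
bad_b = [[0.0, 1.0], [1.0, 0.0]]    # views swapped: positives disagree
assert nt_xent(a, good_b) < nt_xent(a, bad_b)
```

The loss never sees a class label; the supervision signal is entirely the pairing of augmented views, which is the core self-supervised trick shared (in different forms) by SimCLR and DINO.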
Attitude matters.