
Antonio Tejero-de-Pablos

@toni-tiler

Research scientist in computer vision / Samurai / Rapper

461 Followers · 220 Following · 111 Posts · Joined 20.11.2024

Latest posts by Antonio Tejero-de-Pablos @toni-tiler

Yeah, it brings you what I call the good old cognitive clean-up (a.k.a. freeing up your RAM)

26.02.2026 12:51 👍 1 🔁 0 💬 1 📌 0

"Identity Theft as Systemic Risk" freakonometrics.hypotheses.org/88724

22.02.2026 08:16 👍 3 🔁 3 💬 0 📌 0

Come to think of it, it's not very nice that the size of a GitHub repository isn't generally mentioned in the README 🤔

19.02.2026 02:43 👍 0 🔁 0 💬 0 📌 0

Peer review needs to be fundamentally rethought to value real expertise, not just a count of eyeballs. So much of the value we’ve gotten in the past is very automatable!

16.02.2026 02:40 👍 14 🔁 3 💬 0 📌 0

We really need to think about slowing down for the sake of better science first.

I think that the flood of submissions (something like 10x more submissions) is not producing 10x more meaningful progress for the community.

09.02.2026 06:35 👍 3 🔁 1 💬 0 📌 0

The system is all broken, my friend. I also wonder why they are not implementing stuff already. LLM reviewers are kind of a thing now, but as you said, I don't think it addresses the root of the problem.

09.02.2026 01:35 👍 1 🔁 0 💬 1 📌 0

Learn to multiply, divide, etc. by hand before using a calculator. Historically it's always been the same. They'll have plenty of time to get productive with LLMs once they enter a company and their boss is like "why aren't you using AI?" LOL

05.02.2026 07:10 👍 2 🔁 0 💬 0 📌 0

Topic 1 sounds interesting enough!

03.02.2026 08:08 👍 1 🔁 0 💬 1 📌 0

1. Dissemination & awareness
2. Nurturing a community
3. Education in scientific praxis via working with peers

I don't see how that differs from an international conference. As seen in CVPR, ICLR, NeurIPS, etc., this "competition" mentality only leads to rushed manuscripts and BS reviews.

02.02.2026 01:30 👍 0 🔁 0 💬 1 📌 0

It was a domestic computer vision conference for which I had the chance to be AC. Given my previous experiences, I wanted to clarify the reviewers' role not as "rejecting bad papers" but "getting all papers in shape so they could be accepted to the conference".

30.01.2026 03:55 👍 1 🔁 1 💬 1 📌 0

The nice CVPR lottery. You get papers like this accepted and others rejected because they are not novel enough. What a circus.

28.01.2026 07:08 👍 2 🔁 0 💬 0 📌 0

Amen

20.01.2026 08:26 👍 0 🔁 0 💬 0 📌 0

I would put the burden on the reviewers to write actually useful and specific reviews, so authors can finally address what reviewers think is a flaw. Similarly, I would put the burden on the reviewers to accept/reject papers based on those specific points, for the sake of the authors' mental health

20.01.2026 08:23 👍 1 🔁 0 💬 0 📌 0

Bravo!

15.01.2026 13:57 👍 1 🔁 0 💬 0 📌 0

Can you do something about the governments that are introducing AI into their military forces?

15.01.2026 13:52 👍 0 🔁 0 💬 1 📌 0

The future of peer review?
When an LLM-written paper is reviewed by an LLM.

22.12.2025 10:57 👍 7 🔁 1 💬 0 📌 0
โ€œBut perhaps this can be resolved by the realization that while cleverness and intelligence are somewhat correlated traits for humans, they are much more decoupled for AI tools (which are often optimized for cleverness), and viewing the current generation of such tools primarily as a stochastic generator of sometimes clever - and often useful - thoughts and outputs may be a more productive perspective when trying to use them to solve difficult problems.โ€

โ€œBut perhaps this can be resolved by the realization that while cleverness and intelligence are somewhat correlated traits for humans, they are much more decoupled for AI tools (which are often optimized for cleverness), and viewing the current generation of such tools primarily as a stochastic generator of sometimes clever - and often useful - thoughts and outputs may be a more productive perspective when trying to use them to solve difficult problems.โ€

I doubt that anything resembling genuine AGI is within reach of current AI toolsโ€”Terence Tao

mathstodon.xyz/@tao/1157223...

22.12.2025 07:44 👍 91 🔁 12 💬 3 📌 4
Why Scaling Is Not Enough

I believe in scaling laws and I believe scaling will improve performance, and models like Gemini are clearly good models. The problem with scaling is this: for linear improvements, we previously had exponential growth in GPUs, which canceled out the exponential resource requirements of scaling. This is no longer true. In other words, previously we invested roughly linear costs to get linear payoff, but now it has turned into exponential costs.

Frontier AI Versus Economic Diffusion

The US and China follow two different approaches to AI. The US follows the idea that there will be one winner who takes it all: the one that builds superintelligence wins. Even falling short of superintelligence or AGI, if you have the best model, almost all people will use your model and not the competition’s model. The idea is: develop the biggest, baddest model and people will come.

Chinaโ€™s philosophy is different. They believe model capabilities do not matter as much...


โ€œWhy AGI Will Not Happenโ€ by Tim Dettmers.

timdettmers.com/2025/12/10/w...

This essay is worth reading. It discusses the diminishing returns (and risks) of scaling, and the contrast between West and East: the “winner takes all” approach of building the biggest thing vs. a long-term focus on practicality.

14.12.2025 03:04 👍 54 🔁 14 💬 4 📌 4
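The "linear payoff, exponential cost" argument in the excerpt above can be sketched with a toy power law. The functional form and the constants `a` and `alpha` below are illustrative assumptions, not values taken from the essay:

```python
# Toy sketch of "linear payoff, exponential cost" under an assumed
# power-law scaling curve: loss(C) = a * C**(-alpha).
# The constants a and alpha are illustrative, not fitted values.

def compute_for_loss(target_loss, a=10.0, alpha=0.05):
    """Invert loss = a * C**(-alpha) to get the compute C required."""
    return (a / target_loss) ** (1.0 / alpha)

# Equal *linear* improvements in loss...
losses = [3.0, 2.9, 2.8, 2.7]
costs = [compute_for_loss(l) for l in losses]

# ...each cost roughly the same multiplicative factor more compute,
# i.e. the required compute grows exponentially.
ratios = [costs[i + 1] / costs[i] for i in range(len(costs) - 1)]
for l, c in zip(losses, costs):
    print(f"loss {l:.1f} -> compute {c:.3e}")
```

Under these assumed constants, each 0.1 reduction in loss costs roughly a factor of two more compute, which is the sense in which linear payoff now comes at exponential cost.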

These companies are not interested in small-scale solutions, since that would lower the infrastructure bar for more researchers (from both industry and academia) to enter the game. They are not creating AI. They are creating a product that can be monopolized.

14.12.2025 06:26 👍 2 🔁 0 💬 0 📌 0

I think there are many more reviewers who don't use LLMs and shouldn't be reviewers than reviewers who use LLMs. Did that make sense? An irresponsible reviewer will be irresponsible with LLMs, without LLMs, here or on the moon.

12.12.2025 07:38 👍 0 🔁 0 💬 0 📌 0

The Ilya Sutskever episode with Dwarkesh Patel is now available
www.youtube.com/watch?v=aR20...

25.11.2025 19:45 👍 16 🔁 6 💬 0 📌 0

Thanks for sharing, Prof. Bengio. I have a concern with calling everything "AI". Most systems targeted for regulation are LLMs, but we still say "AI" this and "AI" that, even when it's not machine learning. It's like calling cars, airplanes, etc. "vehicles" and trying to regulate them all at once

25.11.2025 13:15 👍 1 🔁 0 💬 1 📌 0

I've taken courses on how to design effective active learning for university curricula, but none of them targeted engineering students. I've always struggled to come up with innovative methods for learning complex and cumulative engineering and math concepts (probability theory, electromagnetism,..)

25.11.2025 01:58 👍 3 🔁 0 💬 1 📌 0

Am I the only one who, when vibe-coding, feels like "the LLM whisperer"?

17.11.2025 08:42 👍 0 🔁 0 💬 0 📌 0

To me ACing became a game of "find the liar" as well 😮‍💨

12.11.2025 01:24 👍 0 🔁 0 💬 0 📌 0

Transfer learning and self-supervised learning have changed how we build neural architectures. Reusing representations from one task saves time and data, but tuning for a new problem is never plug-and-play. Always check what actually transfers. ๐Ÿค” #ItzikThoughtLoop

30.10.2025 02:29 👍 1 🔁 1 💬 0 📌 0
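The "check what actually transfers" advice above can be illustrated with a minimal linear probe on frozen features. Everything here (the stand-in "pretrained" backbone, the toy task, the hand-rolled training loop) is an illustrative assumption, not any particular library's API:

```python
import math
import random

random.seed(0)

def backbone(x0, x1):
    """Frozen 'pretrained' feature extractor (illustrative stand-in)."""
    return (x0 + x1, x0 - x1)

# New downstream task: label is 1 iff x0 + x1 > 0, so only the
# first frozen feature is actually relevant to this task.
data = []
for _ in range(200):
    x0, x1 = random.uniform(-1, 1), random.uniform(-1, 1)
    data.append((backbone(x0, x1), 1.0 if x0 + x1 > 0 else 0.0))

# Linear probe: logistic regression trained on the frozen features only.
w, b, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(100):
    for (f0, f1), y in data:
        p = 1.0 / (1.0 + math.exp(-(w[0] * f0 + w[1] * f1 + b)))
        g = p - y  # gradient of the log-loss w.r.t. the logit
        w[0] -= lr * g * f0
        w[1] -= lr * g * f1
        b -= lr * g

acc = sum(((w[0] * f0 + w[1] * f1 + b) > 0) == (y > 0.5)
          for (f0, f1), y in data) / len(data)
# The probe's weight concentrates on feature 0: the one that transferred.
```

Inspecting the probe weights (here `w[0]` dominates `w[1]`) is a cheap way to see which pretrained features carry signal for the new task before committing to full fine-tuning.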

Salmon is THE superfood 👍

07.10.2025 01:47 👍 1 🔁 0 💬 0 📌 0

Since it resonated with the audience, Iโ€™ll recap my main argument against AGI here. โ€˜General intelligenceโ€™ is like phlogiston, or the aether. Itโ€™s an outmoded scientific concept that does not refer to anything real. Any explanatory work it did can be done better by a richer scientific frame. 1/3

02.10.2025 22:09 👍 642 🔁 158 💬 15 📌 11
How AI Taught Itself to See [DINOv3] (YouTube video by Jia-Bin Huang)

How AI Taught Itself to See

Self-supervised learning is fascinating! How can AI learn from images only without labels?

In this video, weโ€™ll build the method from first principles and uncover the key ideas behind CLIP, MAE, SimCLR, and DINO (v1โ€“v3).

Video link: youtu.be/oGTasd3cliM

16.09.2025 23:13 👍 12 🔁 3 💬 0 📌 0
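One of the key ideas behind the methods named in the post above (SimCLR in particular) can be sketched from first principles: a contrastive InfoNCE-style loss that pulls two views of the same image together and pushes other images away. The toy embeddings and temperature below are illustrative assumptions, not values from the video:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """-log softmax score of the positive pair among all candidates."""
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    logits = [s / temperature for s in sims]
    m = max(logits)  # subtract the max to stabilise the softmax
    denom = sum(math.exp(l - m) for l in logits)
    return -(logits[0] - m - math.log(denom))

# Two augmented "views" of the same image embed nearby;
# a different image embeds far away (toy 2-D embeddings).
view_a, view_b = [1.0, 0.1], [0.9, 0.2]
other = [-1.0, 0.3]

aligned_loss = info_nce(view_a, view_b, [other])
mismatched_loss = info_nce(view_a, other, [view_b])
# Aligned pairs yield a much lower loss: minimising it is what teaches
# the encoder to produce label-free, view-invariant representations.
```

No labels appear anywhere: the supervision signal comes entirely from knowing which pairs are views of the same image, which is the core trick the video builds up to.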

Attitude matters.

03.09.2025 07:23 👍 0 🔁 0 💬 0 📌 0