
Johannes Theodoridis

@johannestheo

PhD student in ML @uni_tue & @hdm_stg. Interested in robust vision and object-centric learning πŸŒŒπŸ”΄πŸ”ΆπŸ’™πŸŸ©

33 Followers Β· 209 Following Β· 15 Posts Β· Joined 29.05.2025

Latest posts by Johannes Theodoridis @johannestheo

[Post image]

Something like this maybe? From: Global and Efficient Self-Similarity for Object Classification and Detection - 2010

05.02.2026 12:24 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

So they went with it even though it reduces the performance?

27.01.2026 12:39 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

From my experience, benchmarks are less risky. I tried a method paper but was humbled after finding out that I was working on "the wrong end" of the model. That said, I know of multiple benchmark papers with serious errors. Then again, method papers do many "tricks" just for SOTA and are not comparable πŸ˜…

05.01.2026 12:00 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

That's the article I just read.

03.01.2026 18:02 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

I was wondering the same, and according to a WP article, it refers to nationalizing private businesses and foreign-owned assets (e.g. ExxonMobil) in 1976 and 2007.

03.01.2026 17:59 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Feels like papers where they claim "without additional training data we achieve x", and then you look and they just use an insanely huge foundation model, so you can't even tell for sure whether the new method is actually contributing to the result.

14.11.2025 07:25 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Looks very interesting :) Thanks for the pointers. Are you, by any chance, aware of such works for vision? I'm writing a paper at the moment, and this kind of research would fit perfectly.

26.09.2025 15:37 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Yes πŸ˜… We also observed that fewer students come to voluntary exercises and then fail the exam. And I recall a tweet about the MIT databases course where the lecturer mentioned that he had never had so few questions yet so many students struggling in exams. Asking a human is prob still better than a bot.

17.09.2025 18:12 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

I did this a few semesters ago with our undergrad data science course. The thingy successfully downloaded PokΓ©mon data from an API, transformed dicts and images, made plots, and applied PCA, K-Means, and DBSCAN without issues πŸ˜… The assignment had many detailed instructions but I was still impressed.

17.09.2025 17:36 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
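The assignment pipeline described above (fetch data, transform it, reduce dimensionality, then cluster) can be sketched roughly as follows. This is a hypothetical reconstruction, not the actual course material: it uses synthetic blobs in place of the PokΓ©mon API data and standard scikit-learn estimators for PCA, K-Means, and DBSCAN.

```python
# Minimal sketch of the assignment's analysis steps, assuming scikit-learn.
# Synthetic data stands in for the records fetched from the (real) API.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans, DBSCAN

rng = np.random.default_rng(0)
# Two well-separated groups of 150 samples with 8 numeric features each,
# mimicking a cleaned feature table built from API responses.
X = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(150, 8)),
    rng.normal(loc=5.0, scale=1.0, size=(150, 8)),
])

# Standardize, then project to 2D with PCA (as one would for plotting).
X_scaled = StandardScaler().fit_transform(X)
X_2d = PCA(n_components=2).fit_transform(X_scaled)

# Cluster the 2D embedding with both K-Means and DBSCAN.
kmeans_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_2d)
dbscan_labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(X_2d)

print("K-Means clusters found:", len(set(kmeans_labels)))
```

The plotting step from the post is omitted; in practice one would scatter-plot `X_2d` colored by the cluster labels.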
[Post image]

We also have free A100s in public libraries πŸ˜„

03.09.2025 08:14 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

πŸ˜‚

03.09.2025 07:59 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Was thinking the same...

07.08.2025 07:18 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Ah yes. Haven't thought about that, but it would be so relatable. I remember one instance where I observed this pattern. Convincing people that a "small update" is enough was surprisingly hard.

05.07.2025 14:05 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Do you think this "creep" is simply overcompensation, or is there more to it? Like an idealistic but unrealistic goal? In computer vision, averaged runs would be a great improvement already. But getting to actual statistical significance is much harder, and I'm not sure if it's worth it.

05.07.2025 11:43 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Artificial Intelligenceβ€”The Revolution Hasn’t Happened Yet

I remember well when I first read the "AI Revolution hasn't happened yet" article by Michael Jordan. Since then, I have always preferred his "Intelligence Augmentation" view over the "human-imitative AI" approach, as he called it.

hdsr.mitpress.mit.edu/pub/wot7mkc1...

14.06.2025 19:24 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0