Something like this maybe? From: Global and Efficient Self-Similarity for Object Classification and Detection - 2010
So they went with it even though it reduces the performance?
From my experience, benchmarks are less risky. I tried a method paper myself but was humbled after finding out that I was working on "the wrong end" of the model. That said, I know of multiple benchmark papers with serious errors. Then again, method papers do many "tricks" just for SOTA and are not comparable.
That's the article I just read.
I was wondering the same, and according to a WP article, it refers to nationalizing private businesses and foreign-owned assets (e.g. ExxonMobil) in 1976 and 2007.
Feels like papers where they claim "without additional training data we achieve x", and then you look and they just use an insanely huge foundation model, so you can't even know for sure whether the new method is actually contributing to solving the problem.
Looks very interesting :) Thanks for the pointers. Are you, by any chance, aware of such works for vision? I'm writing a paper at the moment, and this kind of research would fit perfectly.
Yes. We also observed that fewer students come to the voluntary exercises and then fail the exam. And I recall a tweet about the MIT databases course where the lecturer mentioned that he'd never had so few questions but so many students struggling in exams. Asking a human is probably still better than asking a bot.
I did this a few semesters ago with our undergrad data science course. The thingy successfully downloaded Pokémon data from an API, transformed dicts and images, made plots, and applied PCA, K-Means, and DBSCAN without issues. The assignment had many detailed instructions, but I was still impressed.
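For anyone curious, the clustering part of that assignment boils down to something like this. It's only a toy sketch: synthetic blobs stand in for the actual Pokémon API download (so it runs offline), and the DBSCAN parameters are just guesses, not the assignment's values:

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans, DBSCAN

# Stand-in for the "stats per Pokémon" feature table
# (8 numeric features, 3 latent groups, fixed seed).
X, _ = make_blobs(n_samples=200, centers=3, n_features=8, random_state=0)

# Reduce to 2D, as one would for plotting.
X2 = PCA(n_components=2).fit_transform(X)

# Cluster with both methods and compare what they find.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X2)
db = DBSCAN(eps=1.5, min_samples=5).fit_predict(X2)  # -1 marks noise

print("k-means clusters:", len(set(km)))
print("dbscan clusters (excl. noise):", len(set(db) - {-1}))
```

The interesting teaching moment is usually that K-Means always returns exactly the requested number of clusters, while DBSCAN decides the count itself from density.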
We also have free A100s in public libraries.
Was thinking the same...
Ah yes. Hadn't thought about that, but it would be so relatable. I remember one instance where I observed this pattern. Convincing people that a "small update" is enough was surprisingly hard.
Do you think this "creep" is simply overcompensation, or is there more to it? Like an idealistic but unrealistic goal? In computer vision, averaged runs would be a great improvement already. But getting to actual statistical significance is much harder, and I'm not sure if it's worth it.
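To make the two levels concrete, here's a stdlib-only toy (all accuracy numbers made up): first the "averaged runs" improvement, then a paired sign-flip permutation test for actual significance:

```python
import random
import statistics

# Hypothetical accuracies of two vision models over 5 training seeds
# (illustrative numbers, not from any real experiment).
baseline   = [71.2, 70.8, 71.5, 70.9, 71.1]
new_method = [71.9, 71.4, 72.1, 71.6, 71.8]

# Level 1: report the mean over seeds instead of one cherry-picked run.
mean_gap = statistics.mean(new_method) - statistics.mean(baseline)

# Level 2: paired permutation test. Under H0 the sign of each
# per-seed difference is arbitrary, so flip signs at random and see
# how often the mean gap is at least as extreme as observed.
diffs = [n - b for n, b in zip(new_method, baseline)]
observed = statistics.mean(diffs)

rng = random.Random(0)
n_perm = 10_000
extreme = sum(
    abs(statistics.mean([d * rng.choice((-1, 1)) for d in diffs]))
    >= abs(observed)
    for _ in range(n_perm)
)
p_value = extreme / n_perm

print(f"mean gap: {mean_gap:.2f}, p = {p_value:.3f}")
```

With only five seeds, this test can't produce a two-sided p below 2/32 = 0.0625 no matter how consistent the gap is, which is kind of the point: averaging is cheap, significance is not.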
I remember well when I first read the "AI Revolution hasn't happened yet" article by Michael Jordan. Since then, I've always preferred his "Intelligence Augmentation" view over the "human-imitative AI" approach, as he called it.
hdsr.mitpress.mit.edu/pub/wot7mkc1...