Good points, thanks Ken!
I guess I don't need to change my lecture slides?
RIP redundancy reduction?
Beautiful work by Liu & colleagues showing that neural redundancy increases with learning, as predicted by a Bayesian model:
www.science.org/doi/10.1126/...
Elmore Leonard (a fantastic writer whose novels I almost always finish) was once asked about the secret of his success, and his reply was "I leave out the parts that readers tend to skip."
Clearly the author thinks you need more to "get it" than you think you do. You make an inference about what's important and everything else gets ignored.
But more seriously, I think there's an interesting psychological question here. Why do we stop reading? I think it often (at least for me) has a lot to do with the feeling that we've "gotten it" and that the rest of the book feels somehow unnecessary.
I loved the first two sections of this blog post!
Yes, I agree. Although there are other potential issues like deference of junior reviewers to more senior reviewers (would love to see this quantified... maybe it's a non-issue).
I completely agree though about the need for accountability. I think this is the role of editors / metareviewers, who know the identities of both the authors and reviewers. Right now there isn't a global enforcement mechanism that imposes reputational consequences on bad behavior.
This is worth considering, but I'm skeptical. Nobody likes being criticized, and authors are likely to take it out on the criticizer. I expect that this will inflate the positivity of reviews. The "prestige" of being listed as a reviewer is currently so minimal that it's not worth the danger.
Same
Good article on consequences of low peer review participation.
Given that most papers get 2-3 reviews, all scientists should review at least twice as many papers as they submit.
Postdoc Alert! Are you passionate about social learning & cultural evolution? @dominikdeffner.bsky.social & I have a 3-year position with freedom to develop your research and work on cutting-edge multiplayer and immersive experiments. Apply by March 30! hmc-lab.com/SocialLearni... Pls share!
@ucsandiego.bsky.social once again leading the way on high-quality teaching faculty positions. We should all follow!
Yes, I think so!
We really wanted to do that, but it's not possible. Re-extension after contraction takes ~45s, so you would get lots of interference between responses to the weak and strong taps. We might be able to do this with other preps (e.g., voltage measurements in an immobilized cell).
It's known (from earlier work by David Wood) that mechanosensory habituation is mediated by some change at the mechanoreceptor which reduces stimulation-induced potentials. There are several ways this could happen. Currently unknown if this mediates associative learning.
Many thanks to Sandy Doan for leading this project, along with Austen Theroux and Tejas Ramdas. They put a huge amount of work into this, collecting data from thousands of cells.
After several years of work, my lab is starting to put out our first papers on learning in a unicellular organism (Stentor coeruleus).
Here we show evidence for a form of associative learning in Stentor:
www.biorxiv.org/content/10.6...
It's interesting to me how people in these situations fail to fully appreciate what the future equilibrium will feel like. I think many collective action failures like this arise from prospection failures, or perhaps just myopia (students won't stick around to experience the consequences).
Lots of good points have already been made about using AI agents for cheating (e.g., the latest Canvas bot): it degrades learning, etc.
One additional thing I'd like to point out: if you use this stuff, you're not being clever, you're just an asshole.
to explain:
Glad to see that at least some organizations are upholding standards.
www.politico.eu/article/isis...
If we really did move to the new equilibrium, you'd see: (i) fewer college students overall; (ii) less emphasis on assessment for the remaining students; (iii) more alternatives to liberal arts education (e.g., specialized training programs).
I don't think we're moving toward that new equilibrium. Rather, we're trying to stay in the current equilibrium (trying to force students to learn stuff they'd rather not learn), but it's increasingly unstable and unsatisfactory.
I liked cat-and-mouse games better when the mouse wasn't a supercomputer supported by infinite venture capital.
That's basically what's happening now. My "optimism" refers to achieving a different equilibrium where everyone gets what they want (students don't need to learn if they don't want to, profs get to focus on the students who want to learn, and those students benefit as well).
In this scenario, far fewer people go to college. I don't think that's necessarily a bad thing. The current situation invites us to imagine what it would be like to unbundle aspects of college that aren't intrinsically linked.
One possible end point is that college education simply ceases to be meaningful for most people, and we find other ways to provide the social and professional services that were previously bundled with education. The remaining students are the ones who really want to learn.
In my view, AI tools for automating student work are the natural apotheosis of trends that were already there. The difference is that now it's increasingly hard, perhaps impossible, to implement countervailing incentives and constraints.
As a professor, I now do that work. But it's hard not to wonder: why? If people don't want to learn, why make them? For many undergrads, I think a lot of this is basically performative, a step on the road to greater things.
I have a contrarian but optimistic take on the unfolding AI apocalypse in higher education.
Since I was an undergrad, I could see that most students (including at Ivy League schools) were not really all that interested in learning. A lot of work was required to incentivize them.