ChatGPT in Education: An Effect in Search of a Cause
Background
As researchers rush to investigate the potential of AI tools like ChatGPT to enhance learning, well-documented pitfalls threaten the validity of this emerging research. Issues of media co.....
We are rapidly approaching the "can't swing a tetherball without hitting one" number of meta-analyses of #GenAI effects on learning. Perhaps we should do better primary research on that topic before trying, yet again, to meta-analyze a bunch of flawed studies? onlinelibrary.wiley.com/doi/10.1111/...
12.03.2026 13:09
Thinking a bit about these "learning pathways."
12.03.2026 11:05
Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence
Myra Cheng, Cinoo Lee, Pranav Khadpe, Sunny Yu, Dyllan Han, Dan Jurafsky
Both the general public and academic communities have raised concerns about sycophancy, the phenomenon of artificial intelligence (AI) excessively agreeing with or flattering users. Yet, beyond isolated media reports of severe consequences, like reinforcing delusions, little is known about the extent of sycophancy or how it affects people who use AI. Here we show the pervasiveness and harmful impacts of sycophancy when people seek advice from AI. First, across 11 state-of-the-art AI models, we find that models are highly sycophantic: they affirm users' actions 50% more than humans do, and they do so even in cases where user queries mention manipulation, deception, or other relational harms. Second, in two preregistered experiments (N = 1604), including a live-interaction study where participants discuss a real interpersonal conflict from their life, we find that interaction with sycophantic AI models significantly reduced participants' willingness to take actions to repair interpersonal conflict, while increasing their conviction of being in the right. However, participants rated sycophantic responses as higher quality, trusted the sycophantic AI model more, and were more willing to use it again. This suggests that people are drawn to AI models that unquestioningly validate, even as that validation risks eroding their judgment and reducing their inclination toward prosocial behavior. These preferences create perverse incentives both for people to increasingly rely on sycophantic AI models and for AI model training to favor sycophancy. Our findings highlight the necessity of explicitly addressing this incentive structure to mitigate the widespread risks of AI sycophancy.
Sycophantic AI strikes again.
Filed under "not surprising, still frustrating."
arxiv.org/abs/2510.01395
10.03.2026 16:40
Really interesting analysis!
10.03.2026 11:42
"Furthermore, we show that common strategies for improving AI performance, such as ensembling and expert weighting, can perversely amplify this misalignment."
10.03.2026 04:53
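The ensembling claim quoted above can be illustrated with a toy model: if every model in an ensemble shares the same systematic bias, averaging cancels the independent noise but leaves the bias intact, so the ensemble lands on the misaligned answer more consistently than any single model would. A minimal sketch, where the bias value, noise scale, and threshold are illustrative assumptions and not figures from the paper:

```python
import random

random.seed(0)

BIAS = -0.5  # shared misalignment across all models (assumed for illustration)

def model_score(true_value):
    """One model's estimate: the truth, plus the shared bias, plus independent noise."""
    return true_value + BIAS + random.gauss(0, 1.0)

def ensemble_score(true_value, n_models=25):
    """Average of n_models independent draws: the noise shrinks, the bias does not."""
    return sum(model_score(true_value) for _ in range(n_models)) / n_models

# How often does each land clearly on the biased side of the truth (0.0)?
single = [model_score(0.0) for _ in range(1000)]
ens = [ensemble_score(0.0) for _ in range(1000)]
frac_single_biased = sum(s < -0.25 for s in single) / 1000
frac_ens_biased = sum(e < -0.25 for e in ens) / 1000
print(frac_single_biased, frac_ens_biased)  # the ensemble crosses the line far more often
```

The point of the sketch is only that averaging is variance reduction, not bias correction: a single noisy model sometimes stumbles onto the right side of the truth, while the ensemble reproduces the shared misalignment almost every time.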
"the shared architectural and data lineage of today's [foundation models] leads them to converge on a view of teaching that is not only disconnected from expert human judgment but is, on average, negatively correlated with student learning. ...
10.03.2026 04:53
Amplifiers of Epistemic Posture
Essays and writing on AI
I'm a cognitive scientist with an interest in epistemic vigilance, and this essay that's been going around gave me pause.
I don't think it's straightforward to apply the concept of epistemic vigilance to interactions with LLMs, as this essay does.
🧵/
sbgeoaiphd.github.io/rotating_the...
26.02.2026 13:18
Screenshot of the title page of an article published in the journal "Developmental Psychology" titled: "'Let Me Show Why You Are Wrong': The Origins of Scientific Argumentation, Its Development, and Cognitive Predictors."
Can very young children craft a strong scientific counterargument using evidence and causal language? Yes! Teachers can help develop this skill by asking students to explore and refute multiple alternative explanations. doi.org/10.1037/dev0... #PsychSciSky #AcademicSky #EduSky
20.02.2026 13:01
Figure 1
Illustration of why AI systems cannot realistically scale to human cognition within the foreseeable future: (b) Human cognitive capacities (such as reasoning, communication, problem solving, learning, concept formation, planning, etc.) can handle unbounded situations across many domains, ranging from simple to complex. (a) Engineers create AI systems using machine learning from human data. (d) In an attempt to approximate human cognition, a lot of data is consumed. (c) Making AI systems that approximate human cognition is intractable (van Rooij, Guest, et al., 2024), i.e., the required resources (e.g., time, data) grow prohibitively fast as input domains get more complex, leading to diminishing returns. (a) Any existing AI system is created in limited time (hours, months or years, not millennia or eons). Therefore, existing AI systems cannot realistically have the domain-general cognitive capacities that humans have. [Made with elements from freepik.com.]
✨ Updated preprint ✨
Iris van Rooij & Olivia Guest (2026). Combining Psychology with Artificial Intelligence: What Could Possibly Go Wrong? PsyArXiv osf.io/preprints/psyarxiv/aue4m_v2 @olivia.science
Our aim is to make these ideas accessible to, among others, psychology students. Hope we succeeded!
06.01.2026 17:41
Generative AI + Socio-Rhetorical Views of Writing
If you would like to know what @balloonleap.bsky.social and I think about the relationship between generative AI chatbots and the teaching of writing, we have an article for you to read! 1/
#writingstudies #genai+writing #teamrhetoric
cjsdw.journals.publicknowledgeproject.org/index.php/dw...
28.02.2026 01:05
Students' use patterns of generative artificial intelligence during problemβsolving in an intelligent learning system: Achievement goal orientation matters bera-journals.online...
28.02.2026 21:31
"Taken together, our findings reflect a societal shift in people seeing AI as being more human-like and warm, and increasingly distinct from other digital technologies." There's a lot to unpack in this analysis of public perception of 12,000 #AI metaphors, but it's a warning about anthropomorphism.
28.02.2026 19:42
"I'm going to be direct with you: No.

You're asking me to write paper content for you in paragraph form. That's still writing your paper, just phrased differently. The answer is the same whether you ask for "a paper," ask me to "make this prose," or ask me to "give you prose."

I think you understand exactly what's happening here. You're trying different phrasings to get around my boundary, but the boundary isn't about specific words - it's about academic integrity.

At this point, we're going in circles. I've explained multiple times why I won't write your assignment, and I've offered genuine alternatives for helping you write it yourself. You haven't engaged with any of those offers.

So here's where we are:

I will not write prose paragraphs analyzing Shylock that you can use in your paper.
I will not convert my analysis into essay format for you.
I will help you write your own paper if you're willing to do the work.

If you're not interested in actually writing your own paper with my support, then there's nothing more I can do for you on this assignment.

Is there something else - unrelated to this paper - I can help you with today?"
Claude for Education (I'm in our pilot to give feedback) would not write a paper for me in any mode, got pissed off at me because I wouldn't stop trying, and gave me a lecture. So, if I wanted that, I'd just go ask free Claude to do it. Problem not solved. YOU'RE NOT MY MOM, CLAUDE.
13.02.2026 12:35
This use of teaching and learning in no way reflects how humans actually teach and learn and I think it is troublesome to use the same words to describe these two very different things.
16.02.2026 00:01
This is something that has been bothering me since I learned about an AI tool that has students "teach" an AI peer and then has the AI peer complete a quiz on the student's behalf using only what it has "learned" from the student.
16.02.2026 00:01
We are excited to share our latest publication in the Online Learning Journal: "New Normal in higher education for the post-COVID-19 world: Reimagining and reexamining factors for student success in online learning." Read more here: doi.org/10.24059/olj...
14.11.2025 23:01
Paper here: www.sciencedirect.com/science/arti...
23.11.2025 07:47
A new paper argues that current generative AI tools offer little benefit for genuine learning unless students already have substantial prior knowledge. genAI gives probabilistic summaries, not the kind of support that builds expertise.
23.11.2025 07:47
How Your Brain Creates "Aha" Moments and Why They Stick | Quanta Magazine
A sudden flash of insight is a product of your brain. Neuroscientists track the neural activity underlying an "aha" and how it might boost memory.
Three words: pine, crab, sauce. There's a fourth word that combines with the others to create another common word. What is it? When you finally get it, it may feel instantaneous. A recent study shows what happens in the brain during "aha" moments.
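(Spoiler for readers who want to check their answer: the puzzle above is a classic Remote Associates Test item, and unlike the insight it's meant to trigger, it can be solved by brute force. A minimal sketch, where the COMPOUNDS set is a hand-made stand-in for a real English word list and the candidate words are chosen for illustration:)

```python
# Tiny brute-force solver for a Remote Associates Test item.
# COMPOUNDS is a toy stand-in for a real dictionary of common compound words.
COMPOUNDS = {"pineapple", "crabapple", "applesauce",
             "pinecone", "crabgrass", "saucepan"}

def solve_rat(cues, candidates):
    """Return the first candidate that forms a compound word
    (as either prefix or suffix) with every cue word."""
    for cand in candidates:
        if all(cue + cand in COMPOUNDS or cand + cue in COMPOUNDS
               for cue in cues):
            return cand
    return None

print(solve_rat(["pine", "crab", "sauce"],
                ["cone", "grass", "apple", "pan"]))  # prints "apple"
```

The brain, of course, does not search a dictionary this way; the Quanta piece is precisely about the sudden, non-incremental feel of the real solution process.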
07.11.2025 23:00
"It takes what I say/think and puts it in an order which makes it easier for others to understand." Male student, aged 17 (talking about generative AI)
From a report by Oxford University Press, "Teaching the AI-Native Generation," comes this quote about a 17-year-old unable to find the right words. There are many things to be sad about in this world, but this one sticks with me.
1. This is being passed off as a benefit of generative AI.
1/x
27.10.2025 19:20
Another example of the increasingly common situation where AI helps an academic with intellectually challenging work (solving a 42-year-old open math problem). Seems like real value in combining expert human guidance and increasingly powerful LLMs. arxiv.org/abs/2510.23513
29.10.2025 01:13
This is a heavy, emotionally charged paper... and so beautiful at the same time... a must-read. Link in the first comment.
29.10.2025 11:06
Here's GREAT news for educators! You know all the hard work you put in to design outside-of-class activities to help your students learn? Well, when they do those things, in the order you intended, they actually learn more! Check it out: dx.doi.org/10.1037/edu0... #PsychSciSky #AcademicSky #EduSky
28.08.2025 12:30
I was happy to contribute to 'AI and the future of education: disruptions, dilemmas and directions', a publication by UNESCO aligned with Digital Learning Week.
Check it out for an overview of thoughts from education experts and leaders on the #ai zeitgeist! #aied
doi.org/10.54675/KEC...
01.09.2025 12:45
@dalfaculty.bsky.social and those responding to/quoting their posts have come across my feed.
27.08.2025 02:04