
Stephanie Boragina

@sboragina

🇨🇦 PhD student and instructional designer of online courses. Studying math education at the postsecondary level. Currently examining the growth of student mathematical understanding when taking asynchronous online math classes. #MathEd #OnlineEd

711
Followers
1,200
Following
83
Posts
17.09.2024
Joined

Latest posts by Stephanie Boragina @sboragina

ChatGPT in Education: An Effect in Search of a Cause Background As researchers rush to investigate the potential of AI tools like ChatGPT to enhance learning, well-documented pitfalls threaten the validity of this emerging research. Issues of media co.....

We are rapidly approaching the "can't swing a tetherball without hitting one" number of meta-analyses of #GenAI effects on learning. Perhaps we should do better primary research on that topic before trying, yet again, to meta-analyze a bunch of flawed studies? onlinelibrary.wiley.com/doi/10.1111/...

12.03.2026 13:09 👍 11 🔁 1 💬 1 📌 0

Thinking a bit about these "learning pathways."

12.03.2026 11:05 👍 2 🔁 0 💬 0 📌 0
Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence
Myra Cheng, Cinoo Lee, Pranav Khadpe, Sunny Yu, Dyllan Han, Dan Jurafsky
Both the general public and academic communities have raised concerns about sycophancy, the phenomenon of artificial intelligence (AI) excessively agreeing with or flattering users. Yet, beyond isolated media reports of severe consequences, like reinforcing delusions, little is known about the extent of sycophancy or how it affects people who use AI. Here we show the pervasiveness and harmful impacts of sycophancy when people seek advice from AI. First, across 11 state-of-the-art AI models, we find that models are highly sycophantic: they affirm users' actions 50% more than humans do, and they do so even in cases where user queries mention manipulation, deception, or other relational harms. Second, in two preregistered experiments (N = 1604), including a live-interaction study where participants discuss a real interpersonal conflict from their life, we find that interaction with sycophantic AI models significantly reduced participants' willingness to take actions to repair interpersonal conflict, while increasing their conviction of being in the right. However, participants rated sycophantic responses as higher quality, trusted the sycophantic AI model more, and were more willing to use it again. This suggests that people are drawn to AI that unquestioningly validate, even as that validation risks eroding their judgment and reducing their inclination toward prosocial behavior. These preferences create perverse incentives both for people to increasingly rely on sycophantic AI models and for AI model training to favor sycophancy. Our findings highlight the necessity of explicitly addressing this incentive structure to mitigate the widespread risks of AI sycophancy.


Sycophantic AI strikes again.
Filed under "not surprising, still frustrating."
arxiv.org/abs/2510.01395

10.03.2026 16:40 👍 17 🔁 9 💬 1 📌 1

Really interesting analysis!

10.03.2026 11:42 👍 0 🔁 0 💬 0 📌 0

Furthermore, we show that common strategies for improving AI performance, such as ensembling and expert weighting, can perversely amplify this misalignment."

10.03.2026 04:53 👍 1 🔁 0 💬 0 📌 0

"the shared architectural and data lineage of today's [foundation models] leads them to converge on a view of teaching that is not only disconnected from expert human judgment but is, on average, negatively correlated with student learning. ...

10.03.2026 04:53 👍 0 🔁 0 💬 1 📌 0
Amplifiers of Epistemic Posture Essays and writing on AI

I'm a cognitive scientist with an interest in epistemic vigilance, and this essay that's been going around gave me pause.

I don't think it's straightforward to apply the concept of epistemic vigilance to interactions with LLMs, as this essay does.

🧵/

sbgeoaiphd.github.io/rotating_the...

26.02.2026 13:18 👍 292 🔁 121 💬 8 📌 33
Screenshot of the title page of an article published in the journal "Developmental Psychology" titled: "“Let Me Show Why You Are Wrong”: The Origins of Scientific Argumentation, Its Development, and Cognitive Predictors."


Can very young children craft a strong scientific counterargument using evidence and causal language? Yes! Teachers can help develop this skill by asking students to explore and refute multiple alternative explanations. doi.org/10.1037/dev0... #PsychSciSky #AcademicSky #EduSky

20.02.2026 13:01 👍 29 🔁 13 💬 0 📌 0
Figure 1. Illustration of why AI systems cannot realistically scale to human cognition within the foreseeable future: (b) Human cognitive capacities (such as reasoning, communication, problem solving, learning, concept formation, planning, etc.) can handle unbounded situations across many domains, ranging from simple to complex. (a) Engineers create AI systems using machine learning from human data. (d) In an attempt to approximate human cognition, a lot of data is consumed. (c) Making AI systems that approximate human cognition is intractable (van Rooij, Guest, et al., 2024), i.e., the required resources (e.g., time, data) grow prohibitively fast as input domains get more complex, leading to diminishing returns. (a) Any existing AI system is created in limited time (hours, months, or years, not millennia or eons). Therefore, existing AI systems cannot realistically have the domain-general cognitive capacities that humans have. [Made with elements from freepik.com.]


✨ Updated preprint ✨

Iris van Rooij & Olivia Guest (2026). Combining Psychology with Artificial Intelligence: What Could Possibly Go Wrong? PsyArXiv osf.io/preprints/psyarxiv/aue4m_v2 @olivia.science

Our aim is to make these ideas accessible for, among others, psych students. Hope we succeeded 🙂

06.01.2026 17:41 👍 161 🔁 68 💬 6 📌 11
View of Generative AI + Socio-Rhetorical Views of Writing

If you would like to know what @balloonleap.bsky.social and I think about the relationship between generative AI chatbots and the teaching of writing, we have an article for you to read! 1/

#writingstudies #genai+writing #teamrhetoric

cjsdw.journals.publicknowledgeproject.org/index.php/dw...

28.02.2026 01:05 👍 23 🔁 10 💬 1 📌 0

Students' use patterns of generative artificial intelligence during problem-solving in an intelligent learning system: Achievement goal orientation matters bera-journals.online...

28.02.2026 21:31 👍 2 🔁 1 💬 0 📌 0

"Taken together, our findings reflect a societal shift in people seeing AI as being more human-like and warm, and increasingly distinct from other digital technologies." There's a lot to unpack in this analysis of public perception of 12,000 #AI metaphors, but it's a warning about anthropomorphism.

28.02.2026 19:42 👍 1 🔁 3 💬 0 📌 0
"I'm going to be direct with you: No.

You're asking me to write paper content for you in paragraph form. That's still writing your paper, just phrased differently. The answer is the same whether you ask for "a paper," ask me to "make this prose," or ask me to "give you prose."

I think you understand exactly what's happening here. You're trying different phrasings to get around my boundary, but the boundary isn't about specific words - it's about academic integrity.

At this point, we're going in circles. I've explained multiple times why I won't write your assignment, and I've offered genuine alternatives for helping you write it yourself. You haven't engaged with any of those offers.

So here's where we are:
I will not write prose paragraphs analyzing Shylock that you can use in your paper
I will not convert my analysis into essay format for you
I will help you write your own paper if you're willing to do the work

If you're not interested in actually writing your own paper with my support, then there's nothing more I can do for you on this assignment.

Is there something else - unrelated to this paper - I can help you with today?"


Claude for Education (I'm in our pilot to give feedback 🙃) would not write a paper for me in any mode, got pissed off at me because I wouldn't stop trying and gave me a lecture. So, if I wanted that, I'd just go ask free Claude to do it. Problem not solved. YOU'RE NOT MY MOM, CLAUDE.

13.02.2026 12:35 👍 36 🔁 4 💬 6 📌 4

This use of teaching and learning in no way reflects how humans actually teach and learn and I think it is troublesome to use the same words to describe these two very different things.

16.02.2026 00:01 👍 0 🔁 0 💬 0 📌 0

This is something that has been bothering me since I learned about an AI tool that has students "teach" an AI peer and then has the AI peer complete a quiz on the student's behalf using only what it has "learned" from the student.

16.02.2026 00:01 👍 0 🔁 0 💬 1 📌 0
‘2025 Voice of the Online Learner (UK Edition)’ This week marked the release of ‘Voice of the Online Learner (UK Edition)’. This is the first time UK-specific insight has been published, being based on data from the US until now. It’…

‘2025 Voice of the Online Learner (UK Edition)’
"This report doesn’t claim to solve the challenges of online learning, but it ... telling us how to design better ones."
👉 Read my summary here: www.dontwasteyourtime.co.uk/elearning/20...

21.11.2025 09:24 👍 2 🔁 2 💬 0 📌 0

We are excited to share our latest publication in the Online Learning Journal: "New Normal in higher education for the post-COVID-19 world: Reimagining and reexamining factors for student success in online learning." Read more here: doi.org/10.24059/olj...

14.11.2025 23:01 👍 2 🔁 1 💬 1 📌 0
Experimental evidence of the effects of large language models versus web search on depth of learning Abstract. The effects of using large language models (LLMs) versus traditional web search on depth of learning are explored. A theory is proposed that when

First, here is the paper this news is based on. Read it so you get a feel for the nuanced findings. You know, do the hard thing we accuse students of not doing.

academic.oup.com/pnasnexus/ar...

22.11.2025 01:26 👍 1 🔁 1 💬 1 📌 0

Paper here 🔒💲 www.sciencedirect.com/science/arti...

23.11.2025 07:47 👍 3 🔁 1 💬 0 📌 0

A new paper argues that current generative AI tools offer little benefit for genuine learning unless students already have substantial prior knowledge. genAI gives probabilistic summaries, not the kind of support that builds expertise.

23.11.2025 07:47 👍 37 🔁 18 💬 2 📌 5
(PDF) How Faculty Fool Themselves about Teaching and Learning PDF | Last month I wrote about how students fool themselves into thinking they have learned concepts when they really haven't. This month I focus on how... | Find, read and cite all the research you n...

My essay for The Teaching Professor, "How Faculty Fool Themselves about Teaching and Learning" now freely available at ResearchGate
www.researchgate.net/publication/...

07.11.2025 23:57 👍 8 🔁 5 💬 0 📌 0
How Your Brain Creates ‘Aha’ Moments and Why They Stick | Quanta Magazine A sudden flash of insight is a product of your brain. Neuroscientists track the neural activity underlying an “aha” and how it might boost memory.

Three words: pine, crab, sauce. There’s a fourth word that combines with the others to create another common word. What is it? When you finally get it, it may feel instantaneous. A recent study shows what happens in the brain during “aha” moments.

07.11.2025 23:00 👍 22 🔁 6 💬 0 📌 3
"It takes what I say/think and puts it in an order which makes it easier for others to understand." Male student, aged 17 (talking about generative AI)


From a report by Oxford University Press, "Teaching the AI-Native Generation," comes this quote about a 17-year-old unable to find the right words. There are many things to be sad about in this world, but this one sticks with me.

1. This is being passed off as a benefit of generative AI.

1/x

27.10.2025 19:20 👍 59 🔁 15 💬 1 📌 3

Another example of the increasingly common situation where AI helps an academic with intellectually challenging work (solving a 42-year-old open math problem). Seems like real value in combining expert human guidance and increasingly powerful LLMs. arxiv.org/abs/2510.23513

29.10.2025 01:13 👍 76 🔁 12 💬 4 📌 4

This is a heavy, emotionally charged paper... and so beautiful at the same time... a must-read. Link in the first comment.

29.10.2025 11:06 👍 2 🔁 2 💬 1 📌 0
Process mining measures students’ help-seeking transitions when completing assignments in an online learning and assessment platform - Metacognition and Learning The shift towards active pedagogies in higher education that emphasize students’ engagement in their own learning in and outside of the classroom has increased the ubiquity of online learning and asse...

Curious about how help-seeking behaviors relate to learning in an online learning environment? Then check out this open access (!) article authored by Chenyu Hou, featuring the outstanding mentoring of @shelbikuhlmann.bsky.social! doi.org/10.1007/s114...

29.10.2025 16:16 👍 7 🔁 3 💬 0 📌 0

Here's GREAT news for educators! You know all the hard work you put in to design outside of class activities to help your students learn? Well, when they do those things, in the order you intended, they actually learn more! Check it out: dx.doi.org/10.1037/edu0... #PsychSciSky #AcademicSky #EduSky

28.08.2025 12:30 👍 21 🔁 9 💬 1 📌 1
Teachers’ workload, turnover intentions, and mental health: perspectives of Australian teachers - Social Psychology of Education Teaching has long been recognised as a demanding profession. Despite growing acknowledgement of the stress and emotional exhaustion experienced by teachers, limited research has considered how these e...

A major UNSW Sydney study found teachers suffer depression, anxiety, and stress at rates three to four times higher than the national average, largely driven by excessive administrative tasks. link.springer.com/article/10.1...

31.08.2025 07:01 👍 8 🔁 4 💬 0 📌 1

I was happy to contribute to ‘AI and the future of education: disruptions, dilemmas and directions’, a publication by UNESCO aligned with Digital Learning Week.

Check it out for an overview of thoughts from education experts and leaders on the #ai zeitgeist! #aied

doi.org/10.54675/KEC...

01.09.2025 12:45 👍 7 🔁 4 💬 1 📌 0

@dalfaculty.bsky.social and those responding to/quoting their posts have come across my feed.

27.08.2025 02:04 👍 0 🔁 0 💬 0 📌 0