HCI summer research opportunity 📣 My group has two openings for research assistants this summer, both in scientific tools for thought. Applicants are welcome at any level. Please help me share the news! andrewhead.info/positions/20...
Writing an HCI paper about an AI-powered system for a venue like UIST 2026 or CHI 2027? Wondering what reviewers expect you to report, and how to approach paper framing and writing? Check out our reporting guidelines: medium.com/p/7c3ae86341...
#CHI2026 program (the draft) is out: programs.sigchi.org/chi/2026/pro... a monster-sized CHI that will definitely be fun and intellectually stimulating. Huge kudos to Pablo Cesar and Heloisa Candello, as well as our assistants, for making this possible in such a short time! Check it out!
To date, HCI researchers have had no support in signaling their paper's relevance to AI, esp. when that connection is tenuous at best. We introduce a systematic framework to ensure LLMs are mentioned at every stage of paper reporting—from framing, to evaluation, to implications.
Those are great topics. I am reacting more to papers that aren't about AI directly but where authors shoehorn phrases like "LLM-based" into their titles and framing. If a topic is not directly about LLMs, then it shouldn't be pressured into answering "but what is the relevance for AI?"
Great idea. We might consider a similar thing in HCI. I'm getting really sick of AI and how seemingly every project must justify its existence by an overarching "As we all know, LLMs are transforming X..."
Did you have a qualitative paper rejected from #chi2026? As you're revising for your next submission, check out this crowdsourced document containing common critiques of qualitative research and ideas for responding -- and add any new critiques you've encountered:
docs.google.com/document/d/1...
I think I'd rather have an LLM review my paper.
companion robots are like zoom chats with the grandkids … it seems like progress, thank goodness it's there! yet, it just makes lack of community and real human connection that much easier to tolerate
www.nytimes.com/2026/02/12/u...
An OpenClaw agent makes a pull request to matplotlib. Maintainer rejects the PR. The OpenClaw agent authors a blog post accusing the maintainer of discrimination and gatekeeping. Maintainer responds: theshamblog.com/an-ai-agent-...
Finally, a huge shout-out to my amazing co-authors, Karla Felix Navarro and Eugene Syriani at Université de Montréal, whose persistent and tireless effort and many conversations on paper framing were invaluable to this contribution. 😇
Looking forward to discussing this with the community at CHI 2026 #CHI2026. This was a very difficult paper to get past peer review, as it underwent heightened scrutiny. We thank our volunteering editors, participants, and reviewers for their work and careful consideration.
Our paper also identifies common pitfalls for reviewers to consider, and offers suggestions for the HCI community to better establish standards around this paper type and combat the variability of reviewer expectations.
From our findings and additional feedback from 6 expert HCI researchers, we propose eight reporting guidelines for authors reporting LLM-integrated systems in HCI, a more direct version of which is on our companion site: ianarawjo.github.io/Guidelines-f...
The term "LLM wrapper" is used to reject papers, even appearing in collected reviews verbatim, yet our participants held wildly inconsistent definitions, leading us to wonder how stable this judgment really is.
We encountered evidence of ongoing clashes between the norms and values of HCI and ML/NLP communities on what constitutes a strong contribution and appropriate level of technical rigor, as ML/NLP scholars publish papers in HCI venues or review for them.
Authors are backing away from claiming technical contributions and opting for alternative strategies to frame their paper's contribution.
Authors also perceive unusually harsh and inconsistent reviewing standards for LLM papers, beyond baseline fluctuations in peer review.
Technical evaluations (beyond user studies) are now largely expected when LLMs are central to claims—a shift from pre-LLM HCI norms for systems without ML components.
Overall, we found that the uncertainty of LLMs has eroded traditional trust-building between authors and reviewers, with reviewers demanding more extensive details to validate systems and overcome skepticism stemming from unpredictable LLM behavior and AI hype.
To tackle this question, we interviewed 18 HCI scholars about their experiences authoring, reporting, and reviewing LLM-integrated systems. 📄 Pre-print here: arxiv.org/abs/2602.05128
As systems that integrate LLMs increasingly become the norm at HCI venues, how do HCI scholars approach papers that report them, whether as authors or reviewers? At UIST 2025, for instance, over 1 in 3 papers report LLM-integrated systems:
🎉 Thrilled to share that our paper "Reporting and Reviewing LLM-Integrated Systems in HCI: Challenges and Considerations" has been conditionally accepted to #CHI2026!
A thread 🧵
Thanks, Andrew!
Thanks, Marcelo!!
Notations/languages/etc (natural/constructed/formal/etc) sit at the intersection of a lot of my interests from information theory & statistics to Borgesian literature & group/individual cognitive ergonomics. Self-recommending. by @zamfi.bsky.social @damienhci.bsky.social @ianarawjo.bsky.social et al
Generative AI and Computing Education for Novices

Generative AI (GenAI) has sowed havoc in computing education, especially introductory computing. As AI models grow in sophistication, it is increasingly difficult to find problems that cannot be comprehensively solved by AI. Even "defeat devices" like obscure programming languages have limited viability, due to technical reasons like one-shot learning, temporal reasons such as training, and basic educational considerations like what we want students to learn. I claim that the recent rise of agentic coding, in particular, is a major technological turning point that forces a deep re-examination of computing curricula.

However, I come with a hopeful message rather than a pessimistic one. I believe there is a profound opportunity here for computing educators; the challenge is to not ignore these trends but rather understand how they can enhance us and what we have to offer. I will try to lay out all these issues. This is not a research talk, where I describe a set of crisp technical results that have been deeply vetted with proofs and experiments. Rather, in the spirit of the topic, I am trying to express a vibe. But I am not just idly speculating; I will base my comments on ongoing observations and experiments. I hope to spur lots of discussion, but also provide a sense of optimism. The one guarantee I do provide is that the talk content and slides were written entirely by a human.
New talk abstract dropping. I just hope I can write a talk to live up to it by *checks watch* *gulp* Monday.
Over the past week, I've seen a lot of CHI authors announcing their papers on social media as "accepted." I get that it's exciting, but your work is "conditionally" accepted—saying it's flat-out "accepted" is counting chickens before they hatch. Wait a bit for the official sign-off, folks!