Hi Katherine! The link gives me 404 I'm afraid.
One motivation for this paper:
Treating LLM outputs as static artifacts hides the dynamics that produce them: commitments, stabilizations, and path dependencies during generation.
Proto-interpretation is an attempt to name and study that middle ground.
We often describe LLMs as "next-token predictors."
That description is correct, and deeply insufficient.
In a new AI Letters paper, I argue for proto-interpretation: understanding inference as a temporally structured interpretive process.
dl.acm.org/doi/10.1145/...
Exactly, outputs are not invertible. Just the hidden last-layer activations before the linear transformation to token logits. I don't read this as saying that the weights carry information (such as proprietary data), but rather that they form a structure making input-"output" 1-to-1.
Would love to hear your take on this. I think the mathematical analysis is interesting, but I fail to see what interesting work it opens up. Not sure when you'd have access to internal activations but not the input. It's a transformer after all... It transforms.
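The point in this thread — that the map from last-layer activations to logits is just one linear transformation, and that sampling a token then discards most of that information — can be sketched with toy dimensions (the sizes and matrices here are made up for illustration, not any real model's):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, vocab = 8, 16            # toy sizes, far smaller than any real LLM

W = rng.normal(size=(vocab, d_model))  # stand-in for the unembedding matrix
h = rng.normal(size=d_model)           # stand-in for a last-layer hidden state

# The "output" before sampling: a full logit vector, one linear map away from h.
logits = W @ h

# Greedy decoding collapses the logit vector to a single token id,
# so many distinct hidden states map to the same visible token.
token = int(np.argmax(logits))
```

With `vocab > d_model` and a generic `W`, the map `h -> logits` is injective, which is the "1-to-1 structure" sense in which activations are recoverable; it is the argmax/sampling step, not the linear layer, that throws information away.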
LLM, Aletheia, and Poiesis. Offering yet another way to view LLMs that rejects the instrumentalist view.
rost.me/2025/07/18/l...
New post out about LLMs as relational. How language is a medium for thought, not a carrier of information.
rost.me/2025/07/15/L...
We're hosting a workshop at Aarhus 2025:
The End of Programming (as we know it)
Rethinking coding in the age of AI: beyond productivity, toward new practices, tools & ideas.
Aug 19 (hybrid)
Submit by June 27
mi.sh.se/~shmnjn07/th...
Good question. To me, e.g. diffusion models output representations. LLMs on the other hand produce relational responses. That said, image generation can, through practice, become part of an interpretive, iterative process.
If it were just about generation, it could output random strings. But we care about what it outputs. We engage and respond (and so does it). That co-shaped process is interpretive, not merely generative.
Sure, the model doesn't carry memory or intent. But that's not the point. Even if it were generating an endless transcript, the moment a human reads it, the meaning takes shape in relation to them. Even a single prompt-response is a relational act of sense-making.
Yes, the model's inference is stateless. But the turn-taking happens in the interaction, and it's within that interaction that sense-making emerges. That's where the relational framing comes in. Interpretation isn't something internal but something that's enacted between human and model.
"Generative" puts the emphasis on output, whereas "interpretive" puts the emphasis on input.
Short post by @rost.me arguing that calling LLMs "generative AI" is misleading nowadays, since generating plausible text is really only one of many things they do (he proposes the term "interpretive AI"). rost.me/2025/05/27/i...
QwQ on stacking three eggs. It's showing a lot of humour in its reasoning and is certainly taking its time to reach a final solution. gist.github.com/rrostt/be0fa...
From narratives of AI to its ethical landscape - @coeckelbergh.bsky.social takes us through a philosophical journey on the #HCAI Podcast. What does responsible #AI development really mean? Tune in at hcai.se/podcast/24-1...
#ethicalAI #humancenteredAI #responsibleAI #aidemocracy
Aren't we all the dog?
Sad dog in space
Throwback to our episode on Human-centered AI-mediated Communication with @informor.bsky.social. Listen to the full episode at hcai.se/podcast/24-0... #humancenteredai #humancentered #hcai #communication
Some skills, like doing my taxes or filling out travel reimbursement requests, I will happily leave for cognitive atrophy. Others, not so much. In this blog post, I ruminate on how HCAI tools should teach people to fish rather than feed them.
niklaselmqvist.medium.com/teach-a-man-...
We had a great discussion with @nelmqvist.bsky.social about how AI, HCI, and data visualization can work together to develop tools that support us. Check out the episode below.
What role does #AI play in amplifying and augmenting human abilities? @nelmqvist.bsky.social joins the #HCAI Podcast to discuss the intersection of data visualization, #HCI, and AI tools that empower, not replace, us.
Listen now: hcai.se/podcast/24-1...
#humancenteredai #dataviz
What have you tried using? I've never had as much fun coding as when using LLMs to assist.
Pilot hungover I'm sure... :)
Did they give a reason for cancelling?
New episode of the HCAI podcast! We had the privilege to speak to @j2bryson.bsky.social on governance, policymaking, regional control, and philosophy. You'll find links to the episode below #HCAI #AIAct #DSA #AIGovernance #AIPolicy #GDPR #CCPA #humancenteredartificialintelligence #humancenteredai
A few weeks ago, we had a great discussion with Natali Helberger about human-centricity, transparency, policy, and use of AI. The episode is out now. hcai.se/podcast/24-0...
#humancenteredartificialintelligence #hcai #transparency #transparentai #aigovernance
Last month, we had a great chat with #CornellTech's Mor Naaman about Human-centered AI-mediated communication. Listen to our conversation in the latest episode of the HCAI podcast hcai.se/podcast/24-0... #humancenteredartificialintelligence #aimediatedcommunication #hcai
Having this podcast has been the best excuse ever to get to speak to people doing some of the most interesting work in #AI (even though our focus remains #HCAI). In the latest episode, we spoke to Ricardo Baeza-Yates - a data scientist for decades (before data science was a "thing").