
Teddy Roland

@teddyroland

Postdoc @ School of Information Sciences, UIUC. American Literature, Media Theory, Data Science. I publish under "Edwin Roland" but don't tell anyone.

364 Followers
159 Following
540 Posts
Joined 11.11.2024

Latest posts by Teddy Roland @teddyroland

It would be pretty surprising if autoregressive models performed otherwise.

That said, I share your concern and I am in the process of re-running the test with the first gen GPT model (the contemporary of BERT).

I will report results… on a shorter publication timeline than this article!

11.03.2026 19:18 👍 1 🔁 0 💬 1 📌 0

Yep. That's right that the experiments are based on BERT, i.e., an encoder rather than a decoder unit.

The short answer is that our findings are likely to agree directionally with similar tests on a decoder unit, like GPT. Indeed, they agree with non-LLM computational research on fiction.

11.03.2026 19:18 👍 0 🔁 0 💬 1 📌 0
Preview
Generative AI & Fictionality: How Novels Power Large Language Models Generative models, like the one in ChatGPT, are powered by their training data. The models are simply next-word predictors, based on patterns learned from vast amounts of pre-existing text. Since the ...

New paper w/ @teddyroland.bsky.social on "How fiction powers generative AI systems." We designed a computational experiment to test the impact of the vast amount of fiction in LLM training data on how LLMs communicate, w/ implications for both AI design + literary theory. arxiv.org/abs/2603.01220

11.03.2026 16:58 👍 15 🔁 8 💬 2 📌 1

Yes! Walking to school is one of the great pleasures of our time in Champaign

11.03.2026 12:48 👍 1 🔁 0 💬 1 📌 0

Starting any new job -- never mind founding a startup -- after being in academia will have a steep learning curve. It will require new skills and compel connections with other people that open new scenarios, many unforeseeable, at lightning speed.

10.03.2026 16:34 👍 0 🔁 0 💬 0 📌 0

Book revisions -- even if they don't lead to a TT job -- eventually give exposure to the publishing industry (with book pitches, editorial meetings, marketing, etc). That exposure has the potential to branch into different scenarios in cultural production and business, but, like, years down the line.

10.03.2026 16:34 👍 0 🔁 0 💬 1 📌 0

Yes, let's!

I think you're exactly right with the "multiple scenarios instead of just one." I'll add a level to it: it's a question of *when* different scenarios get activated.

10.03.2026 16:34 👍 0 🔁 0 💬 1 📌 0

Which is a riskier way to spend my time: revising book chapters in anticipation of a TT job that may never materialize, or writing a business plan for a startup that may never get off the ground?

10.03.2026 15:42 👍 9 🔁 0 💬 2 📌 0
Preview
A search index of expert podcasts, videos and essays. Explore our curated database of essays, podcasts, and videos authored by the world's leading experts and designed for the public.

Extraordinary new public humanities resource that curates in near real-time over 50,000 items from sources/channels of humanities-related scholars who create public-facing essays, podcasts, videos, blogs, etc.: publicscholarship.org

09.03.2026 21:29 👍 45 🔁 26 💬 1 📌 2

Lots more findings and discussion in the pre-print.

And please, share your feedback with us! The article has been accepted for publication at New Literary History, and we consider the pre-print to be part of our peer review process. Thx ✌️

arxiv.org/abs/2603.01220

09.03.2026 16:41 👍 0 🔁 0 💬 0 📌 0

To make sense of these findings, we interpret them by way of recent literary theory on the nature of fictionality. This helps us (1) answer why characters should feature prominently in what LLMs learn from novels and (2) articulate a theory of representation for the AI era.

09.03.2026 16:41 👍 0 🔁 0 💬 1 📌 0

One major finding is that an early model -- BERT -- learns about gender as a dialogical construction between characters.

More surprising is that the relations are characterized by heightened affect: gender differences are represented through anger, confusion, and erotics in high-stakes situations.

09.03.2026 16:41 👍 0 🔁 0 💬 1 📌 0

We know that current AI models train on large swaths of novels, thanks to reporting in The Atlantic.

Richard and I trace fiction's role back to the first-generation LLMs, and we empirically test: what do AI models actually learn from fiction, as opposed to non-fiction sources like Wikipedia?

09.03.2026 16:41 👍 0 🔁 0 💬 1 📌 0

In the paper, we argue that training data is a new and consequential frontier for cultural representation.

It is a truism that LLMs know about the world only what they learn from their training data, but the insight is rarely tested and, as a result, not well theorized. We aim to rectify that.

09.03.2026 16:41 👍 0 🔁 0 💬 1 📌 0

Excited to share the pre-print for a forthcoming article in NLH with @richardjeanso.bsky.social 🎉

Generative AI & Fictionality: How Novels Power Large Language Models

arxiv.org/abs/2603.01220

09.03.2026 16:41 👍 10 🔁 2 💬 1 📌 0
Original post on scholar.social

A DH tutorial web app

tl;dr: I build little wonky things to help with my own teaching and research. If all goes well, they turn out to be useful/interesting for other people too. This post is in the spirit of that sharing. A worsening problem I am having is an overall decline in basic digital […]

04.03.2026 19:05 👍 3 🔁 2 💬 1 📌 0

2016: entertaining myself by watching -LL/token go down in MALLET sessions

2026: entertaining myself by watching Claude Code google keywords from my prompt

05.03.2026 02:22 👍 1 🔁 0 💬 0 📌 0

Fired up the old blog for this post. Glad to see DH Now is still kicking!

04.03.2026 17:54 👍 0 🔁 0 💬 0 📌 0

Wowza. Well, I stand by my initial assessment: that post is a litany of broken things in academia (with varying degrees of self-awareness)

03.03.2026 15:51 👍 2 🔁 0 💬 0 📌 0

A pop discourse that represents the imaginary relationship of college students to their real conditions of trudgery!

03.03.2026 15:32 👍 1 🔁 0 💬 0 📌 0

As a tea drinker, the energy I use to boil water each month probably matches that of driving a car ten miles. Wild!

03.03.2026 15:12 👍 1 🔁 0 💬 0 📌 0
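The kettle claim can be sanity-checked with back-of-the-envelope arithmetic. A minimal sketch: the cup size, cups per day, kettle efficiency, and EV consumption figures below are all assumptions for illustration, not values from the post.

```python
# Rough check: monthly tea-kettle energy vs. miles of EV driving.
# All constants below are assumed, not sourced from the post.

SPECIFIC_HEAT_WATER = 4186   # J/(kg*K), water
CUP_LITERS = 0.35            # assumed mug size (kg of water, since 1 L ~ 1 kg)
CUPS_PER_DAY = 2             # assumed habit
DELTA_T = 80                 # heating 20 C tap water to 100 C
KETTLE_EFFICIENCY = 0.8      # assumed fraction of input energy reaching the water
DAYS_PER_MONTH = 30
EV_KWH_PER_MILE = 0.3        # assumed EV consumption

# Electrical energy drawn per cup, accounting for kettle losses
joules_per_cup = CUP_LITERS * SPECIFIC_HEAT_WATER * DELTA_T / KETTLE_EFFICIENCY

# Monthly total, converted from joules to kWh (1 kWh = 3.6e6 J)
monthly_kwh = joules_per_cup * CUPS_PER_DAY * DAYS_PER_MONTH / 3.6e6

equivalent_miles = monthly_kwh / EV_KWH_PER_MILE
print(f"{monthly_kwh:.1f} kWh/month ~ {equivalent_miles:.1f} EV miles")
```

Under these assumptions the monthly total comes out near 2.4 kWh, or roughly 8 EV miles, so the post's ten-mile figure is in the right ballpark (a gasoline car would make the tea habit look even smaller).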

I get the sense you are looking for disruptions that are real "creative destruction" -- interventions that reorganize the playing field. Any examples you think are (potentially) transformative?

Maybe NotebookLM has gotten closest so far, as a different way to engage library holdings/course readings.

03.03.2026 15:08 👍 3 🔁 0 💬 1 📌 0

The flip side is that insights about "the way things work" tell you what is broken: e.g. Einstein directed attention to the terrible incentives students have in the ed system.

03.03.2026 15:08 👍 6 🔁 0 💬 1 📌 0
Post image
03.03.2026 13:49 👍 14 🔁 3 💬 0 📌 0

He "digesteth harde yronne" (takes a D3+iron supplement)

01.03.2026 13:49 👍 0 🔁 0 💬 0 📌 0
Preview
'A joyful day': final piece of Sagrada Familia's central tower put in place. Completion of glass cross brings Antoni Gaudí's church to maximum final height of 172.5m, 144 years after work began.

Hold the phone! La Sagrada Familia finished being built last week

www.theguardian.com/world/2026/f...

28.02.2026 13:37 👍 0 🔁 0 💬 0 📌 0
Preview
ChatGPT cuts in action

DOGE allegedly used ChatGPT to identify 1,400 NEH grants it said were DEI. Grants were terminated April 2025, according to a court filing. E.g.

Film: 1873 Colfax massacre
Film: first female pilots flying for U.S. military in WWII
Film: "Untold Story of Jewish Women Slave Labor in the Holocaust"

13.02.2026 23:49 👍 1401 🔁 622 💬 35 📌 64

But the elision of code work, when we offload it to AI, puts pressure on the code's ultimate purpose, its end. At some level, that's all that prompting consists of: naming goals for AI agents to pursue.

The goals are weird overdetermined mixtures of ethical, aesthetic, and analytic dispositions

26.02.2026 14:19 👍 0 🔁 1 💬 1 📌 0

I don't say it quite this way in the post, but my hunch is that we, as users, are only partly conscious of our mixture of goals for any project, and that reading chat transcripts of vibe coding will reveal what it actually consists of.

The transcripts themselves may be AI's real value to humanists!

26.02.2026 14:19 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
