
Kat Geddes

@katgeddes

Postdoc @NYU Law and Cornell Tech. Writing about generative AI, copyright, and tech law >> katrinageddes.com

930 Followers · 259 Following · 14 Posts · Joined 19.11.2024

Latest posts by Kat Geddes @katgeddes

The A.I. Race Is Splitting the World Into Haves and Have-Nots
As countries race to power artificial intelligence, a yawning gap is opening around the world.

AI has created a new digital divide, fracturing the world between nations with the computing power for building cutting-edge AI systems and those without. www.nytimes.com/interactive/...

25.06.2025 19:25 👍 7 🔁 5 💬 1 📌 2
Reddit sues AI company Anthropic for allegedly 'scraping' user comments to train chatbot Claude
Social media platform Reddit sued the artificial intelligence company Anthropic on Wednesday, alleging that it is illegally "scraping" the comments of millions of Reddit users to train its chatbot Claude...

apnews.com/article/redd...

05.06.2025 15:06 👍 2 🔁 0 💬 0 📌 0
DeepSeek may have used Google's Gemini to train its latest model | TechCrunch
Chinese AI lab DeepSeek released an updated version of its R1 reasoning model that performs well on a number of math and coding benchmarks. Some AI researchers speculate that at least a portion came f...

techcrunch.com/2025/06/03/d...

04.06.2025 16:54 👍 1 🔁 0 💬 0 📌 0
Will the Humanities Survive Artificial Intelligence?
Maybe not as we've known them. But, in the ruins of the old curriculum, something vital is stirring.

Just finished reading this astonishingly thoughtful and beautifully written reflection on what is left for the humanities after AI. Highly recommend: www.newyorker.com/culture/the-...

14.05.2025 15:20 👍 4 🔁 1 💬 0 📌 0
Welcome to the semantic apocalypse
Studio Ghibli style and the draining of meaning

This is a great summary of the diluting effect of ChatGPT's new image generation capabilities on the distinctive aesthetic of the beloved Japanese animation house Studio Ghibli. #AI #StudioGhibli #ChatGPT www.theintrinsicperspective.com/p/welcome-to...

01.04.2025 16:05 👍 3 🔁 1 💬 0 📌 0
Generative AI Licensing Agreement Tracker - Ithaka S+R
In recent months, several publishers have announced that they are licensing their scholarly content for use as training data for LLMs. These deals...

this is a really useful resource for tracking licensing of scholarly content for training AI models: sr.ithaka.org/our-work/gen...

14.02.2025 16:55 👍 4 🔁 0 💬 0 📌 0

The Court says that its opinion is a one-off because of TikTok's scale, data-collection practices, and susceptibility to foreign manipulation, but all of the other major platforms are very big, collect the same data, and are susceptible to strong-arming by foreign authoritarians. /1

17.01.2025 16:05 👍 51 🔁 16 💬 2 📌 4
The Supreme Court Rules Against TikTok—Now What?
TikTok is teetering on the brink of a nationwide shutdown, and Trump has few good options to intervene.

Now that the Supreme Court has upheld the TikTok divestment-or-ban law, what happens next? I have an explainer for @lawfare.bsky.social laying out the next steps. www.lawfaremedia.org/article/the-...

17.01.2025 15:11 👍 31 🔁 14 💬 3 📌 2
The Internet Archive is in danger
More than 900 billion webpages are preserved on The Wayback Machine, a history of humanity online. Now, copyright lawsuits could wipe it out.

Here's the On Point episode with me and Brewster Kahle talking about the copyright lawsuits against the Internet Archive.

www.wbur.org/onpoint/2025...

08.01.2025 18:10 👍 24 🔁 9 💬 2 📌 0

Just filed: brief of @knightcolumbia.org @freepress.bsky.social @penamerica.bsky.social in support of TikTok and TikTok's users. www.supremecourt.gov/DocketPDF/24...

27.12.2024 17:49 👍 29 🔁 13 💬 0 📌 1
Lawfare Daily: Old Laws, New Tech: How Traditional Legal Doctrines Tackle AI
Listen to a conference panel on AI liability.

Today's Lawfare Daily is from a conference co-hosted by Lawfare and the Georgetown Institute for Law and Technology, where Chinmayi Sharma moderated a panel on "Old Laws, New Tech: How Traditional Legal Doctrines Tackle AI" with Catherine Sharkey, Bryan Choi, and @katgeddes.bsky.social.

26.12.2024 14:22 👍 46 🔁 11 💬 0 📌 1

People sometimes make fun of science that sounds stupid and random.

Meanwhile, a study of lizard saliva turned into a peptide medication, which was turned into a diabetes medication, which was turned into a GLP-1 weight loss drug, which just became the first therapy ever approved for … sleep apnea

21.12.2024 00:41 👍 17270 🔁 5104 💬 373 📌 351
Artificial intelligence (AI) model creators commonly attach restrictive terms of use to both their models and their outputs. These terms typically prohibit activities ranging from creating competing AI models to spreading disinformation. Often taken at face value, these terms are positioned by companies as key enforceable tools for preventing misuse, particularly in policy dialogs. But are these terms truly meaningful? There are myriad examples where these broad terms are regularly and repeatedly violated. Yet except for some account suspensions on platforms, no model creator has actually tried to enforce these terms with monetary penalties or injunctive relief. This is likely for good reason: we think that the legal enforceability of these licenses is questionable.

This Article systematically assesses the enforceability of AI model terms of use and offers three contributions. First, we pinpoint a key problem: the artifacts that they protect, namely model weights and model outputs, are largely not copyrightable, making it unclear whether there is even anything to be licensed. Second, we examine the problems this creates for other enforcement. Recent doctrinal trends in copyright preemption may further undermine state-law claims, while other legal frameworks like the DMCA and CFAA offer limited recourse. Anti-competitive provisions likely fare even worse than responsible use provisions. Third, we provide recommendations to policymakers. There are compelling reasons for many provisions to be unenforceable: they chill good faith research, constrain competition, and create quasi-copyright ownership where none should exist. There are, of course, downsides: model creators have fewer tools to prevent harmful misuse. But we think the better approach is for statutory provisions, not private fiat, to distinguish between good and bad uses of AI, restricting the latter.


Check out my new piece on AI terms of use restrictions w/ Mark Lemley ( @marklemley.bsky.social ).

There's been a recent stir about terms of use restrictions on AI outputs & models. We dig into the legal analysis, questioning their enforceability.

Link: papers.ssrn.com/sol3/papers....

10.12.2024 00:39 👍 39 🔁 15 💬 0 📌 2

folks, we’re bringing back nuclear power to achieve this level of quality

11.12.2024 17:36 👍 577 🔁 133 💬 42 📌 5

AI is an inherently authoritarian way of making decisions.

05.12.2024 16:14 👍 4 🔁 2 💬 0 📌 0

In @techpolicypress.bsky.social, @alicetiara.bsky.social and @kevindeliban.bsky.social explain why using AI in the name of "government efficiency" is likely to create more problems than it solves — with vulnerable communities paying the steepest price. www.techpolicy.press/ai-cant-solv...

10.12.2024 16:15 👍 8 🔁 6 💬 0 📌 0
Challah shaped like an alligator


I just can't believe you think this social media is boring when it features The Challigator

06.12.2024 02:00 👍 24 🔁 6 💬 2 📌 0
Why 'open' AI systems are actually closed, and why this matters - Nature
A review of the literature on artificial intelligence systems to examine openness reveals that open AI systems are actually closed, as they are highly dependent on the resources of a few large corpora...

📢NEW: 'Open' AI systems aren't open. The vague term, combined w frothy AI hype is (mis)shaping policy & practice, assuming 'open source' AI democratizes access & addresses power concentration. It doesn't.

@smw.bsky.social, @davidthewid.bsky.social & I correct the record👇
nature.com/articles/s41...

02.12.2024 14:23 👍 958 🔁 335 💬 25 📌 36
The first is which platforms will be subject to the law. Albanese has said that TikTok, Instagram, Snapchat and X will be covered. But it's not clear when another platform might meet the criteria and be required to jettison its under-16 user base.

Miraculously (for Google), YouTube was granted an exemption to the bill as a service that is "health and education related." And while YouTube surely does more than its share of educating, its primary use is for entertainment. And YouTube's embrace of short-form video through Shorts effectively makes it a one-to-one competitor with TikTok, Snap, and Instagram. In one stroke of his pen, Albanese may have just granted YouTube permanent dominance in short-form video to a generation of Australians.


Australia's ban on social media for teens under 16 is, among other things, an enormous gift to YouTube www.platformer.news/australia-so...

03.12.2024 01:40 👍 503 🔁 76 💬 17 📌 9

What will enter the #publicdomain in 2025? Each day through December we'll open a window in our advent-style calendar to reveal our highlights! publicdomainreview.org/features/ent...

(+ for the impatient/curious we've links to lists of new entrants.)

01.12.2024 17:11 👍 98 🔁 70 💬 2 📌 7
A table indicating the affiliation each of the publishers in our dataset has with OpenAI, whether the publisher's content was accessible to OpenAI's search crawler through its "robots.txt" file, and the accuracy of ChatGPT in referencing their content.


Ultimately, what we found is that ChatGPT search offers publishers the illusion of control. No publisher – regardless of degree of affiliation with OpenAI – was spared from inaccurate representations of its content. (7/9)

27.11.2024 19:36 👍 91 🔁 39 💬 4 📌 7
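For readers unfamiliar with the "robots.txt" mechanism in the table above: a site opts crawlers in or out by listing their user-agent strings and the paths they may fetch. A minimal sketch, assuming the crawler names OpenAI has published (OAI-SearchBot for search, GPTBot for training data); the layout is illustrative, and compliance is voluntary on the crawler's side:

```text
# robots.txt (illustrative sketch, not any specific publisher's file)

# Disallow OpenAI's search crawler site-wide
User-agent: OAI-SearchBot
Disallow: /

# Disallow OpenAI's training-data crawler as well
User-agent: GPTBot
Disallow: /

# All other crawlers may fetch everything (empty Disallow = no restriction)
User-agent: *
Disallow:
```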

Interesting that licensing deals don't seem to make a difference

29.11.2024 19:38 👍 3 🔁 1 💬 0 📌 0
How Art Became Posthuman: Copyright, AI, and Synthetic Media
In response to the threats posed by new copy-reliant technologies, copyright law often expands in scope. Frequently this results in overzealous rights enforcement...

yes the latest version is here! papers.ssrn.com/sol3/papers....

28.11.2024 18:18 👍 1 🔁 0 💬 0 📌 0

One enduring complication with all this is that scraping happens all the time for reasons that people *don't* find inherently objectionable, and in fact support—the Wayback Machine, all kinds of public health and extremism research, etc. The mistake was assuming that goodwill transfers.

27.11.2024 14:05 👍 291 🔁 41 💬 7 📌 4

A wonderful paper, very worth reading!!!

27.11.2024 16:00 👍 2 🔁 0 💬 0 📌 0
OpenAI Scored a Legal Win Over Progressive Publishers—but the Fight's Not Finished
A judge tossed out a case against OpenAI brought by Alternet and Raw Story, in what could be a significant ruling in the larger battle between AI companies and publishers.

The unauthorized removal of copyright management information (CMI) from copyrighted works as part of AI training (although a statutory violation) does not produce concrete harm sufficient for standing in the absence of dissemination: www.wired.com/story/opena-...

26.11.2024 19:48 👍 2 🔁 0 💬 1 📌 0

Parents: lying is bad

Also parents: if the ticket guy asks, you're 11

21.11.2024 16:11 👍 556 🔁 85 💬 10 📌 4
The AI-Copyright Trap
As AI tools proliferate, policy makers are increasingly being called upon to protect creators and the cultural industries from the extractive, exploitative, and...

This is one of the most succinct summaries of why copyright cannot solve all of our ethical concerns around AI labor displacement that I have read to date; highly recommend reading: papers.ssrn.com/sol3/papers.... @caryscraig.bsky.social

21.11.2024 21:26 👍 8 🔁 1 💬 0 📌 0
How Art Became Posthuman: Copyright, AI, and Synthetic Media
In response to the threats posed by new copy-reliant technologies, copyright law often expands in scope. Frequently this results in overzealous rights enforcement...

tldr: my job talk paper (see below) argues that if the "public benefit" of generative AI (see 4th fair use factor) is that it democratizes cultural production, then AI vendors should stop preventing users from engaging in fair uses of copyrighted works >> papers.ssrn.com/sol3/papers....

20.11.2024 20:54 👍 13 🔁 1 💬 0 📌 0