AI has created a new digital divide, fracturing the world between nations with the computing power for building cutting-edge AI systems and those without. www.nytimes.com/interactive/...
Just finished reading this astonishingly thoughtful and beautifully written reflection on what is left for the humanities after AI. Highly recommend: www.newyorker.com/culture/the-...
this is a great summary of the diluting effect of chatgpt's new image generation capabilities on the distinctive aesthetic of beloved Japanese animation house, Studio Ghibli #AI #StudioGhibli #ChatGPT www.theintrinsicperspective.com/p/welcome-to...
this is a really useful resource for tracking licensing of scholarly content for training AI models: sr.ithaka.org/our-work/gen...
The Court says that its opinion is a one-off because of TikTok's scale, data-collection practices, and susceptibility to foreign manipulation, but all of the other major platforms are very big, collect the same data, and are susceptible to strong-arming by foreign authoritarians. /1
Now that the Supreme Court has upheld the TikTok divestment-or-ban law, what happens next? I have an explainer for @lawfare.bsky.social laying out the next steps. www.lawfaremedia.org/article/the-...
Here's the On Point episode with me and Brewster Kahle talking about the copyright lawsuits against the Internet Archive.
www.wbur.org/onpoint/2025...
Just filed: brief of @knightcolumbia.org @freepress.bsky.social @penamerica.bsky.social in support of TikTok and TikTok's users. www.supremecourt.gov/DocketPDF/24...
Today's Lawfare Daily is from a conference co-hosted by Lawfare and the Georgetown Institute for Law and Technology, where Chinmayi Sharma moderated a panel on "Old Laws, New Tech: How Traditional Legal Doctrines Tackle AI" with Catherine Sharkey, Bryan Choi, and @katgeddes.bsky.social.
People sometimes make fun of science that sounds stupid and random.
Meanwhile, a study of lizard saliva turned into a peptide medication, which was turned into a diabetes medication, which was turned into a GLP-1 weight loss drug, which just became the first therapy ever approved for … sleep apnea
Artificial intelligence (AI) model creators commonly attach restrictive terms of use to both their models and their outputs. These terms typically prohibit activities ranging from creating competing AI models to spreading disinformation. Often taken at face value, these terms are positioned by companies as key enforceable tools for preventing misuse, particularly in policy dialogues. But are these terms truly meaningful? There are myriad examples where these broad terms are regularly and repeatedly violated. Yet except for some account suspensions on platforms, no model creator has actually tried to enforce these terms with monetary penalties or injunctive relief. This is likely for good reason: we think that the legal enforceability of these licenses is questionable. This Article systematically assesses the enforceability of AI model terms of use and offers three contributions. First, we pinpoint a key problem: the artifacts that they protect, namely model weights and model outputs, are largely not copyrightable, making it unclear whether there is even anything to be licensed. Second, we examine the problems this creates for other enforcement. Recent doctrinal trends in copyright preemption may further undermine state-law claims, while other legal frameworks like the DMCA and CFAA offer limited recourse. Anti-competitive provisions likely fare even worse than responsible use provisions. Third, we provide recommendations to policymakers. There are compelling reasons for many provisions to be unenforceable: they chill good-faith research, constrain competition, and create quasi-copyright ownership where none should exist. There are, of course, downsides: model creators have fewer tools to prevent harmful misuse. But we think the better approach is for statutory provisions, not private fiat, to distinguish between good and bad uses of AI, restricting the latter.
Check out my new piece on AI terms of use restrictions w/ Mark Lemley ( @marklemley.bsky.social ).
There's been a recent stir about terms of use restrictions on AI outputs & models. We dig into the legal analysis, questioning their enforceability.
Link: papers.ssrn.com/sol3/papers....
folks, we're bringing back nuclear power to achieve this level of quality
AI is an inherently authoritarian way of making decisions.
In @techpolicypress.bsky.social, @alicetiara.bsky.social and @kevindeliban.bsky.social explain why using AI in the name of "government efficiency" is likely to create more problems than it solves, with vulnerable communities paying the steepest price. www.techpolicy.press/ai-cant-solv...
Challah shaped like an alligator
I just can't believe you think this social media is boring when it features The Challigator
NEW: 'Open' AI systems aren't open. The vague term, combined w/ frothy AI hype, is (mis)shaping policy & practice, assuming 'open source' AI democratizes access & addresses power concentration. It doesn't.
@smw.bsky.social, @davidthewid.bsky.social & I correct the record
nature.com/articles/s41...
The first is which platforms will be subject to the law. Albanese has said that TikTok, Instagram, Snapchat and X will be covered. But it's not clear when another platform might meet the criteria and be required to jettison its under-16 user base. Miraculously (for Google), YouTube was granted an exemption to the bill as a service that is "health and education related." And while YouTube surely does more than its share of educating, its primary use is for entertainment. And YouTube's embrace of short-form video through Shorts effectively makes it a one-to-one competitor with TikTok, Snap, and Instagram. In one stroke of his pen, Albanese may have just granted YouTube permanent dominance in short-form video among a generation of Australians.
Australia's ban on social media for teens under 16 is, among other things, an enormous gift to YouTube www.platformer.news/australia-so...
What will enter the #publicdomain in 2025? Each day through December we'll open a window in our advent-style calendar to reveal our highlights! publicdomainreview.org/features/ent...
(+ for the impatient/curious, we've included links to lists of new entrants.)
A table indicating the affiliation each of the publishers in our dataset has with OpenAI, whether the publisher's content was accessible to OpenAI's search crawler through its "robots.txt" file, and the accuracy of ChatGPT in referencing its content.
Ultimately, what we found is that ChatGPT search offers publishers the illusion of control. No publisher, regardless of degree of affiliation with OpenAI, was spared from inaccurate representations of its content. (7/9)
Interesting that licensing deals don't seem to make a difference
yes the latest version is here! papers.ssrn.com/sol3/papers....
One enduring complication with all this is that scraping happens all the time for reasons that people *don't* find inherently objectionable, and in fact support: the Wayback Machine, all kinds of public health and extremism research, etc. The mistake was assuming that goodwill transfers.
A wonderful paper, very worth reading!!!
The unauthorized removal of copyright management information (CMI) from copyrighted works as part of AI training (although a statutory violation) does not produce concrete harm sufficient for standing in the absence of dissemination: www.wired.com/story/opena-...
Parents: lying is bad
Also parents: if the ticket guy asks, you're 11
This is one of the most succinct summaries of why copyright cannot solve all of our ethical concerns around AI labor displacement that I have read to date; highly recommend reading: papers.ssrn.com/sol3/papers.... @caryscraig.bsky.social
tldr: my job talk paper (see below) argues that if the "public benefit" of generative AI (see 4th fair use factor) is that it democratizes cultural production, then AI vendors should stop preventing users from engaging in fair uses of copyrighted works >> papers.ssrn.com/sol3/papers....