Lumi Olteanu

@lumiip

lawyer morphed to academic; teaching and researching IP law and other things; astronomy enthusiast. Assistant Professor @Warwick Law School

312
Followers
136
Following
39
Posts
26.09.2023
Joined

Latest posts by Lumi Olteanu @lumiip

I am planning to go too! Thanks for the reminder and first impressions

08.03.2026 18:21 👍 1 🔁 0 💬 0 📌 0

Until we go back to in-person exams, we're toast.

02.03.2026 08:11 👍 2 🔁 0 💬 0 📌 0

some of you are looksmaxxing when you need to be booksmaxxing

18.02.2026 19:41 👍 14709 🔁 2448 💬 211 📌 192
Czech Court Denies Copyright Protection of AI-Generated Work in First Ever Ruling In the first Czech case on copyright and AI, the Prague Municipal Court ("Court") has refused to recognise copyright protection of an AI-generated image. In a very brief decision, the Court found an a...

Thanks, but the first European decision on copyright in AI outputs was the Czech one in 2024: www.twobirds.com/en/insights/...

Perhaps less known, as it wasn't decided in a 'Western' jurisdiction, but that never mattered to me. Still good to know.

17.02.2026 08:19 👍 4 🔁 1 💬 1 📌 0

The unbearable sensation of nausea I get whenever I see the words 'patent', 'AI' and 'innovation' in the same sentence…

12.02.2026 19:38 👍 4 🔁 0 💬 0 📌 0
Meta Is Blocking Links to ICE List on Facebook, Instagram, and Threads Users of Meta’s social platforms can no longer share links to ICE List, a website listing what it claims are the names of thousands of DHS employees.

Meta is blocking access to ICE List, which doesn't dox people but does identify some ICE agents. But you can find it and contribute to it at wiki[dot]icelist[dot]is. Because people should know if they are hiring lawless thugs.

www.wired.com/story/meta-i...

28.01.2026 22:48 👍 141 🔁 82 💬 21 📌 10
Page 10-11 of the linked PDF

On AI's 'mediocrity trap' – experiments indicate that while AI helps the less skilled make something passable, the highly skilled don't use it to produce something better than they could have; they produce something ok, but lose motivation to make it great. www.jin-li.org/uploads/1/1/...

26.11.2025 21:18 👍 190 🔁 70 💬 2 📌 7
Preview
OpenAI loses song lyrics copyright case in German court – DW – 11/11/2025 OpenAI lost a copyright infringement case in a lower German court for using popular song lyrics in its ChatGPT language model without paying royalties.

www.dw.com/en/openai-lo...

11.11.2025 18:20 👍 1 🔁 0 💬 0 📌 0

Signed 😓

04.11.2025 21:24 👍 1 🔁 0 💬 0 📌 0
After Patent Republic – Hyo Yoon Kang (University of Warwick) – ISHTIP

I am giving a talk in a week on "After Patent Republic" as part of the International Society for the History and Theory of IP @ishtip.bsky.social seminar series.

All welcome and you can register here (it's online)

15.10.2025 07:14 👍 9 🔁 3 💬 0 📌 0

Bovino could be the inspiration for Colonel Lockjaw from One Battle After Another.

12.10.2025 10:30 👍 0 🔁 0 💬 0 📌 0
Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users – in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry's marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.

Figure 1. A cartoon set theoretic view on various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed source models, e.g. OpenAI’s ChatGPT and Apple’s Siri, we cannot verify their implementation and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).

Table 1. Below some of the typical terminological disarray is untangled. Importantly, none of these terms are orthogonal nor do they exclusively pick out the types of products we may wish to critique or proscribe.

Protecting the Ecosystem of Human Knowledge: Five Principles

Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...

We unpick the tech industry's marketing, hype, & harm; and we argue for safeguarding higher education, critical thinking, expertise, academic freedom, & scientific integrity.
1/n

06.09.2025 08:13 👍 3790 🔁 1897 💬 110 📌 390

Three years into the AI Wars and not a single company has been brought down by copyright litigation. By this time into the P2P Wars, Napster had already been obliterated. When will the people who were assured that copyright would destroy AI realise they were lied to?

09.10.2025 20:50 👍 32 🔁 10 💬 2 📌 0
Munich I Regional Court preliminarily indicates OpenAI liable for copyright infringement of song lyrics. Decision on Nov. 11, 2025. Florian Mueller of AI Fray provides an excellent summary of the trial held by Landgericht München I (Munich I Regional Court) in the lawsuit filed by GEMA, the collecting society representing autho…

Some good news from Germany

chatgptiseatingtheworld.com/2025/10/01/m...

01.10.2025 21:26 👍 1 🔁 0 💬 0 📌 0
Vacancy – PhD Candidate Law – Generative AI in the Media The overall goal of this project is to study the role and implications that the new regulatory framework has in realising public values and influencing the power dynamics and legal relationships in th...

🚨 PhD Vacancy – Generative AI in the Media 🚨

Exciting opportunity to join us on the @algosoc.org project, working w/ myself, @cgoanta.bsky.social & @natalihelberger.bsky.social

📅 Closing date: 31 October 2025
🔗 More info: werkenbij.uva.nl/en/vacancies...

Cc: @ivir-uva.bsky.social

01.10.2025 07:47 👍 7 🔁 6 💬 0 📌 0

Here lies human civilization. They were watching things on television that were different from what was happening.

29.09.2025 03:25 👍 111 🔁 19 💬 4 📌 0
CURIA - Documents

Questionable reasoning in the digital age. Case remanded: curia.europa.eu/juris/docume...

28.09.2025 17:38 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Austrian oil company OMV objects to its brand being associated with vagina care products 🤪 The oil company claims dilution over the 'OMV! By Vagisil' word mark. The General Court found a link couldn't be ruled out bc products might be sold at filling stations.

28.09.2025 17:38 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

This article will feature on my course intro slides; ty

28.09.2025 12:17 👍 1 🔁 0 💬 1 📌 0

Same 🥹

24.09.2025 08:13 👍 1 🔁 0 💬 0 📌 0

MISERABLE GHOSTS - bloody university rankings follow everything around like a bad smell

22.09.2025 08:50 👍 3 🔁 2 💬 0 📌 0

I agree with everything you say. This poses broader questions about the public domain coming under attack (even more than it already is) if such registrations are allowed.
It is crazy how some regard public domain items as res nullius or res derelictae rather than inalienable public property.

19.09.2025 14:56 👍 2 🔁 0 💬 1 📌 0

My mom called me last night about the Kimmel firing, saying "This is how Hitler got started!" I quickly responded, "By firing the late night tv hosts?"

Turns out, mom was right.

18.09.2025 20:27 👍 5015 🔁 2300 💬 58 📌 79

Now this is what I call a textbook example of trade mark tarnishment where the US parody defence would not work :))

16.09.2025 22:32 👍 3 🔁 0 💬 0 📌 0
Cox on Normative Uncertainty and Legal AI Courtney M. Cox (Fordham University School of Law) has posted Non-Herculean Data: A Philosophical Intervention in a Technical Debate about Judicial Opinions as Data Sources (Proceedings of the 20th…

Cox on Normative Uncertainty and Legal AI, buff.ly/S3sT27b - Courtney M. Cox (Fordham University School of Law) has posted Non-Herculean Data: A Philosophical Intervention in a Technical Debate about Judicial Opinions as Data Sources (Proc. 20th Intern. Conf. on AI & Law (forthcoming 2025)) on SSRN.

16.09.2025 15:55 👍 4 🔁 3 💬 1 📌 0
AI in law: breaking down the latest developments On 6 June 2025, the Divisional Court gave judgment in R (Ayinde) v. London Borough of Haringey [2025] EWHC 1383 (Admin). This was the first case that Court has considered concerning the misuse of arti...

It's becoming endemic…

Here's a breakdown up to July 2025, which will need to be updated regularly:

www.barcouncil.org.uk/resource/ai-...

15.09.2025 18:00 👍 1 🔁 1 💬 0 📌 0

Oxford University Press is introducing an AI summarisation & quizzing asst. into their law textbook 'trove': it took considerable effort from our Fac+Library to have them engineer in a license-level off switch (they initially refused!). They did not clock how important the skill of summarisation is.

14.09.2025 12:00 👍 104 🔁 31 💬 3 📌 6

Sorry to hear this, Ali! Looking forward to reading your findings when the bureaucracy gods allow it…

03.09.2025 13:04 👍 1 🔁 0 💬 1 📌 0

i wish brands would include more technical information in their product descriptions. i don't need to know this was inspired by timeless italian men summering in nantucket who could dress up or down. i want to know the fabric weight, leather tanning method, seam construction, etc.

01.09.2025 18:48 👍 6787 🔁 372 💬 124 📌 41