Band and song of the day: Múr - Heimsslit
They deserve more listeners.
www.youtube.com/watch?v=UMS6...
A new episode of Future Histories LIVE!
This time I talk with @annanosthoff.bsky.social (@criticaldatalab.bsky.social; University of Oldenburg) about her new book 'Kybernetik und Kritik: Eine Theorie digitaler Regierungskunst' (@suhrkamp.de).
Find it here:
buff.ly/ljnfHXe
#FutureHistories #Podcast
Just reading Abraham Kuyper's inaugural address at the VU. What a great thinker he was!
We played a gig yesterday with Paper Knife at Hafenbahnhof in Hamburg. It was a great night!
www.youtube.com/watch?v=n98N...
I watched a bunch of AI sessions at the WEF so you wouldn’t have to.
But you should read this.
One way to solve this would be by funding R&I networks, not on a project basis but on a continuous basis that safeguards career prospects. See: www.youtube.com/watch?v=36mJ...
I like the idea! Yet, it's still premised on the project format; science moves fast and often we want to collaborate on slightly different themes.
Instead, funding could be continuous: www.youtube.com/watch?v=36mJ...
Song of the day: some great Irish music!
Let's take another round
www.youtube.com/watch?v=rhjZ...
Peer review is becoming highly demotivating due to AI-generated papers. As they waste more and more of my time, I am increasingly tempted to simply decline review requests.
What great music! Made in Germany
www.youtube.com/watch?v=JQul...
Maybe of interest for #ichbinhanna @kubon.bsky.social @amreibahr.bsky.social and @woinactie.bsky.social ?
Join the webinar on Continuity funding for research and innovation, on January 7, 4pm CET. Message me for a link!
Continuity funding is an alternative to current R&I funding focused on competitive projects, instead supporting sustainable science.
Read more: news.dyne.org/rethinking-r...
The irony of machine learning turning into this GenAI monstrosity is that it would be wonderful to have a tutor who can check my work while I self-learn at times when teachers need sleep. Instead GenAI is not only disenfranchising educators, but creating "diseducation" systems for experts.
🌟 Work With Us! 🌟
Positions in the DFG project "Prädiktives Wissen ist Macht: Ethik und Recht kollektiver Privatheit" (Predictive Knowledge Is Power: Ethics and Law of Collective Privacy):
▶️ 1x 75% Ph.D. position in law (@hannahruschemeier.bsky.social)
▶️ 1x 100% postdoc position in philosophy (Rainer Mühlhoff).
All info: 👉 purposelimitation.ai/dfg-project/
Really glad to see this out! @apsmolenska.bsky.social and I give our account for @boell.de of why the future of the dollar is also crucial for climate policy
There’s a growing and deeply problematic tendency to use #genAI for political and historical education. Take this example from Germany, which translates as: «Hey AI, who should I vote for?» What’s typical here is that the AI is addressed not only as a pseudo-person but also as a kind of impartial judge.
1/
First experience of having to review an AI generated 'academic' paper... how bleak.
Research and innovation funding in Europe is broken. But, there is an alternative!
Together with @jaromil.bsky.social I'm working on an idea for continuity funding, for long-term, sustainable, and non-precarious R&I
Check it out here: news.dyne.org/rethinking-r...
I think, if anything, it shows how little universities are really 'ours,' as consultants often have more influence than collective decision-making.
Want a simple foundation from which to understand confusing topics like trade deficits, tariffs, and stablecoins? Check out my new video where I explain the international monetary system in under 15 minutes www.youtube.com/watch?v=dRHT...
Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry’s marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.
Figure 1. A cartoon set theoretic view on various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed source models, e.g. OpenAI’s ChatGPT and Apple’s Siri, we cannot verify their implementation and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).
Table 1. Below some of the typical terminological disarray is untangled. Importantly, none of these terms are orthogonal nor do they exclusively pick out the types of products we may wish to critique or proscribe.
Protecting the Ecosystem of Human Knowledge: Five Principles
Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...
We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical thinking, expertise, academic freedom, & scientific integrity.
1/n
New post 🚨 The Risk of More Reliable AI briandavidearp.substack.com/p/the-risk-o... -- Maybe hallucinations are good?
A typical financial market has *fundamental* traders who watch the world and *technical* traders who watch other traders. A striking feature of the Bitcoin market is the almost total absence of the former. This has increasingly serious implications for all of us www.asomo.co/p/the-whale-...
🤖 In other news, it seems all those influencers telling you your workplace HAS to get on the AI bandwagon or will be left behind don't wholly know what they're talking about.
fortune.com/2025/08/18/m...
This is in the replies, and it demonstrates a real issue: if you find profundity in an AI's output, you're responding to something profound a human being wrote that was vacuumed up. What ends up happening is that you thank the AI, and not only do you not thank the person, you have no idea they were involved.
Many EU member states are arguing for forcing WhatsApp to inspect all our photos w/AI. If the AI is in any “doubt” if it might be child pornography, your photo, location & other details get reported to Europol and a local police force. This is a terrible plan: berthub.eu/articles/pos... #chatcontrol
The six-part feature "Tech Bro Topia" on the ideologies of the tech billionaires (from Musk to Andreessen, from longtermism to #eacc) is now online at @deutschlandfunk.de.web.brid.gy. It was a pleasure to take part – many thanks for the invitation: www.deutschlandfunk.de/tech-bro-top...
I've just tried Google's NotebookLM and though I've never been too impressed by ChatGPT and the like, this seems to change a lot.
It can write original papers, grant applications, you name it, and accurately, because it only cross-references the texts you add.
How to deal with this?