You can’t make this stuff up. “Extremely personalised” has never felt more threatening (and invasive, given the ‘talk to GPT about anything in your life and world‘ discourse). Seems likely that a ‘personalized‘ GPT sales-bot is going to be shoved down our throats. How unimaginative.
29.09.2025 10:57
👍 1
🔁 0
💬 1
📌 0
Abstract: Under the banner of progress, products have been uncritically adopted or
even imposed on users — in past centuries with tobacco and combustion engines, and in
the 21st with social media. For these collective blunders, we now regret our involvement or
apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we
are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not
considered a valid position to reject AI technologies in our teaching and research. This
is why in June 2025, we co-authored an Open Letter calling on our employers to reverse
and rethink their stance on uncritically adopting AI technologies. In this position piece,
we expound on why universities must take their role seriously to a) counter the technology
industry’s marketing, hype, and harm; and to b) safeguard higher education, critical
thinking, expertise, academic freedom, and scientific integrity. We include pointers to
relevant work to further inform our colleagues.
Figure 1. A cartoon set theoretic view on various terms (see Table 1) used when discussing the superset AI
(black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are
in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are
both generative and ANNs. In the case of proprietary closed-source models, e.g. OpenAI’s ChatGPT and
Apple’s Siri, we cannot verify their implementation and so academics can only make educated guesses (cf.
Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al.
2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).
Table 1. Below some of the typical terminological disarray is untangled. Importantly, none of these terms
are orthogonal nor do they exclusively pick out the types of products we may wish to critique or proscribe.
Protecting the Ecosystem of Human Knowledge: Five Principles
Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...
We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical
thinking, expertise, academic freedom, & scientific integrity.
1/n
06.09.2025 08:13
👍 3791
🔁 1897
💬 110
📌 390
Hahaha no doubt!
25.05.2025 12:57
👍 0
🔁 0
💬 0
📌 0
Let me get this straight: I’m using a messaging app. Someone thought it would be a good idea to make the app begin to spam me with ‘useful tips’. But they’re also like, ‘you’re probably going to hate this, so we’ll give you an option to block us’. What was the point of this exercise?! Madness.
24.05.2025 13:07
👍 2
🔁 0
💬 1
📌 0
Digital Metaphors and Algorithmic Myths in the Past, Present, and Future
All welcome to our event with Audrey Borowski and Vassilis Galanos on the role of mythmaking and metaphors in contemporary digital culture!
🎆 Are you interested in the role of mythmaking and metaphors in contemporary and historical research and deployment of AI? Join us on 6th June, 1-2.30pm, for exciting talks by Dr Audrey Borowski and @velocitygravity.bsky.social, followed by a Q&A
🖊️ sign up:
www.eventbrite.co.uk/e/digital-me...
16.05.2025 14:40
👍 5
🔁 4
💬 0
📌 3
Congratulations Benedetta!! 👏 Wonderful to see this published!
08.05.2025 19:12
👍 1
🔁 0
💬 0
📌 0
Such fun to be surprised by libraries. Picked up a Tintin to sink into some old favorites and I guess I’m now going to be familiarizing myself with Scots!
26.04.2025 11:34
👍 2
🔁 0
💬 0
📌 0
When I eventually defend my thesis I want to channel the confidence of my 4yo pulling out a drawing from inside her shoe. Why did you put it in there? “Just because”. Why would you choose to fold it into your sock?! “Because I could!!”. Exasperated huff. Mic drop.
03.04.2025 19:42
👍 3
🔁 0
💬 0
📌 0
Green to blue gradient image with text reading: 'PhD Studentship: Examining the ethical implications of natural language processing. Supervised by Dr Zeerak Talat, this project will consider how NLP and machine learning are currently falling short in engaging with ethical development practices and examine how such practices can be improved for the benefit of end-users and society more broadly. Application Deadline: 15 March 2025'
📢 Closing this week! 📢
Supervised by @zeerak.bsky.social and starting Sept 2025, the project will examine the ethical implications of natural language processing.
Apply now ▶️ edin.ac/40PAXEq
11.03.2025 12:40
👍 11
🔁 6
💬 1
📌 0
Gotta appreciate the straight-up imperialist ask here. None of that “we are the savior and here for your own good” facade. We know you’re at war but we want your minerals. Do you like your minerals or do you like your internet infrastructure? Choose.
24.02.2025 20:02
👍 0
🔁 0
💬 0
📌 0
Calling my dentist’s office and having to “talk” to a self-professed AI who literally yells “I’m here to HELP!” definitely tops my bizarre tech interactions list. Poor Emily the AI couldn’t quite tell when I’d finished speaking. I’m confused, it’s confused. Unplanned inefficiency at its best.
21.02.2025 17:14
👍 0
🔁 0
💬 0
📌 0
“To err is human but to mess things up seriously one needs a computer”. Loving how on the nose Yanis Varoufakis’ Technofeudalism is. This is going to be my tongue-in-cheek response to folk who tell me tech/AI/(latest phrase) in xyz domain is not problematic because “people are flawed too”.
15.02.2025 09:55
👍 2
🔁 0
💬 0
📌 0
Text on a blue background with the details of the Red-Teaming Generative AI Harm event on February 20.
On Thursday, Feb 20 @ 1 pm ET, Lama Ahmad, @camillefrancois.bsky.social, Tarleton Gillespie, @briana-v.bsky.social, and @borhane.bsky.social will examine red-teaming’s place in the evolving landscape of genAI evaluation and governance. RSVP and join us online! datasociety.net/events/red-t...
05.02.2025 19:14
👍 20
🔁 8
💬 0
📌 2
highly sensitive database and federal payments system. The Doge team is being led by a group of six ‘young and inexperienced’ engineers aged 19 to 24, one of whom is still in college.” Let’s hear more about the objectivity of computer scientists and how they are here to “solve the world’s problems”.
03.02.2025 20:26
👍 1
🔁 0
💬 0
📌 0
If there was ever a time to focus on the education of engineers, and justify why the system needs a complete overhaul, this would be it: “Workers for Doge, an unofficial government department with no congressionally approved mandate, have gained access to U.S. Treasury’s
03.02.2025 20:26
👍 0
🔁 0
💬 1
📌 0
“Duolingo but for anxiety”. What does that even mean?? Am I going to learn about anxiety in different languages? Tech bros tech bro-ing hard in their alternate universe where they are genius “disrupters”. And the 420 reference? Very subtle, har har.
02.01.2025 20:16
👍 3
🔁 0
💬 0
📌 0