For what it's worth, this is my full quote. I would love to understand from @ukri.org why this call has been restricted to 4 weeks (with no advance warning), going against its own EDI commitments. The response so far doesn't explain why only 4 weeks, nor why not 8 weeks.
06.03.2026 10:00
Likes: 17 · Reposts: 11 · Replies: 1 · Quotes: 1
26.02.2026 09:45
Likes: 11342 · Reposts: 3194 · Replies: 62 · Quotes: 42
The choice of graph with the nice line going up and the headline really are working very hard to make the evidence seem very different to what's written in the paper.
25.02.2026 13:55
Likes: 1 · Reposts: 0 · Replies: 0 · Quotes: 0
Is there really a 'quiet revival' of religion among Gen Z?
A fierce debate is taking place about whether there really has been a revival in Christianity.
I had no idea YouGov polls aren't random sampling and are basically MTurk for politics.
BBC News - Is there really a 'quiet revival' of religion among Gen Z?
www.bbc.co.uk/news/article...
22.02.2026 09:52
Likes: 1 · Reposts: 0 · Replies: 0 · Quotes: 0
Spoiling a lot of takes but begging a large number of other questions... "Worsening graduate fortunes, it turns out, are a particularly British problem"
www.ft.com/content/649d...
20.02.2026 07:56
Likes: 320 · Reposts: 146 · Replies: 29 · Quotes: 26
I drew a face on a paper plate. Who could even begin to understand how it perceives the world or what it dreams of?
11.02.2026 04:39
Likes: 1940 · Reposts: 422 · Replies: 40 · Quotes: 6
So that's two Tucson Consciousness Conference regulars in the Epstein files so far
05.02.2026 12:36
Likes: 3 · Reposts: 1 · Replies: 0 · Quotes: 0
Stuart Hameroff, anesthesiologist. 16/
bsky.app/profile/psyc...
en.wikipedia.org/wiki/Stuart_...
01.02.2026 05:47
Likes: 120 · Reposts: 15 · Replies: 2 · Quotes: 3
Academics vying for a spot in Epstein's world. There are so many. I feel the need to make a thread, so I don't keep confusing them. 1/
31.01.2026 21:02
Likes: 2941 · Reposts: 1445 · Replies: 75 · Quotes: 222
Tesla's own Robotaxi data confirms crash rate 3x worse than humans even with monitor
Tesla's nascent robotaxi program is off to a rough start. New NHTSA crash data, combined with Tesla's new disclosure of...
New NHTSA crash data, combined with Tesla's new disclosure of robotaxi mileage, reveals Tesla's autonomous vehicles are crashing at a rate much higher than human drivers, and that's with a safety monitor in every car.
Tesla has reported 9 crashes involving its robotaxi fleet in Austin, TX.
/1
30.01.2026 17:09
Likes: 2285 · Reposts: 975 · Replies: 111 · Quotes: 115
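A back-of-the-envelope sketch of the rate comparison the post above describes. Only the 9 reported Austin crashes come from the post; the fleet mileage and the human baseline below are placeholder assumptions chosen purely for illustration, not Tesla's or NHTSA's actual figures:

```python
def crashes_per_million_miles(crashes: int, miles: float) -> float:
    """Normalise a raw crash count to crashes per million vehicle-miles."""
    return crashes / (miles / 1_000_000)

reported_crashes = 9             # from the post (Austin robotaxi fleet)
assumed_fleet_miles = 1_500_000  # placeholder: the disclosed mileage is not given here
human_baseline = 2.1             # placeholder: assumed human crashes per million miles

robotaxi_rate = crashes_per_million_miles(reported_crashes, assumed_fleet_miles)
ratio = robotaxi_rate / human_baseline
print(f"robotaxi: {robotaxi_rate:.1f} crashes per million miles "
      f"({ratio:.1f}x the assumed human baseline)")
```

With these made-up inputs the ratio lands near 3x, matching the headline's shape; the real comparison depends entirely on the actual disclosed mileage and baseline used.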
*slowly places the UK into the sea*
29.01.2026 14:42
Likes: 2 · Reposts: 0 · Replies: 0 · Quotes: 0
*slowly places 90% complete MRC draft into the Recycle Bin*
29.01.2026 14:41
Likes: 1 · Reposts: 0 · Replies: 0 · Quotes: 1
Absolutely phenomenal thread
29.01.2026 06:53
Likes: 30 · Reposts: 29 · Replies: 0 · Quotes: 0
This government clearly don't really understand the complexity or the danger of the situation. They show no sign of beginning to understand, and every sign of asking the monorail salesman how public transport should work. It's going to be an uphill struggle, but don't leave this to politicians.
28.01.2026 13:26
Likes: 841 · Reposts: 126 · Replies: 29 · Quotes: 16
Poverty is deepening.
Our #UKPoverty2026 report was launched this morning.
People in very deep poverty now make up the biggest group of people in poverty, at 6.8 million people.
This is unacceptable for the fifth richest country in the world, and it has consequences.
27.01.2026 07:53
Likes: 189 · Reposts: 172 · Replies: 6 · Quotes: 31
I think the solution to this worry is to knock all this dumb shit off
26.01.2026 08:37
Likes: 1454 · Reposts: 123 · Replies: 29 · Quotes: 2
So very sorry, Moose x
22.01.2026 12:00
Likes: 0 · Reposts: 0 · Replies: 0 · Quotes: 0
The list of most relevant themes we identified is:
1. Rhetoric of inevitability and technological determinism: presenting the adoption and use of
(generative) AI as a fait accompli.
2. Exaggerated narratives: overstating the general capabilities of the technology, or leaving out that
certain seemingly impressive capabilities can only be achieved under very specific experimental
conditions.
3. Spurious comparison to human intelligence, or anthropomorphism: presenting AI as if it thinks or
reasons like a human.
4. Ethics and critical washing: presenting AI as being ethically or critically examined but doing so only
superficially and inconsequentially.
5. Wishful thinking and uncertain feasibility: assuming desired outcomes or functionality despite lacking
realistic evidence they can be achieved.
6. GenAI is presented as indispensable: portraying AI as essential even when simpler or non-AI solutions
are sufficient.
7. Unrealistic and ill-defined conditions: formulating requirements for adoption and use that are
functionally impossible or too demanding to be met, psychologically implausible to follow, or set
unclear boundaries for acceptable and unacceptable behaviour, which could easily create
inconsistencies.
8. Resources as propaganda: resources for students, faculty, and other stakeholders are made available
but only for incentivizing different degrees of use of genAI.
9. AI Overkill: substitution or replacement of tasks for which the technology was not designed; from
tutoring to teaching to research, everything must now be done with AI, even when it is not adequate.
Due to space limitations, rather than discussing all of the themes superficially, this essay addresses only the
first four, which, in our view, are the most critical and the most urgently in need of scrutiny.
For all those involved in drafting so-called AI guidelines while being overwhelmed with nonsense, this is a lifesaver. Great work by Dagmar and Ariel!
Resisting Enchantment and Determinism: How to critically engage with AI university guidelines. doi.org/10.5281/zeno...
18.01.2026 06:47
Likes: 198 · Reposts: 120 · Replies: 3 · Quotes: 3
the robot also doubling over as if it kicked itself in the balls too makes this ten times funnier
27.12.2025 17:56
Likes: 1433 · Reposts: 279 · Replies: 16 · Quotes: 7
Scientists now believe that the human brain--a magic goo for worrying--can also be used to do other things
08.12.2025 03:53
Likes: 5402 · Reposts: 926 · Replies: 62 · Quotes: 52
The 'Labour' Party finds itself arguing ordinary workers, carers and others are 'takers'.
07.12.2025 09:23
Likes: 575 · Reposts: 185 · Replies: 12 · Quotes: 5
🚨 Fully funded PhD scholarship to work with me, Ali Mazaheri, Dhruv Parekh, and Katrien Segaert at the University of Birmingham on new brain-based tools to improve diagnoses of consciousness after severe brain injury.
🚨 Deadline: 9th January 2026
Full details and application form: lnkd.in/ejZkfFbW
03.12.2025 09:56
Likes: 8 · Reposts: 8 · Replies: 0 · Quotes: 0
Asking informally: does anyone know someone who might be interested in a postdoc focused on understanding changes in memory representations driven by attention using EEG? ➡️ Thanks!
01.12.2025 05:03
Likes: 17 · Reposts: 26 · Replies: 1 · Quotes: 0
Baby, Sporty, Ginger, Scary, and "what did I come in here for?" x
25.11.2025 10:54
Likes: 92 · Reposts: 19 · Replies: 9 · Quotes: 0
Oh look, my university's about to lose Β£21 million because Labour want to appease some mythical racists who won't vote for them anyway.
24.11.2025 11:08
Likes: 41 · Reposts: 17 · Replies: 0 · Quotes: 0
"With dismay we witness our university leadership making soulless choices that hollow out our institutions from within and erode the critical and self-reflective fabric of academia".
[from Guest et al., 'Against the Uncritical Adoption of "AI" Technologies in Academia']
22.09.2025 12:08
Likes: 81 · Reposts: 27 · Replies: 2 · Quotes: 2
Do you recommend any particular platform / approach for tracking usage of open access materials? Not just download counts but also asking some simple information of the user before they can download (like EEGLAB, Fieldtrip, SPM, etc all do). Any pointers much appreciated!
17.11.2025 16:18
Likes: 1 · Reposts: 2 · Replies: 0 · Quotes: 0
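The question above is about platforms, but the mechanics it describes (ask a few simple questions before the download, then count) can be sketched in a few lines. This is an illustrative stdlib-only sketch, not a recommendation of any particular platform; the `DownloadGate` class, its fields, and the example URL are all invented for illustration:

```python
import csv
import io
from datetime import datetime, timezone

class DownloadGate:
    """Record who requests a file before handing out a download link."""

    def __init__(self):
        self.log = []  # one dict per download request

    def request_download(self, name: str, affiliation: str, use_case: str) -> str:
        """Require a short form before returning the download URL."""
        if not (name and affiliation):
            raise ValueError("name and affiliation are required")
        self.log.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "name": name,
            "affiliation": affiliation,
            "use_case": use_case,
        })
        # In a real deployment this would be a signed, expiring URL
        # served by whatever platform hosts the materials.
        return "https://example.org/downloads/toolbox-v1.zip"

    def download_count(self) -> int:
        """Not just raw counts: self.log also retains who asked and why."""
        return len(self.log)

    def export_csv(self) -> str:
        """Dump the request log as CSV for later usage analysis."""
        buf = io.StringIO()
        writer = csv.DictWriter(
            buf, fieldnames=["timestamp", "name", "affiliation", "use_case"])
        writer.writeheader()
        writer.writerows(self.log)
        return buf.getvalue()
```

In practice this logic would sit behind a web form (as EEGLAB, FieldTrip, and SPM do); the sketch only shows the record-before-release pattern the post is asking about.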
The amount of AI-generated art in slides at this conference, primarily used by older scientists, is killing me. Scientists, please: don't use these AI platforms to make your figures or slides. They look bad, and I have yet to see them meaningfully improve the message of a talk.
31.10.2025 03:09
Likes: 1765 · Reposts: 357 · Replies: 38 · Quotes: 31