Screenshot of an academic paper titled "The Algorithmic Gaze of Image Quality Assessment: An Audit and Trace Ethnography of the LAION-Aesthetics Predictor" authored by Jordan Taylor, William Agnew, Maarten Sap, Sarah E. Fox, and Haiyi Zhu
What is a "high-quality" or "aesthetic" image according to generative AI developers?
Happy to share that our investigation of the LAION-Aesthetics Predictor has been accepted at #FAccT2026! 🧵 (1/5)
Take a look at a preprint here: arxiv.org/abs/2601.09896
10.03.2026 11:52
Likes 21 · Reposts 5 · Replies 1 · Quotes 0
New paper from team @aial.ie! aial.ie/research/gpa...
EU's AI Act Article 53(1)(d) obliges GPAI model providers to publicly provide a 'summary' of their model's training data. The team assessed published summaries along 6 dimensions & found that all big providers failed on all 6.
1/
05.03.2026 18:04
Likes 127 · Reposts 74 · Replies 2 · Quotes 3
to stay employed at the margins of academia are relentless and unforgiving. Hopefully soon I can find some stability or the peace to quit this field. 2/2
03.03.2026 01:56
Likes 5 · Reposts 0 · Replies 2 · Quotes 0
5/5 facct papers accepted, and also 2/2 chi posters (for some reason much more competitive than you might think). I'm pretty burnt out though, I haven't gotten a single phone call in two years of being on the job market, and the pressures to publish lots while also piecing together small grants 1/2
03.03.2026 01:56
Likes 14 · Reposts 1 · Replies 1 · Quotes 2
the pipeline of technology from the war on terror to surveilling and hurting protesters in the US should make it crystal clear that it's only a matter of time until an AI model is deciding whether to shoot/arrest/pepper spray/etc. you. 2/2
02.03.2026 12:07
Likes 2 · Reposts 0 · Replies 0 · Quotes 0
The split screen of AI companies capitulating to the department of war and the US starting a war because it can is surreal but unsurprising. Even if you're lucky enough to not be in a country where AI is being deployed for war now, 1/2
02.03.2026 12:07
Likes 3 · Reposts 0 · Replies 1 · Quotes 0
and that, my fellow academics, is one reason not to hire war criminals: so they can't show up years post (their) war arguing in favor of the scaffolding behind their own atrocities with the imprimatur and status of your university
23.02.2026 23:49
Likes 2475 · Reposts 494 · Replies 46 · Quotes 6
Not yet but keep an eye on chi posters!
20.02.2026 16:09
Likes 1 · Reposts 0 · Replies 1 · Quotes 0
I've got a satirical paper on ethically aligning an AI used to kill people in the works but I'm worried I'm gonna get scooped by (mis)Anthropic
20.02.2026 12:09
Likes 2 · Reposts 0 · Replies 1 · Quotes 0
My current tell for AI written content is self-importance, excessive formatting, and being a yapper.
19.02.2026 12:03
Likes 2 · Reposts 0 · Replies 0 · Quotes 0
CHI'26 Workshop on Developing Standards and Documentation For LLM Use as Simulated Research Participants
Workshop Motivation
We have extended the submission deadline to February 20th! Authors will be invited to collaborate on a position paper on standards for LLM use in UX research, documentation, and validation. sites.google.com/andrew.cmu.e...
11.02.2026 23:58
Likes 0 · Reposts 0 · Replies 0 · Quotes 0
The Workshop on Developing Standards and Documentation For LLM Use in HCI Human Subjects Research aims to bring the HCI community together to develop standards, guidance, and documentation for the use of large language models (LLMs) as simulated research participants. 1/2
11.02.2026 23:58
Likes 2 · Reposts 1 · Replies 1 · Quotes 0
U.K. might lose a prime minister because a guy who worked for him knew another guy who hung out with Epstein. Meanwhile the U.S. opposition party is telling our President, who was Epstein's best friend, that his secret police should get better training so their public street murders look less messy.
09.02.2026 16:07
Likes 32624 · Reposts 9752 · Replies 528 · Quotes 350
HB 2599 (AI in therapy) oral and written testimony
Oral testimony
Chair Bronoske, Ranking Member Schmick, members of the committee,
I'm Jon Pincus of Bellevue. I run the Nexus of Privacy newsletter, served on the state Automated Decision Systems Wor...
I testified live on HB 2599, and so did @willie-agnew.bsky.social
I also sent in extensive written testimony -- thanks @wolvendamien.bsky.social @histoftech.bsky.social @emilymbender.bsky.social @anthropunk.bsky.social for all the feedback on this!
privacy.thenexus.today/hb-2599-hw/
03.02.2026 19:52
Likes 11 · Reposts 7 · Replies 4 · Quotes 0
This bill is similar to legislation in Nevada and Illinois, and with a few tweaks would provide a powerful tool to shield Washingtonians from harm! 2/2
29.01.2026 12:02
Likes 1 · Reposts 0 · Replies 0 · Quotes 0
House Health Care & Wellness - TVW
Public Hearing:
Honored to have remotely testified at the WA House Committee on Health Care & Wellness hearing in favor of HB2599, which would place thoughtful restrictions on AI use in therapy and restrict AI from providing therapy. tvw.org/video/house-... 1/2
29.01.2026 12:02
Likes 5 · Reposts 0 · Replies 1 · Quotes 0
Something I do for fun is "inspect source" on AI startup websites, and I often find 2000+ lines of code for a site that is 2 pages of text, some hyperlinks, and a couple of logo placements. The website appears to work, but good luck understanding, modifying, or verifying anything about that mess.
29.01.2026 00:09
Likes 4 · Reposts 1 · Replies 0 · Quotes 0
3 more days until the submission deadline for our AI for Peace workshop @ ICLR 2026.
Check the details at the link below.
Looking forward to receiving your submissions! Let's make it a great workshop together and a place for meaningful discussion on this rarely touched but very important topic!
27.01.2026 07:04
Likes 12 · Reposts 5 · Replies 0 · Quotes 0
We invite authors to submit perspectives, extended abstracts, and position papers between one and four pages (excluding references) on LLM use in UX research. Authors will be invited to collaborate on a position paper on standards for LLM in UX research. sites.google.com/andrew.cmu.e... 2/2
27.01.2026 01:55
Likes 0 · Reposts 1 · Replies 0 · Quotes 0
The Workshop on Developing Standards and Documentation For LLM Use in HCI Human Subjects Research aims to bring the HCI community together to develop standards, guidance, and documentation for the use of large language models (LLMs) as simulated research participants. 1/2
27.01.2026 01:55
Likes 5 · Reposts 0 · Replies 1 · Quotes 0
You still have 8 days to submit your work to our AI For Peace workshop @ ICLR 2026! 📢📢📢
22.01.2026 09:00
Likes 6 · Reposts 2 · Replies 0 · Quotes 1
I've really been inspired by grassroots sousveillance of ICE. How can computer scientists support this more? There aren't often chances to use what our field is inherently so good at for liberatory purposes.
20.01.2026 12:02
Likes 2 · Reposts 1 · Replies 1 · Quotes 0
I worry about what it's like to have experienced addiction, delusions, or intense relationships with chatbots, successfully get out of it, then work somewhere that forces you to use them (or just to use Google to search). Like trying to be sober with an employer that's gone all in on the Ballmer curve.
18.01.2026 12:05
Likes 4 · Reposts 0 · Replies 2 · Quotes 0
About the PhD:
Audits and evaluation of AI systems — and the broader context that AI systems operate in — have become central to conceptualising, quantifying, measuring and understanding the operations, failures, limitations, underlying assumptions, and downstream societal implications of AI systems. Existing AI audit and evaluation efforts are fractured, conducted in a siloed and ad-hoc manner, with little deliberation and reflection around conceptual rigour and methodological validity.
This PhD is for a candidate who is passionate about exploring what conceptually cogent, methodologically sound, and well-founded AI evaluation and safety research might look like. This requires grappling with questions such as:
What does it mean to represent "ground truth" in proxies, synthetic data, or computational simulation?
How do we reliably measure abstract and complex phenomena?
What are the epistemological or methodological implications of quantification and measurement approaches we choose to employ? Particularly, what underlying presuppositions, values, or perspectives do they entail?
How do we ensure the lived experiences of impacted communities play a critical role in the development and justification of measurement metrics and proxies?
Through exploration of these questions, the candidate is expected to engage with core concepts in the philosophy of science, history of science, Black feminist epistemologies, and similar schools of thought to develop an in-depth understanding of existing practices with the aim of applying it to advance shared standards and best practice in AI evaluation.
The candidate is expected to integrate empirical (for example, through analysis or evaluation of existing benchmarks) or practical (for example, by executing evaluation of AI systems) components into the overall work.
are you displeased with today's AI safety evaluation landscape and curious about what greater conceptual clarity, methodological soundness, and rigour in AI evaluation could look like? if so, consider coming to Dublin to pursue a PhD with me
apply here: aial.ie/hiring/phd-a...
pls repost
15.01.2026 11:55
Likes 190 · Reposts 140 · Replies 6 · Quotes 12
If a reviewer claims that a paper isn't novel, they should cite the paper(s) they're saying are similar, and ACs should push back on them if they don't do this
15.01.2026 12:02
Likes 5 · Reposts 0 · Replies 0 · Quotes 0
Normally after working right through the holidays and pulling a really late night for facct I'd take a day off, but it's right back to job and grant apps. Postdoc life is a little rough in some ways
14.01.2026 19:57
Likes 3 · Reposts 0 · Replies 0 · Quotes 0
Excited to have submitted 5 facct papers last night! They covered everything from technical audits of different AI systems to ethnographies to policy papers to critiques of power and regulation. Looking forward to sharing them in coming months!
14.01.2026 19:27
Likes 4 · Reposts 0 · Replies 1 · Quotes 0
The ability of chatbots to become sycophantic best friends with, and co-create delusional spirals with, a sizable chunk of the population also makes me worried about a future when we can better control these bots and these abilities are used for propaganda
13.01.2026 12:02
Likes 2 · Reposts 0 · Replies 0 · Quotes 0
I made a functioning urinal this summer and it's gotten me so many dates. The boys love public infrastructure
12.01.2026 17:16
Likes 0 · Reposts 0 · Replies 0 · Quotes 0