@mariamurphy.bsky.social and I have written a short report on protecting the ‘Privacy’ in Privacy-Enhancing Technologies (PETs)
www.iccl.ie/news/new-icc...
@iccl.bsky.social @maynoothuniversity.ie
Thanks to the support of @sarahaapalainen.bsky.social, Delfine Gaillard and Ayça Dibekoğlu from @coe.int and all the Equality bodies involved in the project. A special thanks to Louise Hooper, Nele Roekens and Milla Vidina for feedback that improved this report.
Last month the @coe.int published a report written by @soizicpenicaud.bsky.social and me for equality bodies and other national human rights structures in Europe. It provides policy guidelines on AI and algorithm-driven discrimination.
edoc.coe.int/en/artificia...
@shamimmalekmian.bsky.social @ihrec.bsky.social
Finally, dug my own live example (from 12 June 2025) out of the screenshot pile. Unfortunately I didn't capture the question - it would've been something about when exactly I was supposed to cancel my TP status when switching over to a work permit.
I would think of this as "algorithm bias", which encompasses the various algorithmic elements, including the training process, that contribute to biased algorithmic systems.
A screenshot from a report I did for EDPB (data protection regulators in the EU) www.edpb.europa.eu/our-work-too...
I think it's also important to recognize that LLM "summaries" are not epistemically grounded in the ideas of the source material. When they're correct, it's more because *other* writing (in the training corpus) contains similar language. That makes it especially bad when applied to novel results.
Ethical AI is an oxymoron, like automated science
@ageaction.bsky.social @disabilityfed.bsky.social @claredotc0m.bsky.social @ocoireland.bsky.social
6. "Additional funding and resources to nine agencies responsible for protecting human rights under the EU AI Act."
@ihrec.bsky.social
www.iccl.ie/news/ireland...
5. "All public bodies and semi-state entities using AI in public services must publish annual evidence-based reports detailing benefits, disadvantages, and any inequalities identified. These reports should be made publicly accessible to ensure transparency and accountability."
@abeba.bsky.social
3. "Developing a national AI risk register within the national AI office to identify and monitor systemic risks across sectors."
4. "Introducing mandatory algorithmic impact assessments for high-risk AI systems in public services."
@soizicpenicaud.bsky.social
www.iccl.ie/press-releas...
2. Involvement of people affected by AI: "establishing a Citizens’ Assembly on Artificial Intelligence Digitalisation and Technology to facilitate inclusive public dialogue and democratic input on AI policy and ethics."
www.iccl.ie/news/submiss...
... We should treat it [#AIAct] as a minimum baseline for national AI regulation, not a maximum standard."
@iccl.bsky.social
Ireland's Joint Committee on Artificial Intelligence published its interim report yesterday. The Committee makes a number of recommendations, which include:
1. "Ireland must not shy away from the EU #AI Act or try to dilute it...
www.oireachtas.ie/en/press-cen...
12 December 2025

Thoughtfully Shaping Our Digital Future

To the parties forming the government in the Senate and House of Representatives, as well as the outgoing administration,

We are writing to you in recognition of your crucial responsibility for shaping current and future AI policy, overseeing digitization, and upholding public values. We are a coalition of scientists, experts, and representatives of civil society organizations. We believe it is essential to address these matters together.

This letter has two objectives:
a) To provide context for current plans, including the AI Delta Plan, the AIC4NL position paper, and the Invest-NL AI Deep Dive. These investment proposals often rely on assumptions that lack scientific evidence and do not fully reflect public values.
b) To offer a constructive, well-substantiated alternative approach to digital futures based on people, nature, and democracy.

We believe a collaborative process should guide decisions on the needs, scope, and nature of investments by bringing together scientists, civil society organizations, and stakeholders.
📝 OPEN LETTER 📝
Are you based in NL 🇳🇱 ? Do you also want government to thoughtfully shape our digital future, with care for people and nature? Please share and sign 🖊️ this letter addressed to parties forming the new Dutch government and outgoing administration.
📝 openletter.earth/zorgvuldig-a...
@robin.berjon.com I think you will enjoy this paper if you have not read it yet repository.ubn.ru.nl/bitstream/ha...
Finally, connectionism, and cognitive science generally, can rid ourselves of the hidden conflicts of interest inherent in taking industry funding to build and use such models (Forbes & Guest, 2025; Gerdes, 2022; Liesenfeld & Dingemanse, 2024; Liesenfeld et al., 2023). This is possible by requesting that we and our fellow practitioners disclose such conflicts during and at the point of publication. Relatedly, we need to acknowledge that such relationships to industry effectively bend our metatheoretical positions towards un-, or minimally a-, scientific reasoning that we are under obligation to keep in check if not at bay (also see Andrews et al., 2024; Bender et al., 2021; Birhane & Guest, 2021; Forbes & Guest, 2025; Gerdes, 2022; Guest, 2024; Spanton & Guest, 2022). Ultimately, it is up to us, theoreticians and modelers alike, to decide on the fate of our own fields and on the basis on which we create, understand, and reason about and over our models. Connectionism can perhaps be redeemed, but it requires us to: sacrifice superficial understanding of what role models play and what they constitute; halt the “anything goes” antiscientific dictum of industry funding; and become aware of what follows from our reasoning when we engage mechanistic and/or functional explanations; if done carelessly, we risk being incoherent or self-undermining. Snatching defeat from the jaws of victory seems to be connectionists’ speciality; however, the only difference may be that, this time round, the stakes are higher both for science specifically and society at large.
Two, the conflicts of interest for modern connectionism practitioners and supporters need to start to be disclosed and dismantled: "halt the “anything goes” antiscientific dictum of industry funding; and become aware of what follows from our reasoning"
For more see: doi.org/10.1037/rev0...
11/n
Very disappointing that ACM is using AI summaries that are highly likely to lead readers astray.
We can expect more research papers misrepresenting other papers. We can thank AI summaries for that.
Shouldn't it be an opt-in? I don't think it should be the responsibility of authors to opt-out of an AI tool that has been imposed on them and their work.
"One of the front lines in the Algorithm Wars is Ireland. Meta, TikTok, YouTube, X and Snapchat all have international bases here. Meta alone ran €85 billion worth of revenue through its Dublin HQ last year... Meta’s corporation tax payment of €367 million."
"The EU’s rather timid and belated attempts to limit the damage inflicted by the social media oligarchs are not the only reason for Trumpworld’s determination to take it down. Climate change (and the threat to America’s vast fossil fuel industries from the EU’s green agenda) has to be factored in."
"For opium, think algorithms designed to get children hooked on poisonous images delivered straight into their developing brains. And for China then, think the EU now."
From the excellent @fotoole.bsky.social
"Now, as then, the world’s largest superpower has fused its interests with those of pusher cartels – for the opium lords of 1839, think the tech bros of 2025."
The bottom of the article lists some of the errors. As @wendylyon.bsky.social, who identified these errors, emphasises, these are only a selection of the errors.
bsky.app/profile/wend...
The Minister states that “the chatbot was tested extensively... checking responses to 55 questions, out of which 53 were deemed successful”.
Wendy Lyon, an immigration & human rights solicitor at Abbey Law, found that many of these 53 “successful” responses were wrong, unhelpful & misleading.
The excellent @shamimmalekmian.bsky.social of @dublininquirer.com, whose earlier piece triggered our investigation, has also written about this.
www.dublininquirer.com/was-the-depa...
And the department is not immune to the #AIHype.
One of the documents received through FOI requests makes this claim:
“Copilot doesn’t just connect ChatGPT with Microsoft 365,” but it “turn[s] your words into the most powerful productivity tool on the planet.”
Our investigation into Irish Department of Justice use of chatbots. The department hides behind disclaimers while deploying misleading chatbots.
@iccl.bsky.social @abeba.bsky.social @johnnyryan.bsky.social
www.iccl.ie/news/irish-d...
what a great honour to receive the 25th Irish Tatler Women of the Year Award in the category of Innovation. the optimist in me thinks this marks a turn towards recognition of the importance of critical work and accountability in AI
www.businesspost.ie/life-luxury/...