5/5 The stakes: AI influences fundamental life outcomes. Without robust, transparent audits, we risk perpetuating harms and undermining trust.
For governance folks: What's your biggest auditing challenge? Technical gaps, regulatory clarity, or stakeholder engagement?
doi.org/10.1177/205395
11.03.2026 19:30
4/5 Auditors face regulatory ambiguity, data governance gaps, and interdisciplinary friction between tech, legal, and leadership teams.
Yet they're ecosystem builders, translating vague laws into actionable frameworks and pushing organizations toward better AI governance.
11.03.2026 19:30
3/5 Key finding: Most audits focus narrowly on technical metrics.
Broader impacts on vulnerable communities? Often sidelined.
Public reporting of audit results? Almost nonexistent.
Transparency and stakeholder engagement remain major gaps.
11.03.2026 19:30
2/5 What's driving AI auditing growth?
🔹 Regulation (EU AI Act, NIST frameworks)
🔹 Reputation management (avoiding biased AI headlines)
🔹 Competitive strategy (trustworthy AI advantage)
The ecosystem spans internal teams, Big Four firms, specialized startups. @purduepolsci.bsky.social
11.03.2026 19:30
1/5 We interviewed 34 AI ethics auditors across 23 organizations in 7 countries. Published in Big Data & Society. @bigdatasociety.bsky.social
The field borrows from financial auditing: planning, validating, analyzing risks, reporting. But it's still figuring out what success looks like.
11.03.2026 19:30
AI systems decide who gets hired, who gets loans, who receives healthcare. But who's auditing the AI? 🤔
Our new study explores the emerging field of AI ethics auditing: the people and processes trying to make AI accountable. @grailcenter.bsky.social
doi.org/10.1177/205... 🧵
11.03.2026 19:30
Published in Hastings Center Report. @purduepolsci.bsky.social @GRAILcenter.bsky.social
onlinelibrary.wiley.com/doi/abs/10....
With Daniel Susser, Sara Gerke, Laura Y. Cabrera, I. Glenn Cohen, & team
03.03.2026 14:58
Synthetic data should complement real-world data, not replace it. The choice ahead: Will we use this technology to bridge healthcare gaps or deepen inequities?
For governance teams & researchers working on AI in healthcare: curious what you're seeing?
#SyntheticData #AIinHealthcare #Bioethics
03.03.2026 14:58
We argue synthetic data isn't a magic fix; it's a powerful tool that demands robust safeguards 🛡️
Key needs:
• Standards for accuracy & reliability
• Privacy protections
• Transparent policies
• Continued investment in diverse, real-world datasets
03.03.2026 14:58
But the risks are real:
• Accuracy issues for rare disease algorithms
• Potential privacy leaks despite synthetic nature
• Bias amplification from flawed source data
• Regulatory gaps exploiting "non-identifiable" status
• Justice concerns about sidelining real-world diversity
03.03.2026 14:58
What synthetic data promises:
• Privacy protection through artificial datasets
• Inclusive modeling of rare diseases & underserved groups
• Enhanced AI training capabilities
• Scalable research opportunities
The potential is substantial ⚡
03.03.2026 14:58
Enter synthetic data: AI-generated datasets that mimic real-world patterns without containing actual patient information.
Sounds perfect: private, inclusive, scalable. But our analysis in Hastings Center Report reveals significant ethical complexities 🚨
03.03.2026 14:58
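The idea in the post above, "mimic real-world patterns without reusing actual records", can be illustrated with a toy sketch. This is not the paper's method: real synthetic-data generators (GANs, diffusion models, etc.) are far more sophisticated, and the dataset and numbers here are entirely made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real" dataset: two hypothetical numeric patient features
# (e.g. systolic BP and heart rate; values are invented).
real = rng.normal(loc=[120.0, 80.0], scale=[15.0, 10.0], size=(500, 2))

# Fit simple per-column statistics, then sample entirely fresh rows:
# the synthetic table preserves the marginal pattern, but no real row is reused.
mu = real.mean(axis=0)
sigma = real.std(axis=0, ddof=1)
synthetic = rng.normal(mu, sigma, size=(500, 2))
```

Even this toy version hints at the risks the thread raises: if the "real" data is biased or sparse for rare conditions, the synthetic copy inherits and can amplify those flaws.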
The challenge: Healthcare research is data-rich but insight-poor.
Privacy laws, demographic gaps, and underrepresentation of rare conditions prevent researchers from fully utilizing available EHRs, public datasets, and lab studies.
03.03.2026 14:58
Synthetic data promises to revolutionize healthcare research: solving privacy issues, modeling rare diseases, and expanding equity. But it's also an ethical minefield that demands careful navigation 🧵
onlinelibrary.wiley.com/doi/abs/10....
03.03.2026 14:58
#8 For policy practitioners, governance teams, and org leaders: curious what you're seeing in your hiring? Paper below:
@purduepolsci.bsky.social @GRAILcenter.bsky.social
doi.org/10.1109/TTS...
24.02.2026 17:02
#7 AI ethics and governance aren't "nice-to-haves"; they're becoming non-negotiable pillars of responsible AI development. As industries adopt AI at scale, these roles will define how society benefits from this technology.
24.02.2026 17:02
#6 What's driving alignment? New AI regulations demand compliance. Employers recognize public trust is critical for AI adoption. Universities race to create relevant programs. More than 100K professionals are needed annually.
24.02.2026 17:02
#5 Finance and Information industries dominate demand, with AI ethics/governance roles growing fastest there. Highly regulated sectors can't afford ethical lapses as AI adoption scales 🏦
24.02.2026 17:02
#4 Demand is surging 📈 AI ethics roles grew from 35K in 2018 to 109K in 2022. Governance roles hit 96K in 2022. Even as overall AI hiring dipped in 2023, these roles remained stable. Results suggest sustained market need.
24.02.2026 17:02
#3 Key finding: AI ethics ≠ AI governance. Employers seek distinct skills:
🔹 Ethics: Data privacy, bias mitigation, critical thinking
🔹 Governance: Risk management, policy development, leadership
Both require interdisciplinary knowledge
24.02.2026 17:02
#2 Our study analyzed 4.4M+ AI-related job postings to uncover trends in demand for AI ethics (fairness, transparency) and AI governance (regulatory compliance, risk management) skills. Published in IEEE Transactions on Technology and Society
24.02.2026 17:02
#1 We're seeing an "AI skills gap": a shortage of professionals equipped with both technical expertise AND the ability to handle ethical dilemmas and regulatory challenges. AI is transforming industries, but with great power comes great responsibility.
24.02.2026 17:02
The AI job market is evolving beyond coding. Employers now demand AI ethics and governance skills at unprecedented rates. Our analysis of 4M+ job postings from 2018-2023 reveals what's driving this shift 🧵
doi.org/10.1109/TTS...
24.02.2026 17:02
6/7 🔬 Next steps: Validation beyond Western university samples, workplace applications, and cross-cultural AI literacy research.
With Arne Bewersdorff and Marie Hornberger. Thanks to Google Research for funding a portion of this work
@purduepolsci.bsky.social @GRAILcenter.bsky.social
17.02.2026 15:04
5/7 Why this matters for AI governance:
Scalable assessment tools are essential for evaluating education programs, informing policy decisions, and ensuring citizens can navigate an AI-driven world.
AILIT-S makes systematic evaluation feasible.
17.02.2026 15:04
4/7 🎯 Best use cases:
✔️ Program evaluation
✔️ Group comparisons
✔️ Trend analysis
✔️ Large-scale research
❌ Avoid for individual diagnostics
The speed enables broader participation and better population-level insights.
17.02.2026 15:04
3/7 ✅ Results show AILIT-S delivers:
• ~5 minutes completion time (vs 12+ for the full version)
• 91% congruence with the comprehensive assessment
• Strong performance for group-level analysis
Trade-off: slightly lower individual reliability (α = 0.61 vs 0.74)
17.02.2026 15:04
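For readers unfamiliar with the reliability statistic quoted above: Cronbach's α measures a test's internal consistency from the item variances and the variance of respondents' total scores. A minimal sketch of the standard formula (not the paper's analysis code; the data below is a deliberately trivial illustration):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha: (k/(k-1)) * (1 - sum(item variances) / var(total score))."""
    scores = np.asarray(scores, dtype=float)      # rows = respondents, cols = items
    k = scores.shape[1]                           # number of items
    item_vars = scores.var(axis=0, ddof=1)        # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Perfectly correlated items yield alpha = 1.0 (toy data, not the study's).
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))  # → 1.0
```

Values around 0.6-0.7, like the short form's 0.61, are generally taken as adequate for group-level comparisons but weak for high-stakes individual decisions, which matches the use cases recommended in the thread.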
2/7 AILIT-S covers 5 core themes:
• What is AI?
• What can AI do?
• How does AI work?
• How do people perceive AI?
• How should AI be used?
Special emphasis on technical understanding, the foundation of true AI literacy.
17.02.2026 15:04
1/7 ⚡ The challenge: Existing AI literacy tests take 12+ minutes, making them impractical for large-scale assessment.
Our solution distills a robust 28-item instrument into 10 key questions, validated with 1,465 university students across the US, Germany, and the UK.
17.02.2026 15:04