Projects, resources, and groups that audit public, democratic, or civic AI to ensure equitable and ethical outputs.
Recommended resources:
The ADL AI Index evaluates leading large language models (LLMs) on their ability to detect and counter antisemitic and extremist tropes and narratives.
The AI Incident Database (AIID) tracks instances of harms and near-harms caused by deployed AI systems.
EDIA is a project of Fundación Vía Libre whose goal is to involve more people in the evaluation of artificial intelligence technologies such as language models (for example, ChatG...
This paper argues that automated decision-making in UK public administration lacks adequate scrutiny, and proposes regulatory safeguards through mandatory pre-deployment impact assessments and algorit...
A Lighthouse Reports investigative series reveals discrimination in welfare surveillance algorithms.
This study examines 51 Freedom of Information requests to reveal how the UK’s Department for Work and Pensions uses opaque data-driven fraud detection systems in welfare, highlighting the limited tran...
AI Evaluation Made Easy
Worker-led audits in the platform economy
An open-source Python library designed to assist developers in calculating fairness metrics and assessing bias in machine learning models.
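As a minimal sketch of the kind of calculation such fairness libraries automate, the snippet below computes a demographic parity difference from model predictions in plain Python. The function names and toy data are illustrative assumptions, not the API of any particular library.

```python
# Illustrative fairness-metric sketch; names and data are hypothetical.

def selection_rate(preds, groups, group):
    """Fraction of positive (1) predictions among members of `group`."""
    members = [p for p, g in zip(preds, groups) if g == group]
    return sum(members) / len(members)

def demographic_parity_difference(preds, groups):
    """Largest gap in selection rates across groups (0 means perfect parity)."""
    rates = {g: selection_rate(preds, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values())

# Toy example: binary hiring predictions for two demographic groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(demographic_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5
```

A value near 0 indicates that both groups are selected at similar rates; auditing tools typically report several such metrics (e.g. equalized odds, disparate impact) side by side.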
Ranking Digital Rights will soon release the Generative AI Accountability Scorecard, evaluating major consumer-facing generative AI services’ respect for the human rights to privacy, non-discriminatio...
Humane Intelligence is a tech nonprofit building a community of practice around algorithmic evaluations.
What is the impact of social media on the representation and voice of migrants and refugees in Europe? What are the challenges and opportunities to avoid their invisibilization and promote a fair repr...
Since 2016, the Laura Robot has analyzed more than 8.6 million visits in 40 clinical and hospital centers in several Brazilian states.
In Spain, the level of risk faced by a victim of gender violence is determined by an algorithm. That algorithm is part of VioGén, a system which, with more than 3 million risk evaluations,...
The success of Koa Health and Eticas serves as a beacon for the broader industry, emphasizing the importance of ethical considerations when developing mental health solutions.
The report explores Uber, Cabify and Bolt’s compliance with competition, labor and consumer protection laws in Spain.
An In-Depth Audit of Biases in Facial Recognition Technology Impacting Individuals with Disabilities
Eticas audited the algorithmic system that predicts homelessness risk in Allegheny County, Pennsylvania, USA.
In collaboration with Universidad Pompeu Fabra (Barcelona), Eticas audited the natural language processing (NLP) system used by the Social Services area of the Barcelona City Counci...
ACLU's crowd-sourced public tracker of bias audits of automated employment decision tools (AEDTs) released by employers related to NYC's Local Law 144.
Community-led AI Audits: Making AI Systems Fair & Accountable
The OASI Register compiles information about algorithms with the aim of increasing public awareness and providing the necessary knowledge for an informed public conversation, and of making possible fo...
ForHumanity examines and analyzes the downside risks associated with the ubiquitous advance of AI and automation, engaging in risk mitigation to help ensure optimal outcomes.
Eticas Foundation's adversarial audit of RisCanvi, an AI-powered criminal justice tool used in Catalonia since 2009, finds it "does not meet the required standards of reliability and fai...
The MLCommons AI Safety working group is composed of a global consortium of industry leaders, practitioners, researchers, and civil society experts committed to building a harmonized approach to AI sa...
The IAAA is a community of practice that aims to advance and organise the algorithmic auditing profession, promote AI auditing standards, certify best practices and contribute to the emergence of Resp...
AI Forensics is a European non-profit that investigates influential and opaque algorithms. We hold major technology platforms accountable by conducting independent and high-profile technical investiga...
We now bring this decade’s worth of insights and our intellectual capital to scale with our automated audits, and proprietary demographic and geographic database used to ensure the validity (and compl...
Philadelphia should think twice about its risk-assessment algorithm.
Eticas teams up with organizations to identify black box algorithmic vulnerabilities and retrains AI-powered technology with better source data and content.