FINDHR: an interdisciplinary project to prevent, detect, and mitigate discrimination in AI

The Max Planck Institute for Security and Privacy (MPI-SP) is part of the Horizon Europe project FINDHR, which aims to develop fair algorithms for personnel selection processes.

July 21, 2022

The EU-funded project FINDHR (Fairness and Intersectional Non-Discrimination in Human Recommendation) will start in November 2022. It is a major opportunity for EU research and development on fair recruitment: it aims at systems that excel at finding the best candidates for a vacancy, while pioneering a context-aware research approach that engages both the Human Resources (HR) industry and those who suffer discrimination. FINDHR will provide the technical, legal, and ethical tools required by an inclusive and diverse Europe that is moving towards remote and global work, where recruitment is increasingly done online and made more complex by a growing number of applicants from intersecting minority backgrounds.

Algorithmic hiring is on the rise and rapidly becoming necessary in some sectors. Corporate job postings that attracted about 120 applicants in 2010 now attract over 250.1 Artificial Intelligence technologies promise to handle hundreds or thousands of applicants at high speed.2 Moreover, their uptake in European HR teams and Public Employment Services (PES) is growing faster than the global average.3 European products are highly innovative and include tools that instantly select and rank candidates based on their resumes and application materials, or that screen candidates using online tests or games. Discriminatory biases have been documented across almost all applied domains of AI,4 and it is increasingly acknowledged that algorithmic hiring systems exhibit them too, reproducing and amplifying pre-existing discriminatory entry barriers to the labor market.5 The FINDHR project is designed to create practical, integrated solutions to tackle this issue.

Through a context-sensitive, interdisciplinary approach, FINDHR will develop new technologies to measure discrimination risks, to create fairness-aware rankings and interventions, and to provide multi-stakeholder actionable interpretability. It will also produce new technical guidance to perform impact assessment and algorithmic auditing, a protocol for equality monitoring, and a guide for fairness-aware AI software development. The project will also design and deliver specialized skills training for developers and auditors of AI systems.
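To make the idea of measuring discrimination risks in rankings more concrete, the sketch below computes a simple exposure-based disparity between two groups of candidates in a ranked list. It is purely illustrative and does not represent FINDHR's actual methods; the group labels, the logarithmic position discount, and the candidate IDs are assumptions made for this example.

    # Illustrative only: a simple exposure-based disparity check for a ranked
    # candidate list. Group labels, the discounting scheme, and the example
    # candidates below are assumptions for this sketch, not FINDHR deliverables.
    import math

    def exposure(positions):
        """Sum of logarithmic position discounts (rank 1 receives the most exposure)."""
        return sum(1.0 / math.log2(rank + 1) for rank in positions)

    def exposure_disparity(ranking, protected):
        """Compare average exposure of protected vs. non-protected candidates.

        ranking:   list of candidate IDs, best candidate first
        protected: set of candidate IDs belonging to the protected group
        Returns the ratio (protected avg exposure / non-protected avg exposure);
        values well below 1.0 suggest the protected group is pushed down the list.
        """
        prot_pos = [i + 1 for i, c in enumerate(ranking) if c in protected]
        rest_pos = [i + 1 for i, c in enumerate(ranking) if c not in protected]
        if not prot_pos or not rest_pos:
            return float("nan")  # disparity is undefined if one group is absent
        return (exposure(prot_pos) / len(prot_pos)) / (exposure(rest_pos) / len(rest_pos))

    if __name__ == "__main__":
        ranking = ["c1", "c2", "c3", "c4", "c5", "c6"]   # hypothetical shortlist
        protected = {"c4", "c5", "c6"}                   # hypothetical group membership
        print(f"exposure disparity: {exposure_disparity(ranking, protected):.2f}")

In this toy example the protected candidates all sit at the bottom of the list, so the ratio comes out well below 1, flagging a potential discrimination risk that would warrant closer inspection.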

The project is grounded in EU regulation and policy. As tackling discrimination risks in AI requires processing sensitive data, it will perform a targeted legal analysis of tensions between data protection regulation (including the GDPR) and anti-discrimination regulation in Europe. It will also engage with underrepresented groups through multiple mechanisms including consultation with experts and participatory action research. All outputs will be released as open access publications, open source software, open datasets, and open courseware.

The project coordinator, Carlos Castillo, said “Algorithms are increasingly intersecting with important aspects of our lives and shaping our social interactions and careers. Without the necessary understanding and oversight, there are critical risks that need to be better understood. I am excited to work with academic and industry researchers and representatives from advocacy groups in this very challenging, high-risk/high-reward research project.”

FINDHR is a three-year (2022–2025), €3.3 million Research and Innovation Action supported by the European Union’s Horizon Europe Programme (grant agreement No 101070212), under the call HORIZON-CL4-2021-HUMAN-01-24 (“Tackling gender, race and other biases in AI”).

FINDHR consortium

The consortium includes leaders in algorithmic fairness and explainability research (UPF, UVA, UNIPI, MPI-SP), pioneers in the auditing of digital services (AW, ETICAS), and two industry partners that are leaders in their respective markets (ADE, RAND), complemented by experts in technology regulation (RU) and cross-cultural digital ethics (EUR), as well as worker representatives (ETUC) and two NGOs dedicated to fighting discrimination against women (WIDE+) and vulnerable populations (PRAK).

  1. UNIVERSITAT POMPEU FABRA (UPF), Spain (Project Coordinator)
  2. UNIVERSITEIT VAN AMSTERDAM (UvA), Netherlands
  3. UNIVERSITA DI PISA (UNIPI), Italy
  4. MAX PLANCK INSTITUTE FOR SECURITY AND PRIVACY (MPI-SP), Germany
  5. RADBOUD UNIVERSITEIT (RU), Netherlands
  6. ERASMUS UNIVERSITEIT ROTTERDAM (EUR), Netherlands
  7. WOMEN IN DEVELOPMENT EUROPE+ (WIDE+), Belgium
  8. PRAKSIS ASSOCIATION (PRAK), Greece
  9. CONFEDERATION EUROPEENNE DES SYNDICATS ADF (ETUC), Belgium
  10. ETICAS RESEARCH AND CONSULTING SL (ETICAS), Spain
  11. RANDSTAD NEDERLAND BV (RAND), Netherlands
  12. ADEVINTA SPAIN, SLU (ADE), Spain
  13. ALGORITHMWATCH SWITZERLAND (AW), Switzerland


1 Fuller JB, Raman M, Sage-Gavin E, Hines K (2021) Hidden workers. Accenture and Harvard Business School
2 Heilweil R (2019) Job recruiters are using AI in hiring. Vox
3 High-Level Expert Group on the Impact of the Digital Transformation on EU Labour Markets (2019) Final report
4 Feuerriegel S, Dolata M, Schwabe G (2020) Fair AI. Business and Information Systems Engineering 62
5 Michael L, Waterhouse-Bradley B (2021) Artificial intelligence in HR. European Network Against Racism

