Perceived fairness and justice in job recruiting and hiring are influenced by several factors, including the consistency of the decision-making process across people and time, timely and informative feedback, the propriety of interview questions, and the extent to which pre-employment tests appear to relate to job requirements. Recruiting and hiring decisions shaped by these factors are increasingly made with the help of artificial intelligence (AI). In this project, a sociotechnical frame is applied to explore perceptions of the fairness and justice of AI-supported talent acquisition algorithms. The investigator will elicit and analyze the perceptions of human resources personnel, African American job seekers, and AI software designers. The outcomes will inform the design of bias recognition and mitigation procedures and technologies for both the humans involved and the algorithms being used.
The intellectual merit of this exploratory study is the development of qualitative instruments and metrics for measuring perceptions of algorithmic fairness and justice. The research extends a theory of procedural rules for the perceived fairness of selection systems through a three-pronged approach comprising job seekers who are under-represented in the IT industry, human resource professionals who manage the talent acquisition process, and IT professionals who design AI software with fairness as a core value in product design and development. The study examines both perceptions elicited through scenarios and the actual experiences of job seekers affected by these decisions. This research contributes to the assessment of algorithmic fairness at a time when there is little insight into how historically marginalized populations might perceive, or be adversely affected by, AI systems.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.