Designing “Contestability” in Automated HR Decisions: Appeal Mechanisms, Evidence, and Outcomes
Abstract
Automated decision systems are increasingly used in Human Resource Management (HRM) to screen resumes, rank candidates, inform promotion decisions, and predict employee attrition. While these tools can improve efficiency, they often limit individuals' ability to question, correct, or challenge decisions, raising concerns about fairness, transparency, and legal accountability. This study examines contestability in automated HR decisions by developing and assessing appeal mechanisms that support substantive dispute, evidence submission, and review. Contestability is defined as a socio-technical capability comprising three dimensions: informational access to decision factors, evidentiary access for submitting corrections or contextual information, and revision authority that delineates who reviews appeals and the standards under which decisions may be revised. Phase 1 uses a mixed-methods design-science approach, drawing on interviews with applicants, employees, recruiters, and HR compliance officers to identify contestation needs, admissible types of evidence, and operational constraints. Phase 2 evaluates alternative appeal designs through controlled experiments measuring perceived procedural fairness, trust, privacy concerns, and acceptance of outcomes. Phase 3 pilots the most promising designs in simulated recruitment and promotion scenarios, assessing error-correction effectiveness, time to resolution, and changes in outcomes across groups.
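To make the three dimensions concrete, the sketch below is an illustrative data model only; it is not drawn from the study, and all class, field, and role names are hypothetical. It pairs disclosed decision factors (informational access), submitted evidence (evidentiary access), and a designated reviewer with a recorded outcome and resolution time (revision authority and the Phase 3 metrics).

from dataclasses import dataclass
from enum import Enum
from typing import List, Optional


class ReviewOutcome(Enum):
    UPHELD = "upheld"        # original automated decision stands
    REVISED = "revised"      # decision changed after human review
    ESCALATED = "escalated"  # referred to a higher review authority


@dataclass
class DecisionFactor:
    """Informational access: a factor the system used, disclosed to the decision subject."""
    name: str                # e.g. "years_of_experience" (hypothetical)
    value: str
    weight: Optional[float] = None


@dataclass
class EvidenceItem:
    """Evidentiary access: a correction or contextual document submitted on appeal."""
    description: str
    document_uri: Optional[str] = None


@dataclass
class Appeal:
    """One appeal against an automated HR decision, spanning the three dimensions."""
    decision_id: str
    disclosed_factors: List[DecisionFactor]   # informational access
    submitted_evidence: List[EvidenceItem]    # evidentiary access
    reviewer_role: str                        # revision authority, e.g. "HR compliance officer"
    outcome: Optional[ReviewOutcome] = None
    resolution_days: Optional[int] = None     # supports the resolution-time metric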
License
Copyright (c) 2025 The Author(s)

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
This license enables reusers to distribute, remix, adapt, and build upon the material in any medium or format for noncommercial purposes only, and only so long as attribution is given to the creator.