CMR INSIGHTS

 

AI Cannot Respectfully Evaluate Employees

by Philippa Penfold, Jinseok S. Chun, and David De Cremer

Only by promoting interpersonal respect and ethical standards can AI be deployed in recruitment strategies.

The use of artificial intelligence is rapidly expanding across organizations, transforming operations, decision-making, and strategy. As businesses seek to enhance efficiency, innovation, and competitive advantage, the use of AI is no longer optional; it is imperative. Indeed, a recent Deloitte survey showed that 94% of business leaders agree that AI is critical to their success over the next five years.1 One area where AI adoption is skyrocketing is recruitment. Most HR professionals (68%) today believe that AI will positively influence recruitment processes,2 and the use of AI in recruitment nearly tripled from 2023 (4.9%) to 2024 (14.7%).3

Related CMR Articles

Kolbjørnsrud, Vegard. “Designing the Intelligent Organization: Six Principles for Human-AI Collaboration,” California Management Review, 66/2 (2024): 44-64.

Huang, Ming-Hui, Roland T. Rust, and Vojislav Maksimovic. “The Feeling Economy: Managing in the Next Generation of Artificial Intelligence (AI),” California Management Review, 61/4 (2019).


AI in recruitment is used to evaluate candidates effectively and make sound decisions about them. However, despite the optimism among HR professionals, concerns exist that using AI in these settings can backfire and produce negative consequences for firms.4 One challenge associated with using AI in recruitment is the potential for bias: AI systems can inadvertently perpetuate biases present in the data they are trained on, leading to unfair discrimination against certain candidates based on factors such as gender, race, or age.5

In line with this challenge, a Workforce Monitor online survey by the American Staffing Association revealed that job seekers in the US are skeptical of AI in hiring, with nearly half (49%) of respondents believing AI recruiting tools are more biased than human recruiters.6 Recent research, however, reveals that although bias is one of the risks embedded in AI-powered recruitment, another factor may play an even bigger role: respect.7 Indeed, people feel less respected when evaluated by AI than when they are evaluated by humans.

Respect in the workplace is instrumental to business. Research has shown that perceived respect (i.e., the feeling of being valued for who one is) leads to stronger organizational commitment, better teamwork, higher creativity, and greater psychological health among employees.8 Respect also fundamentally shapes the overall quality of social interactions in the workplace, which in turn contributes to firm performance. According to Chun and colleagues, recruitment processes driven by AI damage this essential element, even more than they damage perceptions of unbiased evaluation.9

The Research

Drawing attention to the difference between human and algorithmic evaluation of employees, the researchers present evidence that perceptions of respectful treatment suffer under algorithm-driven evaluations and decisions.

The researchers conducted four experimental studies (with a total sample of 995 adult participants): two in the job application process (where participants were evaluated either by AI or by human evaluators) and two in the performance review process (conducted by AI or by human managers). Findings were consistent across all studies. Results showed that perceived respect was lower for algorithmic evaluations than for human evaluations, and this pattern (a) remained significant when controlling for perceived biases (but not vice versa), (b) was significant even when the effect on perceived biases was not, and (c) was larger than the effect on perceived biases. This is important because HR professionals may assume that their efforts to eliminate bias from AI evaluations will also improve perceived respect. The results of this research suggest that, even in the absence of bias, employees still perceive an algorithmic evaluation as lacking respect and dignity. Interestingly, the effect was not limited to employees’ individual sentiments about their own evaluations. A lack of perceived respect also arose when employees observed others being subjected to algorithmic evaluations. These patterns capture how deeply the issues of disrespect and indignity are rooted in algorithmic evaluations.

These results are explained by people’s belief that their true characteristics can be fully understood only by a human, not by an algorithm, and that their unique personal attributes are suppressed during algorithmic evaluation. When people are being evaluated, they want to be evaluated as a whole person, not merely as the digitized bits of information processed by algorithms. At least for now, they believe that only another human can account for their unique personal attributes when evaluating their capability.

Practical Implications

The following actions could increase the perceived respect of employees and candidates subjected to algorithmic evaluations. They can complement policies that improve fairness in the management of AI (see De Cremer & De Schutter10 for more).

1. Include human evaluators alongside algorithmic evaluations
When designing assessment processes for employees or candidates, supplement the algorithmic component with human evaluations so that those being evaluated can feel that their unique personal attributes are understood as part of the evaluation process. For example, organizations can offer interactions with human managers in the initial and/or final stages of the evaluation process.

2. Be clear about the weight algorithmic outputs carry in decision making
If AI is being used to evaluate employees or candidates, ensure clear and honest communication about which parts of the process rely on algorithmic evaluations and which rely on human evaluations.

3. Ensure managers do not overly rely on AI systems for evaluations
Beyond good process design, use effective communication and training to ensure managers understand the risks involved in algorithmic evaluations, particularly concerning respect and dignity.

4. Participate in algorithmic audits and question vendors
Contribute to the audit process of AI evaluation tools and engage with vendors to help them develop AI models that properly account for people’s unique personal attributes.

5. Establish an HR AI ethics standard
Establish a standard against which future HR AI evaluation tools can be assessed, to increase the chance of choosing AI-powered evaluation systems that do the least damage to employees’ sense of respect.

6. Ensure the HR team understands its own impact on perceived respect and dignity
Empower members of the HR team by helping them appreciate their own role in cultivating respectful and dignifying evaluations; this facilitates their active engagement during evaluation processes and strengthens their sense of impact.

References

  1. Deloitte (2022). Fueling the AI transformation: Four key actions powering widespread value from AI, right now. Retrieved from: https://www2.deloitte.com/content/dam/Deloitte/us/Documents/deloitte-analytics/us-ai-institute-state-of-ai-fifth-edition.pdf

  2. Pantelakis, A. (2024). Top AI in hiring statistics in 2024. Retrieved from: https://resources.workable.com/stories-and-insights/top-ai-in-hiring-statistics

  3. Ahuja, A. (2024). AI adoption in recruitment soars, report says. Retrieved from: https://www.staffingindustry.com/editorial/it-staffing-report/ai-adoption-in-recruitment-soars-report-says

  4. Lavanchy, M., Reichert, P., Narayanan, J. & Savani, K. Applicants’ fairness perceptions of algorithm-driven hiring procedures. J. Bus. Ethics. 188, 125–150 (2023).

  5. Newman, D. T., Fast, N. J. & Harmon, D. J. When eliminating bias isn’t fair: Algorithmic reductionism and procedural justice in human resource decisions. Organ. Behav. Hum. Decis. Process. 160, 149–167 (2020).

  6. Gordon, C. (2023). AI recruiting tools are rich with data bias and CHROs must wake up. Forbes. Retrieved from: https://www.forbes.com/sites/cindygordon/2023/12/31/ai-recruiting-tools-are-rich-with-data-bias-and-chros-must-wake-up/

  7. Chun, J. S., De Cremer, D., Oh, E.-J., & Kim, Y. What algorithmic evaluation fails to deliver: Respectful treatment and individualized consideration. Sci. Rep. 14, 25996 (2024).

  8. Ambrose, M. L., Schminke, M. & Mayer, D. M. Trickle-down effects of supervisor perceptions of interactional justice: A moderated mediation approach. J. Appl. Psychol. 98, 678–689 (2013).

  9. Chun, J. S., De Cremer, D., Oh, E.-J., & Kim, Y. What algorithmic evaluation fails to deliver: Respectful treatment and individualized consideration. Sci. Rep. 14, 25996 (2024).

  10. De Cremer, D. & De Schutter, L. How to use algorithmic decision-making to promote inclusiveness in organizations. AI Ethics 1, 563–567 (2021).



Philippa Penfold
Following 20 years in corporate HR roles focusing on technology, Philippa became heavily involved in the AI ethics debate in 2019. Today, she works in Responsible AI and focuses on the optimal integration of humans and AI in the workplace. Philippa holds an EMBA from Kellogg School of Management-HKUST.
Jinseok S. Chun
Jinseok Chun, associate professor at SKK Business School, South Korea, explores workplace evaluation processes and their practical implications. He earned his PhD from Columbia University and completed a postdoctoral fellowship at Duke University.
David De Cremer
David De Cremer is the Dean and Professor of Management and Technology at D’Amore-McKim School of Business, Northeastern University, the founder of the Center on AI Technology for Humankind in Singapore, and author of “The AI-Savvy Leader: Nine Ways to Take Back Control and Make AI Work” (Harvard Business Review Press).
