Balancing Fairness and Privacy in AI-Aided Recruitment

Extra efficiency is little comfort if the results are worse than before.

An analysis called Fairness, AI & recruitment, published last month by researchers at the eLaw Center for Law and Digital Technologies at Leiden University in the Netherlands, addresses an important problem: how can a company use artificial intelligence to aid in recruitment and hiring without crossing lines on privacy and social discrimination?

AI and recruitment aren’t a sudden pairing. For years, HR departments have increasingly used computers to aid in screening candidates for open jobs. At the simpler end, software pores through stacks of resumes, scanning for the desired buzzwords to determine who might have the right combination of background and knowledge.
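To make that kind of buzzword screening concrete, here is a minimal sketch in Python. The keyword lists, weights, and function names are hypothetical illustrations, not drawn from any particular product.

```python
# Minimal sketch of keyword-based resume screening.
# The keyword sets and weighting scheme below are illustrative assumptions.
from typing import List

REQUIRED_KEYWORDS = {"python", "sql", "machine learning"}  # assumed must-have skills
NICE_TO_HAVE = {"docker", "aws", "spark"}                  # assumed bonus skills

def score_resume(resume_text: str) -> float:
    """Count how many desired terms appear in the resume text."""
    text = resume_text.lower()
    required_hits = sum(1 for kw in REQUIRED_KEYWORDS if kw in text)
    bonus_hits = sum(1 for kw in NICE_TO_HAVE if kw in text)
    # Weight required skills more heavily than nice-to-haves.
    return required_hits * 2 + bonus_hits

def shortlist(resumes: List[str], top_n: int = 5) -> List[str]:
    """Return the top_n resumes ranked by keyword score."""
    return sorted(resumes, key=score_resume, reverse=True)[:top_n]
```

Anything this simple stands or falls on the keyword list: it can only surface what someone already decided to look for.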

That starting point likely counts as a lower form of AI. There have been more sophisticated attempts, like Amazon.com’s internal project to apply machine learning. The company had been working since 2014 to mechanize the search for talent and, in the process, eliminate gender bias in hiring. In 2018, it scrapped the project because the effort had become a disaster.

“Everyone wanted this holy grail,” a source told Reuters. “They literally wanted it to be an engine where I’m going to give you 100 resumes, it will spit out the top five, and we’ll hire those.”

The team assumed that applying machine learning to a 10-year collection of resumes and hiring decisions would let the software learn to replicate the decision process. That turned out to be the stumbling block, because the software succeeded: it reproduced the inclination to hire mostly men for technical positions, reportedly penalizing resumes that included the term “women’s” and downgrading graduates of all-women’s colleges. On top of that, the software would often recommend unqualified people for jobs because of data problems.
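A deliberately tiny, synthetic sketch shows the mechanism. This is not Amazon’s system; the resumes, labels, and model below are made up solely to illustrate how training on biased historical decisions reproduces the bias.

```python
# Toy illustration (synthetic data, not Amazon's system): a model trained on
# biased historical hiring decisions learns to penalize terms correlated with
# rejected candidates, regardless of merit.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# "Historical" resumes and past decisions (1 = hired, 0 = rejected).
# The bias is baked into the labels: resumes mentioning "women's" were rejected.
resumes = [
    "software engineer python java",           # hired
    "backend developer java sql",              # hired
    "women's chess club captain python java",  # rejected
    "women's coding society lead sql",         # rejected
    "data engineer python spark",              # hired
    "women's hackathon winner python spark",   # rejected
]
decisions = [1, 1, 0, 0, 1, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, decisions)

# The learned weight for the token "women" comes out negative: the model has
# faithfully reproduced the historical pattern, not any notion of ability.
vocab = vectorizer.vocabulary_
print("weight for 'women':", model.coef_[0][vocab["women"]])
```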

After years of development and expense, executives gave up and closed the project.

That’s a clear example of a social bias that can creep into hiring even as companies want technology to help improve the efficiency of “identifying, attracting, screening, evaluating, interviewing, and managing job applicants.” However, there are limitations to what software can do. In Amazon’s case, the software faithfully learned the decisions humans had made before.

This is a central problem in at least some types of AI. If you base actions on history, you get exactly what people decided to do in the past. Vaguely defined goals for the technology can be another source of frustration.

If you want to achieve an abstract concept like fairness in hiring, how will you define being fair? “In this respect, we claim that what qualifies as fair in AI applications in the hiring process requires more precise delineation to ensure legal certainty concerning the roles and responsibilities of the ecosystem surrounding the creation of these tools and their further application in these processes, and the guarantee of the user rights in an increasingly automated workplace,” the researchers said. Job applicants tend to want a procedural definition of fairness, while HR practitioners look more to filling a vacancy than to handling applications in any particular way.
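One way to see why “fair” needs precise delineation is to pick a single statistical definition and compute it. The sketch below uses demographic parity, comparing selection rates across groups; it is only one of several competing definitions, and the group labels and numbers are illustrative, not from the study.

```python
# Comparing selection rates across groups (demographic parity / disparate impact).
# This is one possible operationalization of "fair"; the data below are made up.
from collections import defaultdict

def selection_rates(groups, selected):
    """Fraction of applicants selected within each group."""
    totals, hits = defaultdict(int), defaultdict(int)
    for g, s in zip(groups, selected):
        totals[g] += 1
        hits[g] += int(s)
    return {g: hits[g] / totals[g] for g in totals}

def disparate_impact_ratio(groups, selected):
    """Ratio of the lowest group selection rate to the highest."""
    rates = selection_rates(groups, selected)
    return min(rates.values()) / max(rates.values())

# Illustrative screening outcome: 1 = advanced to interview, 0 = filtered out.
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
selected = [1,   1,   1,   0,   1,   0,   0,   0]

print(selection_rates(groups, selected))         # {'A': 0.75, 'B': 0.25}
print(disparate_impact_ratio(groups, selected))  # ~0.33, far below the commonly cited 0.8 benchmark
```

Even this simple measure would satisfy some stakeholders and not others, which is exactly the researchers’ point: without an agreed definition, “fair” means different things to the applicant, the employer, and the tool vendor.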

The researchers also noted that the process can retain and use data in ways that are incompatible with data protection laws.