In a rapidly evolving world, the field of recruitment is facing a pivotal question: what is the difference between human-conducted recruitment and its automated cousin, recruitment by AI?
72% of recruiters use Microsoft’s LinkedIn Talent Solutions, and yet its Vice President, John Jersin, stated, “I certainly would not trust any AI system today to make a hiring decision on its own.”
In theory, the automation of talent acquisition can save time, cut costs, and broaden the hiring pool. Unsuccessful implementation, however, often prevents this from being the case.
The Unintended Bias in Amazon's Recruitment Algorithm Experiment
Careful implementation calls for a bias-free approach, yet perception is inevitably shaped by our own experiences, and experience breeds bias. Amazon is no exception to the rule. The world’s largest online retailer tested an algorithm to mechanize its screening process. The concept was simple: programmers fed the recruitment algorithm current employees’ resumes so it could determine their common characteristics, then had it score new resumes against those characteristics. In an unexpected twist, the algorithm deemed being a man a factor of success. With only 24% of Amazon’s senior leaders being women, the algorithm’s reasoning is rather unsurprising. Understandably, the project was put on hold after such blatant discrimination surfaced: the experiment had failed the bias test.
The incident explains John Jersin’s comment quite clearly: no one has yet implemented recruitment AI without bias. The problem likely lies in the human tie to AI, since humans produce the data the algorithm learns from. If being a man was a determining factor of success, that was not the algorithm’s invention; it was a reality of the historical data. The algorithm then did exactly what it was told: find the common factors in that data and apply them to new resumes. It was the biased human data fed into the system, not the system’s execution of its task, that doomed the experiment.
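To make the failure mode concrete, here is a minimal sketch, with entirely made-up resumes and hiring labels, of how a text classifier trained on skewed historical outcomes absorbs gendered proxies. This illustrates the mechanism only; it is not Amazon’s actual system.

```python
# A minimal, hypothetical sketch of the failure mode described above.
# We train a text classifier on past hiring outcomes, then inspect
# which words it learned to reward or penalize.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical historical resumes; label 1 = was hired, 0 = was not.
# Because past hiring skewed male, gendered words correlate with the label.
resumes = [
    "captain of men's chess club, software engineering intern",
    "software engineering intern, men's soccer team",
    "women's chess club captain, software engineering intern",
    "software engineering intern, women's coding society",
]
hired = [1, 1, 0, 0]  # the biased historical outcome

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weights: gendered tokens pick up signal even
# though they say nothing about competence.
for word, weight in sorted(
    zip(vectorizer.get_feature_names_out(), model.coef_[0]),
    key=lambda pair: pair[1],
):
    print(f"{word:12s} {weight:+.3f}")
```

Sorting the learned weights makes the problem visible: a token like “women” ends up penalized purely because of who was hired in the past, exactly the pattern Amazon’s tool exhibited.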
Can AI Thrive Without Human Influence?
Could it be, therefore, that AI works better without human influence? A 2018 study explored this idea with bail decisions. Researchers trained an algorithm on historical criminal outcomes, rather than on judges’ jailing decisions, to estimate how likely a defendant was to re-offend. This led to a 25% reduction in crime rates and in discrimination among jailed inmates, suggesting that objective outcomes can outperform human judgment.
Had Amazon found a way to give its algorithm unbiased data, the AI might have made recruiting fairer than ever. Indeed, in the bail study it was precisely the reliance on objective outcomes, rather than human decisions, that reduced discrimination.
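The core idea is the choice of training labels. A minimal sketch, again with hypothetical data: the same features can be paired either with human decisions or with observed outcomes, and only the first teaches the model to imitate human bias.

```python
# A minimal sketch of the label-choice idea from the bail study,
# with entirely made-up data. Training on observed outcomes
# (did the person re-offend?) rather than on human decisions
# (did the judge jail them?) keeps the model from imitating judges.
from sklearn.linear_model import LogisticRegression

# Hypothetical features per defendant: [prior_offenses, age]
X = [[0, 19], [5, 42], [1, 23], [6, 35], [0, 30], [4, 27]]

# Two possible sets of training labels for the same people:
judge_jailed = [1, 1, 1, 1, 0, 1]  # human decisions, bias and all
reoffended   = [0, 1, 0, 1, 0, 1]  # observed outcomes

# Model A learns to imitate judges; model B predicts the actual outcome.
model_a = LogisticRegression().fit(X, judge_jailed)
model_b = LogisticRegression().fit(X, reoffended)

new_defendant = [[1, 21]]
print("P(jail) if we imitate judges:   ", model_a.predict_proba(new_defendant)[0][1])
print("P(re-offend) from outcomes only:", model_b.predict_proba(new_defendant)[0][1])
```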
For tech companies, which are notorious for their low percentage of female employees, recruitment AI might be the path to eradicating hiring discrimination. But in order to achieve this, we need objective facts.
Applying this to recruitment, each company would have to survey what makes an employee competent by its own standards and feed this hopefully unbiased data to a new screening algorithm. Amazon has been working on a successor to its failed 2018 tool but has yet to find a solution. Indeed, it seems that finding unbiased data that truly reflects a company’s standards for excellence is what stops many from implementing such systems.
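Until that data exists, companies can at least audit a screening model’s output before trusting it. Here is a minimal sketch of one common check, the “four-fifths rule” from US adverse-impact guidance: compare each group’s selection rate to the highest group’s rate. The numbers below are illustrative only.

```python
# A minimal sketch of a four-fifths-rule audit on hypothetical
# screening results. A group whose selection rate falls below 80%
# of the best-performing group's rate is flagged for review.
from collections import defaultdict

# Hypothetical screening results: (group, passed_screen)
results = [
    ("men", True), ("men", True), ("men", False), ("men", True),
    ("women", True), ("women", False), ("women", False), ("women", False),
]

passed = defaultdict(int)
total = defaultdict(int)
for group, ok in results:
    total[group] += 1
    passed[group] += ok

rates = {g: passed[g] / total[g] for g in total}
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best
    flag = "OK" if ratio >= 0.8 else "POTENTIAL ADVERSE IMPACT"
    print(f"{group:6s} selection rate {rate:.2f} (ratio {ratio:.2f}) -> {flag}")
```

A check like this does not fix biased training data, but it catches a discriminatory screener before it quietly filters a hiring pool.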
To Conclude
Despite all of this, it is essential to remember that AI’s power emerges when it complements, rather than replaces, human judgment. In the end, the potential of AI in recruitment is vast, offering fairness, efficiency, and inclusivity. However, its successful implementation depends on our ability to extract and deliver objective facts. With careful consideration, ongoing refinement, and a commitment to ethical practices, AI can indeed be a valuable ally in building diverse and dynamic teams for the future.