The Flawed Inception of AI in Hiring: Exploring the Implications of Human Design

You may think that a computer would be coldly indifferent to the biases we humans are guilty of. But you would be wrong. As AI becomes more commonplace in human resources departments, we are finding that prejudice can live inside algorithms too.

Many companies have begun to use artificial intelligence (AI) in their hiring and recruitment processes. These systems rely on algorithms, programmed sets of rules, to select candidates for a company. An algorithm applies parameters to check that each candidate has the work experience, personality traits, and level of education the position calls for. Using AI to screen applicants has real benefits: it speeds up resume review and can surface higher-quality candidates.
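At its simplest, the parameter-checking described above is just a set of hard-coded rules. The toy sketch below illustrates the idea; the field names and thresholds are invented for illustration, not taken from any real screening product.

```python
# Toy sketch of rule-based resume screening.
# All field names and thresholds here are hypothetical.

def passes_screen(candidate, min_years=3, required_degree="bachelor"):
    """Return True if a candidate meets the hard-coded parameters."""
    return (candidate["years_experience"] >= min_years
            and candidate["degree"] == required_degree)

applicants = [
    {"name": "A", "years_experience": 5, "degree": "bachelor"},
    {"name": "B", "years_experience": 1, "degree": "bachelor"},
]

shortlist = [c["name"] for c in applicants if passes_screen(c)]
print(shortlist)  # ['A']
```

Real systems are far more complex, often learning their rules from data rather than having them written by hand, which is exactly where the problems discussed below begin.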

It is easy to assume that human biases like racism, sexism, and ageism would disappear once AI is in charge, but that does not necessarily seem to be the case. Many businesses working to improve workplace diversity may believe that turning hiring decisions over to AI guarantees objectivity. Instead of avoiding discriminatory biases, AI systems can sometimes learn the very prejudices they were meant to bypass.

In one study, researchers used a robotic arm guided by a neural network called CLIP to move blocks printed with images of people's faces. The system was given information to help it identify a person's sex and racial background, so the robot could follow commands such as "Put the Black man in the brown box." The robot was also given more ambiguous tasks, where it had to reach its own conclusions, and in those situations it showed racist and sexist tendencies. For example, when asked to select the "criminal block," the robot chose blocks showing Black men 10% more often. When asked to choose the "doctor block," it selected male blocks more often than female ones.

In 2017, Amazon discontinued its use of an automated resume reviewer after discovering that it preferred male candidates. Because the data used to train the program consisted mainly of men's resumes, it taught itself that female candidates were less desirable. Although Amazon attempted to fix the problem, it realized there was no guarantee the program would not learn to discriminate in some new way. Amazon decided the tool could still handle elementary tasks, such as removing duplicate applicants from its database, but it could not be trusted to select a successful candidate.

The way these tools learn can itself be a significant problem. They are often trained on data describing a company's top employees. If that data is not diverse enough, the tool can teach itself that whatever the majority of current employees look like is what the company wants to hire. For example, if the percentage of Black women in the workplace is relatively small, the tool may learn to score Black women lower.
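The mechanism is easy to see in a deliberately naive model. The sketch below scores a candidate by how often their demographic group appears among past "top employees"; the group labels and counts are invented for illustration. Real screening tools are far more sophisticated, but correlated features in a resume (names, schools, clubs) can leak group membership and produce the same effect.

```python
from collections import Counter

# Hypothetical historical data: demographic labels of past "top employees".
# The labels and the 90/10 split are invented for illustration.
top_employees = ["group_a"] * 90 + ["group_b"] * 10

def naive_score(candidate_group, history):
    """A deliberately naive scorer: rate a candidate by how often
    their demographic group appears among past top employees."""
    counts = Counter(history)
    return counts[candidate_group] / len(history)

print(naive_score("group_a", top_employees))  # 0.9
print(naive_score("group_b", top_employees))  # 0.1
```

Nothing in this model is explicitly prejudiced; the bias comes entirely from the unbalanced history it was given, which is the same dynamic reported in the Amazon case above.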

There are many other ways AI can discriminate against applicants. Companies that use technology to analyze speech could overlook someone with a speech impairment. Tools that analyze facial movements may score people from other cultures lower, because people from different backgrounds express and read facial expressions differently. Whereas Western cultures rely on the eyebrows and mouth to convey emotion, some cultures depend more on the eyes.

Since humans supply the data that AI technology draws on to reach its flawed conclusions, this technology may simply be holding a mirror up to a larger problem in our society: the long-held prejudices we have not evolved past. Until we can reliably remove the biases within AI tools, perhaps it is best not to use them.