How Humans + AI Overcome Hiring Bias [Webinar Recap]

February 27th, 2019
Jon-Mark Sabel
Artificial Intelligence,
Diversity & Inclusion,
Recruiting Teams

Bias in the hiring process has come under the microscope in recent years, prompting thousands of bias-reducing initiatives across the business world. Yet after decades of bias training, resume blinding, and other diversity initiatives, most organizations have barely moved the needle.

Today, we looked at the potential for artificial intelligence to assist humans in making less biased decisions in our webinar: How Humans + AI Overcome Hiring Bias. Can AI work alongside humans to create a more equitable hiring process? That’s what our Chief IO Psychologist, Dr. Nathan Mondragon, explored.

Missed the Webinar? Watch it here >

Is Human Bias Something to Worry About?

Back in 2003, researchers at the National Bureau of Economic Research sent out more than 5,000 resumes in response to open job listings. All listed equivalent skills, education, and experience. The only difference? The name.

They found that resumes with typically “white” names like Emily and Greg were 50% more likely to receive a callback than resumes with typically “black” names like Lakisha and Jamal.

Since then, unconscious bias training has skyrocketed.

After years of bias training, we would expect hiring bias to decrease. But when researchers conducted a meta-analysis of 492 studies on unconscious bias training, they found that it had no effect on actual behavior.

Resumes aren’t the only place where bias can seep into the hiring process. In-person interviews and certain types of assessments carry their own built-in biases. Even candidate attractiveness (which Nathan explored in more detail in the webinar) can influence hiring decisions.

In other words, bias can seep into practically every part of the typical hiring process. Where does that leave recruiting leaders, who are charged with finding the best talent, regardless of background?

Artificial Intelligence Enters the Picture

For the past several years, the field of AI has been making headline-grabbing strides. From delivering beer to detecting cancer, it’s a technology with the potential to be as disruptive as electricity.

While most publicity around AI has focused on the consumer side of things, large organizations are also leveraging AI to see immense improvements in productivity and efficiency. For example:

  • JPMC performs legal checks of commercial loan agreements with AI, a process which previously took 360,000 hours of legal review

  • UPS uses AI to predict and show drivers the most efficient route to take when making an average of 120 stops per day

  • The Associated Press automates earnings reporting, publishing 4400 AI-written stories per quarter

At its core, AI makes better predictions than any previous technology. For recruiting teams charged with predicting who will be the best talent, the implications are huge.

But despite these benefits, there’s another often-reported aspect of AI: the potential for bias. For example, a recidivism-prediction AI disproportionately flagged African Americans as likely to reoffend, even when they did not. The AI “baked in” the existing biases of parole boards.

How can we resolve this tension between greater predictive accuracy and the potential for bias?

AI, Without Bias: The Auditing Imperative 

To remove bias from AI in the recruiting space, it’s critical to start with existing science and EEOC principles. The relevant science here is Industrial-Organizational (IO) Psychology. IO Psychology has been refined over decades, and it is this proven science that underlies validated pre-hire assessments.

IO Psychology’s scientific backbone provides the methods necessary for auditing AI and removing bias. As with any pre-hire assessment, it is necessary to monitor the outputs of any AI that makes talent screening decisions.

If something like adverse impact is present, the data which led to that biased result can be uncovered and removed. The AI can then be retrained on the data shown to predict job success without bias.
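In practice, adverse impact is commonly flagged with the EEOC’s “four-fifths rule”: if one group’s selection rate at a screening step falls below 80% of the highest group’s rate, that step warrants an audit. A minimal sketch of the check (group names and counts here are hypothetical, purely for illustration):

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants from a group who passed the screening step."""
    return selected / applicants

def adverse_impact_check(rates: dict) -> dict:
    """Flag groups whose selection rate is below 4/5 of the highest
    group's rate -- the EEOC four-fifths rule of thumb."""
    top = max(rates.values())
    return {group: rate / top for group, rate in rates.items()
            if rate / top < 0.8}

# Hypothetical screening results for two applicant groups
rates = {
    "Group A": selection_rate(48, 100),  # 48% selected
    "Group B": selection_rate(30, 100),  # 30% selected
}

# Group B's rate is 0.30 / 0.48 = 62.5% of Group A's, so it is flagged
flagged = adverse_impact_check(rates)
print(flagged)  # {'Group B': 0.625}
```

A flag from a check like this is the trigger for the retraining step described above: find the input data driving the disparity, remove it, and retrain the model on data shown to predict job success without bias.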

This is exactly what HireVue does when building its AI-driven assessments. HireVue leverages AI to evaluate recorded video interviews and game-based challenges to deliver insight into a comprehensive range of candidates’ job-related competencies. With that information at their fingertips, evaluators can base their screening and hiring decisions on an objective, bias-free standard.

And since a video-based assessment like this replaces multiple steps in the hiring process (phone screen, resume screen, pre-hire assessment, first-round interview), time to hire decreases dramatically.

Compare a typical hiring process:

To one reimagined with an AI-driven assessment which doubles as an interview:

In the full webinar, you’ll get a deep dive into where bias in AI comes from, and how it can be removed for more equitable hiring decisions, as well as the answers to these questions:

  1. I'm a small business. Is this cost effective or more suited for large employers?
  2. How are pre-built assessments different from custom assessments?
  3. Can you speak a bit more to the candidate experience? What is the feedback from candidates?
  4. How do you set expectations with candidates so they understand what they’re doing and why?
  5. What is the feedback from recruiters?

Watch the Webinar >