Artificial intelligence (AI) in recruiting holds the great promise of eliminating human bias. For AI to work well, however, it needs a lot of data, and that data is either entered by humans or learned from past decisions that may carry their own biases. For example, when AI is used in the screening process, the results are often already skewed, either by modeling successful past hires who fit a specific profile or by human-chosen inputs such as keywords, top-tier schools, or previous employers.
When combing through hundreds or thousands of applications, AI can be a helpful timesaver for talent acquisition teams. For example, chatbots, a form of artificial intelligence in messaging apps sometimes referred to as conversational AI, can be used to streamline the talent pool by asking applicants key questions to discern whether they have the right skills or experience for the job. For corporate roles, this seems worthwhile, but when it comes to high-volume recruitment of hourly employees, how much experience do you really need to flip burgers, steam a latte, or be a cashier? Would AI really help in this case? The short answer is yes. It could ask questions about work availability or prompt applicants to upload a driver's license or certification.
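The kind of screening described above is often simple rules, not deep learning. A minimal sketch of such a flow might look like the following; the question text and pass criteria are hypothetical examples, not any specific vendor's chatbot logic:

```python
# Toy rules-based screening flow for hourly roles.
# Questions and pass criteria are illustrative assumptions only.

SCREENING_QUESTIONS = [
    {
        "id": "availability",
        "prompt": "Are you available to work weekends?",
        "passes": lambda answer: answer.strip().lower() in {"yes", "y"},
    },
    {
        "id": "license",
        "prompt": "Do you hold a valid driver's license?",
        "passes": lambda answer: answer.strip().lower() in {"yes", "y"},
    },
]

def screen_applicant(answers):
    """Return (qualified, failed_ids) for a dict of question_id -> answer."""
    failed = [
        q["id"]
        for q in SCREENING_QUESTIONS
        if not q["passes"](answers.get(q["id"], ""))
    ]
    return (len(failed) == 0, failed)

qualified, failed = screen_applicant({"availability": "Yes", "license": "no"})
print(qualified, failed)  # False ['license']
```

Because the criteria here are objective (availability, a required license) rather than proxies like school or employer names, this style of screening avoids some of the bias pitfalls described earlier.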
AI can also be tricky: while it can streamline some tasks, it is often hard to teach it to test for soft skills such as temperament or culture fit. If you're hiring 10,000 workers at a fast food chain this year, what types of questions could your hiring tool ask to ensure they're a good culture fit, or that they would use good judgment when dealing with a disgruntled customer? Can AI really test for subjective qualities such as good judgment?
Beyond context and tone, AI can sometimes be its own worst enemy. AI's sole purpose is to help you, as a recruiter, hire the best talent from your talent pool, even if the entire pool is made up of subpar candidates. The AI tool doesn't know everyone is subpar; it just knows it needs to tell you who is the best of the herd.
AI and machine learning are only as effective as the data sets they use. They can improve over time, but they are also built by humans who, despite the best intentions, may carry implicit and unconscious biases. Without proper training, humans will continue to be humans.
So far, AI has proven fallible in many attempts to automate recruiting and hiring practices, from Amazon scrapping a secret recruiting tool that showed bias against women to HireVue eliminating facial expression monitoring from its assessment tool. There are arguments on both sides as to whether AI is helpful or harmful in sourcing and screening candidates.
There is hope. AI and machine learning are advancing every day to combat biases, make recruiting less manual, and use less paper. Intelligent text-editing software already helps companies review job descriptions and flag biased language so that members of underrepresented groups are not discouraged from applying in the first place. Other tools help talent acquisition teams mask age, tone of voice, and faces, and rediscover talent in existing talent pools.
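At its simplest, the job-description review described above can be approximated with a lexicon lookup. The sketch below is a toy version under that assumption; the flagged terms and suggestions are invented examples, while real tools use much larger, research-backed word lists and more sophisticated language models:

```python
# Toy keyword-based bias check for job descriptions.
# FLAGGED_TERMS is a small, hypothetical lexicon for illustration.

FLAGGED_TERMS = {
    "ninja": "consider a neutral title such as 'engineer'",
    "rockstar": "consider 'high-performing'",
    "young": "may signal age bias; describe skills instead",
    "aggressive": "consider 'proactive' or 'results-driven'",
}

def review_job_description(text):
    """Return (term, suggestion) pairs for flagged words found in the text."""
    words = set(text.lower().split())
    return [(term, tip) for term, tip in FLAGGED_TERMS.items() if term in words]

for term, tip in review_job_description("We want a young rockstar developer"):
    print(f"'{term}': {tip}")
```

Even this naive approach illustrates the value: the tool surfaces loaded wording before a posting goes live, and a human makes the final call on how to reword it.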
Let's remember: human resources is about humans, and humans will always play an integral role in fair and efficient hiring, from setting clear success metrics for a role to ensuring the best candidate experience. This is where the human element comes in. A robot can't fully replace a human, but AI can make your job a little easier.
Regardless of whether you use AI in some or all of your hiring, it’s still important to identify where your diversity efforts fall short: during the sourcing process or during the screening process. If you’re interested in reading more about this topic, check out our blog on Diversity Hiring: Sourcing or Screening Problem?