Companies’ Secret Weapon for Eliminating AI Bias: Good Data Mining

Tuesday, April 23, 2019

Artificial intelligence (AI) is transforming the way companies hire, making the process much more proactive, efficient, and effective. Algorithms can screen and shortlist resumes for ideal candidates based on pre-determined hard and soft skill criteria far faster than a recruiter can, and even pick up on body language and vocal cues during interviews that indicate whether a candidate is a fit for leadership positions.

Even with these benefits, the technology has recently come under fire for exhibiting bias, and not just in the workforce management industry. Facial recognition technology made headlines for failing to work equally well across racial groups. Software used in the criminal justice system to forecast the likelihood that an offender will reoffend reportedly predicted that African American defendants pose a higher risk of recidivism than they actually do. Other reports have found that online advertising algorithms showed high-income job listings to men more often than to women.

By now, most companies have policies in place to avoid discriminatory employment practices. What we as an industry need to understand next, especially as technology is increasingly integrated into our hiring workflows, is that AI performance is dictated by the information put into the system. When AI algorithms are trained on subpar data that contains implicit racial, gender, or ideological biases, the decisions made based on that data will be discriminatory as well.

Because machine learning and AI are built to detect patterns, if an organization has historically, even if unintentionally, elevated men to leadership roles over women, the technology will pick up on that correlation in the company’s hiring data and factor it into the predictions and recommendations it makes.

Tackling AI bias: Where to begin

The key to tackling AI bias is to be aware of any prejudice that might exist within the data sets used in AI/machine learning models and to take steps to mine that bias out of the data altogether. In practice, the teams building those data sets should:

  • Exclude qualifiers such as gender, race, or other factors that aren’t pertinent to the hiring decision (a minimal code sketch of this step follows the list)
  • Be cautious with external demographic data sources; filtering out zip codes with high rates of failed criminal background checks, for example, can act as a proxy for race or income
  • Take into account that different countries use different screening criteria, and adjust algorithms accordingly
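
As a concrete illustration of the first point, here is a minimal sketch of stripping protected attributes and obvious proxy fields from historical hiring records before they are used for training. The column names (gender, race, zip_code, and so on) are hypothetical placeholders, not fields from any particular system.

```python
# Minimal sketch: remove sensitive and proxy fields from historical hiring
# data before it is used to train a screening model. Column names are
# hypothetical placeholders.
import pandas as pd

# Fields that encode protected attributes directly.
SENSITIVE_COLUMNS = ["gender", "race", "date_of_birth"]

# Fields that can act as proxies for protected attributes
# (e.g., zip code correlating with race or income).
PROXY_COLUMNS = ["zip_code"]

def prepare_training_data(records: pd.DataFrame) -> pd.DataFrame:
    """Return a copy of the hiring records with sensitive and proxy
    fields removed, keeping only job-relevant features."""
    drop_cols = [c for c in SENSITIVE_COLUMNS + PROXY_COLUMNS if c in records.columns]
    return records.drop(columns=drop_cols)

if __name__ == "__main__":
    df = pd.DataFrame(
        {
            "years_experience": [5, 12, 3],
            "education_level": ["BS", "MS", "BS"],
            "gender": ["F", "M", "F"],
            "zip_code": ["32801", "10001", "94105"],
        }
    )
    print(prepare_training_data(df).columns.tolist())
    # -> ['years_experience', 'education_level']
```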

Effective data mining is not a “set it and forget it” exercise. Teams must continually fine-tune and test their algorithms to keep improving them and to reap the value the technology was intended to create.
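
What that ongoing testing looks like will vary by team, but one simple, repeatable check, shown below as an illustrative sketch rather than a description of any specific product, is to compare shortlisting rates across groups on each release cycle and flag the model when the ratio falls below the commonly cited four-fifths (80%) threshold.

```python
# Illustrative sketch of a recurring fairness check: compare shortlisting
# rates across groups and flag the model if the ratio drops below the
# four-fifths (80%) rule of thumb. The input data here is made up.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group_label, was_shortlisted) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, shortlisted in decisions:
        totals[group] += 1
        if shortlisted:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest group selection rate to the highest."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Example: audit a batch of screening decisions each release cycle.
audit_batch = [("group_a", True), ("group_a", True), ("group_a", False),
               ("group_b", True), ("group_b", False), ("group_b", False)]
ratio = disparate_impact_ratio(audit_batch)
if ratio < 0.8:  # four-fifths threshold
    print(f"Potential adverse impact: ratio={ratio:.2f}; review features or retrain")
```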

Process efficiencies are arguably one of the biggest benefits. AI-enabled technology, for example, can automatically convert CVs into searchable candidate records and then use semantic search to match a candidate’s skill set and experience level with the requirements of an open position. Candidate matching can cover fact-based requirements such as years of experience and education as well as softer characteristics such as cultural fit, making it far easier for hiring managers to focus on the most promising candidates and cut down on screening time.
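
As a rough illustration of the matching idea (not a description of any specific product), the sketch below ranks candidate profiles against a job description using simple TF-IDF text similarity; a production semantic-search system would rely on richer embeddings, and the job text and profiles here are invented.

```python
# Rough illustration of matching parsed CV text to a job description by
# text similarity. TF-IDF is used only to keep the sketch self-contained;
# a real semantic-search system would use richer embeddings.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job_description = "Senior data analyst, 5+ years SQL and Python, BI dashboards"
candidate_profiles = {
    "candidate_1": "Data analyst with 6 years of SQL, Python, and Tableau dashboards",
    "candidate_2": "Front-end developer, 3 years of JavaScript and React",
}

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform([job_description, *candidate_profiles.values()])

# Similarity of each candidate profile to the job description (row 0).
scores = cosine_similarity(matrix[0], matrix[1:]).flatten()
ranked = sorted(zip(candidate_profiles, scores), key=lambda x: x[1], reverse=True)
for name, score in ranked:
    print(f"{name}: {score:.2f}")
```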

Once instances of bias are identified and addressed, AI capabilities not only allow teams to make good hiring decisions faster, but also enable human resource managers to focus on more creative, strategic initiatives such as employee retention, which leads to a stronger bottom line.

No hiring program is immune to bias. Despite being under scrutiny, AI can be an amazing tool for weeding out bias from the hiring process altogether — it just needs the right data. The better the information, the more organizations can reduce hiring bias created by humans and identify the best candidates for the job.

To learn more about the impact of AI in managing today’s total workforce, check out our recent webinar.

 

Stan Limerick is Senior Vice President of Enterprise Architecture and Technology Strategy for Workforce Logiq. He drives the planning and management of the organization’s enterprise-wide technology infrastructure strategy, architecture, standards, and transformation globally. Stan has almost 40 years of IT experience. He has worked with and led teams across multiple industries, including technology, financial services, education, and automotive, and has held executive leadership positions in enterprise architecture, strategic planning, software development, IT infrastructure, security, data warehousing, and business intelligence.
