Are Humans Less Biased than AI in the Recruiting Process?

Federico Grinblat

May 29, 2024

In the modern recruiting landscape, the debate about whether humans or artificial intelligence (AI) are less biased in the hiring process is gaining momentum. Both humans and AI systems carry their own sets of biases, but understanding the nature of these biases and how they manifest can help us develop strategies to minimize them. This article explores the biases inherent in human and AI recruiting processes and introduces Brainner's approach to combining both to reduce bias and enhance hiring efficiency.

Human Bias

Unconscious or implicit bias refers to the mental processes that lead individuals to reinforce stereotypes, often without realizing it. In recruiting, these biases can manifest in various ways:

  • Affinity Bias: Recruiters might prefer candidates who remind them of themselves or share similar backgrounds. This can lead to decisions based on personal affinity rather than objective criteria. For example, a hiring manager might unknowingly select a candidate who resembles them when they were younger.
  • Name Bias: Research shows that applicants with stereotypically white-sounding names receive 50% more interview callbacks than applicants with African American-sounding names, despite having identical resumes.
  • Gender Bias: In tech roles, 74% of women report experiencing discrimination based on gender.
  • Race and Ethnicity Bias: Candidates may be judged based on perceived race or ethnicity indicated by their names or photos.
  • Age Bias: Older candidates might be overlooked in favor of younger ones, regardless of experience and qualifications.

These biases often surface quietly and subconsciously, affecting the initial screening stages when resumes are dismissed based on names, educational backgrounds, or even hobbies. For instance, a recruiter might choose a candidate because they attended the same university or share similar interests, rather than focusing on qualifications and skills.

AI Bias

Algorithmic Bias: Generative AI, for all its potential, can perpetuate latent biases inherited from the data its human creators produced. Historical prejudices can inadvertently seep into the algorithms, reflecting past discriminatory practices. For example, if past hiring decisions were influenced by biases related to gender, age, faith, or race, an AI system trained on those decisions might learn to replicate the same patterns, inadvertently excluding qualified candidates from underrepresented backgrounds.

A notable example is Amazon’s AI-driven hiring model, which was found to favor male candidates for technical roles. This bias emerged from historical gender imbalances within the company and the broader technology sector. The algorithm, learning from biased data, perpetuated these biases, highlighting the importance of careful curation and monitoring of AI systems.

In another case, an AI resume screener was trained on the CVs of existing employees, leading to biases based on hobbies associated with successful male staff, such as baseball or basketball, while downgrading candidates who mentioned softball, typically played by women. These examples underscore the necessity of vigilance in preventing AI systems from amplifying existing biases and ensuring they promote diversity and inclusion.

Brainner’s Approach

Both humans and AI have their own biases, which is why a combination of the two is essential to minimize bias in the recruitment process. Brainner's resume screening software leverages AI to generate objective analyses, designed to avoid gender, class, and other biases, while also incorporating human judgment to provide context and final decision-making.

  • Objective Analysis: Brainner's AI model analyzes each resume and extracts the predefined criteria set by recruiters. This ensures that resumes are evaluated based on skills and qualifications, not subjective preferences.
  • Human Setup and Calibration: Brainner starts by allowing recruiters to define specific job criteria, such as years of experience in a particular position or proficiency in certain skills. The recruiter's involvement in this step is crucial, as the criteria definition is key to getting a good output. Human supervision is also essential in calibrating these criteria, enabling recruiters to assign greater weight to the requirements that matter most.
  • Human Supervision and Decision: While AI handles the initial analysis, human recruiters review the results, bringing context and insight that AI cannot provide. This combination helps ensure a balanced and fair evaluation process.
  • Explainability: Brainner’s scoring provides transparent explanations for why candidates are suggested, helping recruiters make informed decisions based on objective data and contextual understanding.
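To make the weighting, scoring, and explainability steps above concrete, here is a minimal sketch of how criteria-based resume scoring can work in principle. All names and values are hypothetical illustrations, not Brainner's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str      # e.g. "5+ years of Python experience"
    weight: float  # recruiter-assigned importance during calibration

def score_resume(criteria, matches):
    """Return a normalized weighted score and a per-criterion explanation.

    `matches` maps each criterion name to whether the resume satisfies it
    (in a real system this would come from the AI's resume analysis).
    """
    total_weight = sum(c.weight for c in criteria)
    score = 0.0
    explanation = []
    for c in criteria:
        met = matches.get(c.name, False)
        if met:
            score += c.weight
        explanation.append(f"{c.name}: {'met' if met else 'not met'} (weight {c.weight})")
    return score / total_weight, explanation

# Hypothetical job criteria defined and weighted by a recruiter:
criteria = [
    Criterion("5+ years Python", 3.0),
    Criterion("Cloud experience", 2.0),
    Criterion("Team leadership", 1.0),
]
matches = {"5+ years Python": True, "Cloud experience": True, "Team leadership": False}
score, why = score_resume(criteria, matches)
```

Because the score is built only from recruiter-defined criteria, and each line of the explanation states which requirement was met and at what weight, a human reviewer can see exactly why a candidate ranked where they did before making the final call.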

By combining the strengths of AI and human judgment, Brainner aims to create a more equitable and efficient recruiting process. AI handles the repetitive and data-intensive tasks, while humans focus on strategic decision-making, ultimately reducing bias and enhancing the quality of hires.


In the recruiting process, neither humans nor AI are free from bias. However, by understanding and addressing these biases, we can create a fairer and more efficient hiring process. Brainner's approach of combining AI's objective analysis with human contextual understanding for CV screening provides a balanced solution, leveraging the best of both worlds to minimize bias and make smarter hiring decisions. Embracing this symbiotic relationship between AI and human recruiters can transform the recruitment landscape, promoting diversity and inclusivity while optimizing efficiency.
