Gender Bias in AI Recruitment Systems

When biases are embedded in ostensibly neutral machine learning algorithms at various stages of an AI system’s development, they can surface as real-world disparities.

Let’s take an AI/ML hiring tool in the tech recruitment industry as an example. Automation is becoming increasingly prevalent in the hiring process: raw data (CVs) is fed to machine learning systems so they can learn patterns and predict outcomes. What happens, though, when the vast majority of CVs come from men, in businesses where men already dominate?

Historical discrimination feeds the algorithm, producing a feedback loop in which new systems repeat past behaviors. Fed biased data, the machine learns to draw biased conclusions.
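
To make that feedback loop concrete, here is a minimal sketch in Python (scikit-learn for convenience; all data is synthetic and every feature name is a hypothetical stand-in, not any real vendor’s system). A model trained on past hiring decisions that penalised women rediscovers that penalty through an innocuous-looking proxy feature, even though gender itself is never given to it:

```python
# Synthetic illustration of a biased-data feedback loop. All numbers
# and feature names below are invented assumptions for this sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Candidate features: years of experience (comparable across groups) and
# a binary proxy correlated with gender (a hypothetical example, say
# membership of a women-in-tech society).
experience = rng.normal(5, 2, n)
is_woman = rng.random(n) < 0.2          # women underrepresented in the data
proxy = (rng.random(n) < np.where(is_woman, 0.8, 0.1)).astype(float)

# Historical labels: past recruiters rewarded experience but also
# systematically penalised women, so the labels encode discrimination.
logits = 0.8 * (experience - 5) - 1.5 * is_woman
hired = rng.random(n) < 1 / (1 + np.exp(-logits))

# Train WITHOUT the protected attribute ("fairness through unawareness").
X = np.column_stack([experience, proxy])
model = LogisticRegression().fit(X, hired)

# The model rediscovers the bias through the proxy feature:
probe = np.array([[5.0, 1.0],   # typical candidate, proxy present
                  [5.0, 0.0]])  # identical candidate, proxy absent
print(model.predict_proba(probe)[:, 1])
# The first probability comes out lower despite identical qualifications.
```

Note that simply dropping the protected attribute does not help: the discrimination baked into the historical labels re-enters through whatever features correlate with it.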

Human design choices regarding algorithm development, data labeling, and modeling can also introduce bias because the team creating these tools brings their own values, objectives, and assumptions to the table.

Because humans developed them, AI systems are already prejudiced. Had a team member realised that the CVs underrepresented minority groups, decisions to enhance the data might have ensured more inclusive representation. Bias also enters when the tool is used: fewer women apply for tech positions, which reflects both the culture inside these organizations and the makeup of the tech talent pool. This reinforces the feedback loop through which gender bias propagates into the broader AI ecosystem, of which the tool is only one part.
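
As a hedged sketch of the kind of data enhancement described above, the snippet below audits group representation in a toy CV dataset and oversamples the underrepresented group before any training. The group encoding, feature counts, and 50/50 target are illustrative assumptions, not a complete fix:

```python
# Toy data audit and rebalancing step; all data here is synthetic.
import numpy as np

rng = np.random.default_rng(1)

def rebalance(X, group, target_share=0.5):
    """Oversample rows where group == 1 until they form target_share."""
    minority = np.flatnonzero(group == 1)
    majority = np.flatnonzero(group == 0)
    need = int(target_share / (1 - target_share) * len(majority))
    if len(minority) >= need:
        return X, group                  # already balanced enough
    extra = rng.choice(minority, size=need - len(minority), replace=True)
    idx = np.concatenate([majority, minority, extra])
    return X[idx], group[idx]

# 90% of the historical CVs come from men (group == 0).
group = (rng.random(1000) < 0.1).astype(int)
X = rng.normal(size=(1000, 3))           # placeholder CV features
print(f"before: {group.mean():.0%} women")
X_bal, group_bal = rebalance(X, group)
print(f"after:  {group_bal.mean():.0%} women")
```

Resampling only addresses who is represented in the data; if the historical hire/reject labels themselves encode discrimination, as in the earlier sketch, balanced inputs alone will not remove it.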

Every technology we use daily has gender and other prejudices built into its underpinnings, from gendered viewing recommendations and adverts on streaming services to potentially more destructive algorithms that determine who is approved for welfare, who is charged with a crime, or who receives a medical diagnosis. Of course, the owners of these systems would deny, deny, deny! After all, it would look bad to admit to racial profiling, right?

Like human bias, algorithmic bias can produce unfair conclusions, and it can spread them widely and quickly. These biases stem from the underlying design of AI systems rather than from malicious intent.

Because society itself is still far from equitable, the road to just and fair AI is a long one.

There are many viable remedies. These include broadening diversity in the IT sector, developing both technical and non-technical techniques for bias prevention (one such technical check is sketched below), and establishing frameworks for AI ethics and governance. It is imperative that we act now, by establishing democratic, participatory approaches to fair, ethical, and responsible AI design, before defective technology becomes permanently ingrained in society.
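
As one illustration of what a technical bias-prevention measure can look like, here is a small, hedged sketch in Python: an adverse-impact audit that compares a tool’s selection rates across groups. The 0.8 threshold follows the well-known four-fifths rule of thumb, and the data and group labels are invented for illustration:

```python
# Adverse-impact audit over binary hire/shortlist decisions.
import numpy as np

rng = np.random.default_rng(2)

def selection_rates(decisions, groups):
    """Fraction of candidates selected, per group."""
    return {g: decisions[groups == g].mean() for g in np.unique(groups)}

def four_fifths_check(decisions, groups, threshold=0.8):
    """Flag adverse impact when the lowest selection rate falls below
    `threshold` times the highest (the classic four-fifths rule)."""
    rates = selection_rates(decisions, groups)
    lo, hi = min(rates.values()), max(rates.values())
    ratio = lo / hi if hi else 1.0
    return rates, ratio, ratio >= threshold

# Toy audit: the tool shortlists about 30% of men but only 12% of women.
groups = np.array(["man"] * 500 + ["woman"] * 500)
decisions = np.concatenate([rng.random(500) < 0.30,
                            rng.random(500) < 0.12]).astype(int)

rates, ratio, passed = four_fifths_check(decisions, groups)
print(rates, f"ratio={ratio:.2f}", "PASS" if passed else "FAIL: adverse impact")
```

Checks like this catch disparities after the fact; the non-technical measures listed above aim to stop them arising in the first place.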

Everyone should be able to use technology, and we have the chance to shape it to create a better, more inclusive world.