‘Kept me awake at night’: The ethical considerations of AI in recruitment

Within the next year, if you apply for a new job or a promotion, it’s likely your application will first be screened by an artificial intelligence (AI) tool.

It won’t be making the decision by itself. But it will be providing the recruiter or employer with key insights about your personality, skills and aptitude for the role, based on data from countless sources such as interviews, assessments and social networks. It will rank you against other candidates, which could affect your chances of landing that role.

What was once the realm of science fiction, computers helping humans make better decisions, is fast becoming a reality. One of the latest studies on adoption, a 2022 survey conducted by the Society for Human Resource Management, found that 42% of businesses were already adopting AI in HR functions, and that 79% of those were considering it for use in their recruitment processes. AI adoption is moving at lightspeed, so even though this survey is barely a year old, its figures may already be out of date.

We shouldn’t shy away from this either. Beyond the obvious productivity gains, the further introduction of AI into HR and recruitment stands to benefit minority groups who are currently rejected because of prospective employers’ unconscious biases.

But this isn’t like companies adopting Microsoft Excel or any other productivity tool, and it shouldn’t be adopted without question. There are ethical implications woven into AI’s rise that can’t be ignored, especially when it comes to recruitment and HR.

Three fundamental ethical considerations

In a recent whitepaper, we outlined how AI-based systems differ from traditional software and why ethical considerations matter for the further rollout of AI into HR. We listed nine key ethical considerations, but, for me, three stand out as fundamental to this particular application of AI.

The first is transparency and explainability. There shouldn’t be any tricks involved in using AI tools to screen candidates; the system needs to be able to explain its decisions to both candidates and stakeholders, which is key to building trust. To do this, there must be transparency about how the AI reached its outcome. Unlike typical software, whose rules are specified by its developers, AI systems learn their own decision rules from large datasets, and complex deep-learning models make it difficult to know what those rules are in their entirety. It is therefore crucial that developers of AI are transparent about the data being used, how it is sourced and the algorithms used to build their models, and that they are able to explain the outcomes.
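
To make this concrete, here is a minimal, hypothetical sketch of one way a developer might probe which inputs drive a screening model’s scores. The feature names and data below are invented for illustration, and production systems are far more sophisticated, but the principle is the same: a vendor should be able to say which signals moved a candidate’s score.

```python
# A hypothetical explainability sketch: train a toy screening model on
# synthetic data, then measure how much each input drives its decisions.
# Feature names and data are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["communication", "problem_solving", "teamwork", "adaptability"]
X = rng.normal(size=(500, len(features)))  # stand-in assessment scores
# Synthetic "hire" labels driven mostly by the first two features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Permutation importance: shuffle one feature at a time and measure how
# much model accuracy drops. Bigger drops mean the feature matters more.
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, score in sorted(zip(features, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```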

Second, AI-based recruitment tools should be inclusive and accessible. These tools should help employers broaden their prospective talent pool, rather than shrink it. We already know that AI can help create opportunities for minorities, who are often screened out of the recruitment process due to unconscious bias.

An important point here is that AI is not a uniform entity. It comes in different forms and is typically embedded in a broader recruitment platform that takes a predefined type of data as input, such as keywords from your resume or text transcribed from a video interview. It is therefore important to consider the inclusivity of the whole system, not just the AI tool contained within it. That means testing for more than bias in the AI’s outcomes: it also means assessing indicators of inclusivity such as candidate satisfaction, completion times and dropout rates across demographic groups.
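
As a simple illustration of whole-system testing, the sketch below compares a hypothetical pipeline’s completion and selection rates across demographic groups and applies the widely used “four-fifths rule” heuristic for adverse impact. The group labels and counts are invented; the point is that inclusivity is measured across the whole funnel, not just within the model.

```python
# A hypothetical inclusivity check across demographic groups: completion
# rates, selection rates, and the "four-fifths rule" for adverse impact.
# Group labels and counts are invented for illustration only.
from collections import namedtuple

GroupStats = namedtuple("GroupStats", ["started", "completed", "selected"])

stats = {
    "group_a": GroupStats(started=400, completed=380, selected=95),
    "group_b": GroupStats(started=250, completed=210, selected=40),
}

completion = {g: s.completed / s.started for g, s in stats.items()}
selection = {g: s.selected / s.completed for g, s in stats.items()}

# Four-fifths rule: flag any group whose selection rate falls below 80%
# of the highest group's rate, a common adverse-impact heuristic.
best = max(selection.values())
for group in stats:
    ratio = selection[group] / best
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: completion={completion[group]:.0%}, "
          f"selection={selection[group]:.0%}, impact ratio={ratio:.2f} [{flag}]")
```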

Lastly, all AI recruitment tools should have firm human oversight and control. The responsible development and use of AI requires human interaction and decision-making at key stages of the process, such as the selection of data and the uptake of the AI’s recommendations. AI-based recommendations are just that: recommendations. They shouldn’t be accepted blindly, without human scrutiny. As we’ve seen with current iterations of ChatGPT and other AI tools, they are not infallible. Recruitment is very much a human process, and the final decisions must be made by humans, not AI.
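
One lightweight way to enforce this in software is to treat the model’s output as a recommendation that cannot become an outcome until a named human signs off. The sketch below is illustrative only; the names and structure are hypothetical, not any particular product’s API.

```python
# An illustrative human-in-the-loop pattern: the AI produces a scored
# recommendation, but no outcome exists until a named reviewer decides.
# Names and structure are hypothetical, not a real product's API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ScreeningRecommendation:
    candidate_id: str
    score: float          # the model's ranking signal, not a decision
    rationale: str        # explanation surfaced to the reviewer
    decision: Optional[str] = None
    reviewer: Optional[str] = None

    def finalise(self, decision: str, reviewer: str) -> None:
        # The score never becomes an outcome on its own; a named human
        # must accept or override the recommendation.
        self.decision, self.reviewer = decision, reviewer

rec = ScreeningRecommendation("c-102", 0.71, "Strong written communication")
rec.finalise("progress to interview", reviewer="jordan@example.com")
print(rec.decision, "-", rec.reviewer)
```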

Not just a problem for AI developers

Those of us developing and building such tools already consider the ethics of our creations carefully. I can speak from experience: they’ve kept me awake at night as I think about how we design our product. But broader awareness of these considerations is needed to ensure that everyone is on the same page when it comes to using AI-based tools in recruitment.

Using AI, we can turn the hours recruiters spend screening talent into minutes. We can also transform a cumbersome candidate experience into a reflective learning opportunity that candidates love. The benefits are substantial, but if these tools are mismanaged, they could lead to poor outcomes and a backlash against them.

Your next job application may be assisted by an AI, and with the right ethical framework in place, that is something you can look forward to.

Buddhi Jayatilleke is the chief data scientist of Sapia.ai.
