Imagine: you are an HR manager at a product IT company. Hundreds of resumes, and the deadline was "yesterday". Someone suggests: "Connect AI, it will do everything itself." Convenient? Yes. But if the bot screens out a strong candidate or analyzes a face without consent, the responsibility is on you.
In 2024, the British regulator ICO audited companies that use AI at recruitment stages. The results revealed a number of risks: automatic rejections without human involvement, a lack of clear information for candidates, and collection of sensitive personal data without consent. Ukrainian legislation does not yet contain special rules on AI, but practice is steadily moving towards more responsible use.
IT lawyers at Stalirov&Co shared practical observations on how the use of recruiting AI in IT business can lead to legal complications and how to avoid them.
Key risks: what you need to know about the legality of AI
Before entrusting AI with part of recruiting, it is worth evaluating the possible legal consequences. Here is what lawyers advise IT projects to take into account:
Discrimination and bias
AI learns from the past. If a company previously hired mostly men, the system will repeat that pattern. This is how Amazon had to abandon its own hiring tool after a sexism scandal: the AI systematically downgraded female candidates whose resumes mentioned women's colleges or participation in women's communities.
Data privacy
Many tools process biometric data: facial expressions, voice, emotional signals. In the EU, Canada, Australia, the USA, and Ukraine such information is considered sensitive, and processing it is allowed only with explicit consent.
Lack of transparency (Black Box AI)
A "black box" is an algorithm whose decisions are difficult or impossible to explain. If the AI rejects a candidate, you will not be able to explain why. Yet under Art. 22 of the GDPR (the European regulation on personal data protection), a person has the right not to be subject to a decision based solely on automated processing if it produces legal consequences or similarly significantly affects them. In Ukraine, similar principles are taking shape through judicial practice: the candidate has the right to a review of such decisions, otherwise legal risks increase.
How to reduce the likelihood of a complaint
To minimize the risks that IT lawyers encounter in practice, follow these steps:
- Leave the last word to a human. Do not let the system make final decisions without HR involvement (a minimal sketch of such a human-in-the-loop gate follows this list).
- Conduct a DPIA (Data Protection Impact Assessment), an assessment of the impact of processing on personal data. This matters both when you implement a third-party product and when you order development of your own software. In both cases your company acts as a "controller": it is you who decides why and how personal data is processed. Some technical processing operations can be delegated to an external contractor, the so-called "processor". Even then, however, the main obligation to comply with personal data protection requirements stays with the controller: if the instructions are unclear, the security level is insufficient, or the data is used beyond the stated purpose, the responsibility rests with you.
- Use just-in-time notices. If your AI system runs video interviews or video capture, it effectively collects biometric data, and that is sensitive information. Instead of forcing candidates to wade through five pages of the Privacy Policy, show a short notice at the moment of collection: "This part of the process involves video recording. Do you consent?" (a sketch of such a consent prompt also follows this list).
- Formalize the documents. The terms of AI use must be described in agreements, contracts, and public offers.
- Do due diligence on the AI provider. When a company decides to implement an AI recruiting system, the choice of vendor should not rest on functionality and price alone. It is important to account for legal risks, transparency, and compliance with upcoming legal requirements. During negotiations with the vendor, pay attention to the following:
- Is it possible to explain the logic of the decisions made?
- Is there a mechanism for contesting or reviewing the results?
- What are the terms regarding intellectual property? Who owns the algorithm, the collected data, and the analysis results? In some cases standard terms assume that everything belongs to the provider, which means losing control over the system.
- Who holds the rights to the code? Without access to it, your team will not be able to adapt the tool or scale the solution later, for example for a new vacancy or other internal development.
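To make the first step concrete, here is a minimal sketch of a human-in-the-loop gate for a screening pipeline. Everything in it (the function names, the Screening record, the 0.5 threshold) is an illustrative assumption, not any specific vendor's API: the point is only that the AI may shortlist, while a rejection record cannot exist without a named human reviewer.

```python
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    ADVANCE = "advance"
    NEEDS_HUMAN_REVIEW = "needs_human_review"


@dataclass
class Screening:
    candidate_id: str
    ai_score: float  # hypothetical 0.0-1.0 score from the screening model
    decision: Decision


def route(candidate_id: str, ai_score: float, threshold: float = 0.5) -> Screening:
    """The AI may advance candidates, but it never rejects on its own:
    every below-threshold result is queued for an HR reviewer instead."""
    if ai_score >= threshold:
        return Screening(candidate_id, ai_score, Decision.ADVANCE)
    return Screening(candidate_id, ai_score, Decision.NEEDS_HUMAN_REVIEW)


def final_rejection(screening: Screening, reviewer: str, reason: str) -> dict:
    """A rejection record is created only with a named human reviewer and a
    stated reason: the kind of audit trail Art. 22 GDPR points to."""
    if not reviewer or not reason:
        raise ValueError("a final rejection requires a human reviewer and a reason")
    return {
        "candidate_id": screening.candidate_id,
        "ai_score": screening.ai_score,
        "decision": "rejected",
        "reviewer": reviewer,
        "reason": reason,
    }
```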
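And for the just-in-time notice, a sketch of how consent for video recording might be captured at the exact moment of collection. The notice text comes from the example above; the field names and storage format are assumptions for illustration.

```python
from datetime import datetime, timezone
from typing import Optional

# Short notice shown at the moment of collection instead of a long policy.
VIDEO_NOTICE = "This part of the process involves video recording. Do you consent?"


def record_video_consent(candidate_id: str, answered_yes: bool) -> Optional[dict]:
    """Recording may start only after explicit consent is stored together
    with the purpose, the exact notice shown, and a timestamp."""
    if not answered_yes:
        return None  # no consent: skip the video stage or offer an alternative
    return {
        "candidate_id": candidate_id,
        "purpose": "video interview assessment",
        "notice_shown": VIDEO_NOTICE,
        "granted_at": datetime.now(timezone.utc).isoformat(),
    }
```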
Conclusion
Recruiting AI systems are not only about saving time; they are also about balancing automation and responsibility. Excessive trust in algorithms without human control can turn into complaints, fines, and reputational damage.
To avoid such risks, it is worth building the process on clear principles from the very start:
- human control,
- transparent informing of candidates,
- risk assessment (DPIA),
- legal support of IT projects.
Teams that take these principles into account from the start reduce risks and build trust with both candidates and regulators.
Author: Valery Stalirov, CEO of the IT law firm Stalirov&Co