Diving into the making of the algorithms that influence our careers – L’Express

The God of work exists. There is even a whole host of employment deities who influence human destinies every day from the top of their digital Olympus. Made up of mathematical formulas and artificial neurons, they decide which candidates will be shown a given job offer, and which hand-picked workers to recommend to which employer. A colossal responsibility. This is precisely why, in its bestiary of artificial intelligences, the European AI Act classifies them as “high risk”.

The French company Malt knows these creatures well: it has become an expert at building them. Logically so, since its business is connecting qualified freelancers – data scientists, software engineers, graphic designers, etc. – with companies. For a platform that today lists more than 700,000 freelancers and serves 70,000 companies in 9 countries, optimizing these connections is vital. Employees and employers alike are already frustrated by the time it takes to fill permanent positions. In the world of freelancing, where assignments are often shorter and more frequent, such delays would be prohibitive.

READ ALSO: How to make France an AI giant: instructions for use

Malt has therefore transformed its technological framework in recent years. In the past, analyzing freelancers and matching them to assignments was done according to explicitly defined rules. “To put it simply, if the title, description or experience contained certain keywords relevant to the assignment, the profile rose in the ranking. If the price did not fall within the target range, it dropped,” explains Claire Lebarz, Data and AI manager at Malt.
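
Such rule-based ranking can be pictured with a minimal sketch along these lines; the field names, keyword list and weights below are illustrative assumptions, not Malt’s actual rules.

```python
# Minimal sketch of rule-based ranking as described: keyword hits raise a
# profile, an out-of-range day rate lowers it. All fields and weights are
# invented for illustration.

def rule_based_score(profile: dict, mission: dict) -> float:
    score = 0.0
    text = " ".join([
        profile.get("title", ""),
        profile.get("description", ""),
        " ".join(profile.get("experiences", [])),
    ]).lower()

    # Each mission keyword found in the profile text pushes it up the ranking.
    for keyword in mission["keywords"]:
        if keyword.lower() in text:
            score += 1.0

    # A daily rate outside the mission's target range pushes the profile down.
    low, high = mission["rate_range"]
    if not (low <= profile["daily_rate"] <= high):
        score -= 2.0

    return score


profiles = [
    {"title": "Senior data scientist", "description": "Python, ML pipelines",
     "experiences": ["recommendation systems"], "daily_rate": 650},
    {"title": "Graphic designer", "description": "Branding and print",
     "experiences": ["logos"], "daily_rate": 400},
]
mission = {"keywords": ["data scientist", "python"], "rate_range": (500, 700)}

ranked = sorted(profiles, key=lambda p: rule_based_score(p, mission), reverse=True)
print([p["title"] for p in ranked])
```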

The perfect match

To match the right profiles with the right offers, Malt has since refined its system. “The idea is to project the assignments and the freelancers into a mathematical space,” explains Claire Lebarz. “Then look at the distances between them.” What we call the perfect match is therefore more prosaic than one might think: it is the shortest path between two points.
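
In code, that idea might look like the sketch below: each freelancer and each assignment becomes a vector, and the best matches are simply the closest vectors. The 4-dimensional vectors stand in for real embeddings produced by a trained model; the values are invented.

```python
# Sketch of "projecting missions and freelancers into a mathematical space"
# and matching by distance. Vectors are toy stand-ins for learned embeddings.
import numpy as np

freelancer_vectors = {
    "freelancer_a": np.array([0.90, 0.10, 0.30, 0.00]),
    "freelancer_b": np.array([0.20, 0.80, 0.10, 0.50]),
    "freelancer_c": np.array([0.85, 0.20, 0.40, 0.10]),
}
mission_vector = np.array([0.88, 0.15, 0.35, 0.05])

def cosine_distance(u: np.ndarray, v: np.ndarray) -> float:
    # Smaller distance = closer in the embedding space = better match.
    return 1.0 - float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

matches = sorted(
    freelancer_vectors.items(),
    key=lambda item: cosine_distance(item[1], mission_vector),
)
for name, vec in matches:
    print(name, round(cosine_distance(vec, mission_vector), 3))
```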

But for the whole thing to work, the offers and the talent must have been correctly analyzed and positioned. Here, Malt has a major advantage: the group has a vast amount of data on the subject, accumulated since its creation in 2013. “All of our matching history, but also the matches that failed and the assignments that were turned down, along with feedback from our clients. This allows us to refine our future recommendations,” points out Claire Lebarz.

READ ALSO: Artificial intelligence: this start-up ready to tackle Europe’s Achilles heel

This pool of data allowed Malt to catch the wave of machine learning before diving into the deep end of deep learning. Intimidating technical icebergs between which Claire Lebarz navigates with ease. “In classic machine learning, we spend time coding and testing all the variables potentially relevant to the prediction,” she explains. “The model learns what weight to give them. In neural models, it not only learns those weights, it also transforms the initially coded variables into new variables relevant to the prediction.” To take a familiar illustration, imagine feeding weight and height into a tool meant to predict whether someone is healthy. With a neural network, the model will likely learn, without being told, that body mass index – weight divided by the square of height – is a useful transformation for making its prediction.
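
The weight-and-height illustration can be made concrete with a small sketch: one model is handed the hand-coded BMI variable, the others get only raw weight and height. The data is synthetic, scikit-learn is assumed as the library, and nothing here reflects Malt’s actual models.

```python
# Classic ML vs. neural feature learning, on the BMI example from the article.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
height = rng.uniform(1.5, 2.0, 2000)     # metres
weight = rng.uniform(45.0, 120.0, 2000)  # kilograms
bmi = weight / height ** 2
label = bmi > 25                         # toy "overweight" threshold

raw = np.column_stack([weight, height])
engineered = bmi.reshape(-1, 1)

# Classic approach: the modeller hand-codes the relevant variable (BMI).
clf_engineered = LogisticRegression().fit(engineered, label)

# Linear model on raw inputs: nobody tells it about BMI, so it struggles.
clf_linear_raw = make_pipeline(StandardScaler(), LogisticRegression()).fit(raw, label)

# Neural network on raw inputs: hidden layers can learn a BMI-like transformation.
clf_neural_raw = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0),
).fit(raw, label)

print("hand-coded BMI feature:", clf_engineered.score(engineered, label))
print("linear model, raw data:", clf_linear_raw.score(raw, label))
print("neural net, raw data  :", clf_neural_raw.score(raw, label))
```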

AI, a magic cure for unemployment?

Today, Malt uses different in-house models but also external models, notably certain generative AI tools. “Having a project description analyzed by an LLM (a large language model) makes it possible to automatically extract the key skills corresponding to the experiences mentioned,” illustrates Claire Lebarz. The company is not the only one diving into AI.
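
The kind of skill extraction Claire Lebarz describes might look like the following sketch. The OpenAI client and model name are stand-ins chosen for the example; the article does not say which LLM Malt actually uses, nor how it prompts it.

```python
# Hedged sketch of LLM-based skill extraction: a project description goes in,
# a list of key skills comes out. Client and model are illustrative choices.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

project_description = (
    "We need to redesign our e-commerce recommendation engine, "
    "migrate batch pipelines to streaming, and document the new API."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "Extract the key skills required by the project. "
                    "Return a comma-separated list, nothing else."},
        {"role": "user", "content": project_description},
    ],
)
print(response.choices[0].message.content)
```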

France Travail, Indeed, LinkedIn… Public and private alike, all the players in the employment field are upgrading their systems. At the same time, start-ups promising to pre-digest recruiters’ work, using videos and games analyzed by AI, are multiplying. Behind all this, some harbor the sweet hope that technology could solve unemployment: matching supply and demand so well that people would be perfectly steered toward vacant positions, sectors under strain, and training for the professions of the future, suited to their skills and their secret aspirations. In 2016, France got excited about Paul Duan, a young geek working on these subjects who hoped to reduce unemployment by 10% with a simple algorithm.

READ ALSO: Artificial intelligence: with Llama 3, the perilous shift of Meta

In practice, AI in employment poses immense challenges. The arrival of tools such as ChatGPT is shaking up the landscape. On the one hand, these AIs help candidates and employers write better applications and offers. On the other hand, they make the task so easy that we can expect “digital channels to be flooded. And firms overwhelmed with CVs,” warn MIT researchers in a study published in March. Applications like LazyApply – literally, “apply lazily” – allow Internet users to apply to thousands of job offers in a few hours. The program, which costs between $100 and $250, asks for only some basic information about their skills, experience and the positions they are targeting. It then applies left and right on their behalf. “The signal of effort and interest” that the act of applying constitutes is thereby “corrupted,” point out the MIT researchers. And the applications of job seekers, all shaped by the same language models, risk becoming “indistinct” from one another.

Another downside: AI also gets things wrong. The book The Algorithm by Hilke Schellmann (2024, untranslated), who tested many of these tools, shows how the programs of certain less-than-serious start-ups gave her positive assessments even when she answered with nonsense, or replied in German gibberish when asked to demonstrate her level of English.

The real headache in the employment field, however, is that these AIs have been trained on past recruitment histories – a harsh mirror of human prejudices. Those biases give a disproportionate place to candidates from certain schools, within a narrow age range, male, with unexotic surnames, while others with equal skills remain all but invisible. These patterns, unfortunately, do not escape the AI, which absorbs our choices and draws very bad lessons from them.

“Poorly designed AI produces discrimination on a large scale”

In 2018, Amazon pulled the plug on a recruiting AI project that unduly disadvantaged women’s CVs. A Bloomberg study also found that ChatGPT tended to disadvantage African-American-sounding names when asked to rank resumes with similar levels of education and experience. “A poorly designed AI can produce discrimination on a large scale, excluding people from opportunities they should have had access to,” points out Yann Padova, a lawyer at the Paris and Brussels bars and partner at Wilson Sonsini.

These biases are very difficult to detect, because they sometimes hide behind an innocent-looking variable: an address, a hobby… Even if the sex of the candidates is never revealed to the AI, it can, for example, “deduce” from past recruitments that those who play football – more popular among men – are the ones most likely to be hired. A few years ago, LinkedIn had to correct one of its systems which, by boosting the most assertive candidates, had begun to excessively disadvantage female profiles with equal skills.
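
A toy check for this kind of proxy effect might look like the sketch below. The data is synthetic and deliberately exaggerated, and the hobby/sex correlation is an assumption made up for illustration: the model never sees the “sex” column, yet reproduces the past discrimination through the hobby feature.

```python
# Sketch of a proxy-variable bias check on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
is_male = rng.random(n) < 0.5
# Hobby correlated with sex: the innocent-looking proxy variable.
plays_football = np.where(is_male, rng.random(n) < 0.7, rng.random(n) < 0.1)
skill = rng.normal(0, 1, n)
# Past hiring favored men regardless of skill.
hired = (skill + 1.5 * is_male + rng.normal(0, 1, n)) > 1.0

X = np.column_stack([skill, plays_football])  # "sex" deliberately excluded
model = LogisticRegression().fit(X, hired)
predicted = model.predict(X)

print("selection rate, men  :", predicted[is_male].mean().round(3))
print("selection rate, women:", predicted[~is_male].mean().round(3))
```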

READ ALSO: Philippe Aghion: “AI will not create mass unemployment”

Hence the rise of work on the explainability of AI. “This is very complex research, but it must be carried out in order to give users the keys to understanding the results,” argues Claire Lebarz. Malt has also just recruited an expert with a doctorate on the subject. The AI Act will soon impose “a demanding security framework on artificial intelligence publishers,” recalls lawyer Yann Padova.

Detecting problems, however, is only a prerequisite; you then have to find the right remedy. “What constitutes an acceptable experience? On the one hand, our algorithms already make it possible to reduce recruiter bias. On the other hand, what corrections should be made to the biases already present in the data? In fields where there are more men than women, should we decide to boost the visibility of the latter, and if so, to what ratio?” asks Malt’s Data and AI manager. The company has set up a Freelance Advisory Board to reflect on its development. It has also received funding from the European Commission to explore these difficult dilemmas with Erasmus University Rotterdam, and has opened a postdoctoral position on the subject. These are all questions that society as a whole will ultimately have to confront, making difficult choices, facing what is left unsaid, and translating its conception of equality into precise mathematical formulas.
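
To make the dilemma in that last question concrete, here is a small sketch of visibility re-ranking with an adjustable boost. The candidates, scores and boost values are invented, and nothing here describes Malt’s actual policy; it only shows how the choice of ratio changes who surfaces first.

```python
# Re-ranking with a visibility boost for an underrepresented group.
# The boost factor is precisely the open question: "to what ratio?"

def rerank(candidates: list[dict], boost: float) -> list[dict]:
    def adjusted(c: dict) -> float:
        factor = boost if c["underrepresented"] else 1.0
        return c["relevance"] * factor
    return sorted(candidates, key=adjusted, reverse=True)

candidates = [
    {"name": "A", "relevance": 0.92, "underrepresented": False},
    {"name": "B", "relevance": 0.88, "underrepresented": True},
    {"name": "C", "relevance": 0.90, "underrepresented": False},
]

for boost in (1.00, 1.05, 1.10):
    order = [c["name"] for c in rerank(candidates, boost)]
    print(f"boost={boost:.2f} ->", order)
```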
