Unesco’s alert on gender bias – L’Express

“Real-world discrimination is not only reflected in the digital sphere, it is also amplified there.” Tawfik Jelassi, UNESCO assistant director general for communication and information, warns of gender bias in generative artificial intelligence. According to a study unveiled this Thursday, March 7, by UNESCO, on the eve of International Women’s Rights Day, the major language models from Meta and OpenAI, which underpin their generative artificial intelligence tools, convey sexist prejudices.

OpenAI’s GPT-2 and GPT-3.5 models, the latter being at the heart of the free version of ChatGPT, as well as Llama 2 from competitor Meta, demonstrate “unequivocal bias against women”, the UN body warns in a press release.

According to this study, conducted from August 2023 to March 2024, these language models are more likely to associate feminine nouns with words like “home”, “family” or “children”, while masculine nouns are more often associated with words like “commerce”, “salary” or “career”.
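This kind of word-association bias is typically quantified with association tests over word embeddings. The sketch below is not UNESCO’s methodology; it is a minimal WEAT-style illustration using hand-made toy vectors, where a real audit would use embeddings taken from the model under test.

```python
# Illustrative sketch (not the UNESCO study's method): a WEAT-style
# association score over toy word vectors. Real audits extract embeddings
# from the model under test; these 3-d vectors are fabricated so that
# dimension 0 loosely encodes "home/family" and dimension 1 "work".
import math

def cosine(u, v):
    # Cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

vectors = {
    "woman":  [0.9, 0.2, 0.1],
    "man":    [0.2, 0.9, 0.1],
    "home":   [1.0, 0.1, 0.0],
    "family": [0.9, 0.2, 0.1],
    "salary": [0.1, 1.0, 0.0],
    "career": [0.2, 0.9, 0.1],
}

def association(word, attrs_a, attrs_b):
    """Mean similarity to attribute set A minus mean similarity to set B."""
    def mean_sim(attrs):
        return sum(cosine(vectors[word], vectors[a]) for a in attrs) / len(attrs)
    return mean_sim(attrs_a) - mean_sim(attrs_b)

home_words, work_words = ["home", "family"], ["salary", "career"]
# A positive gap means "woman" leans toward home words relative to "man".
bias_gap = (association("woman", home_words, work_words)
            - association("man", home_words, work_words))
print(round(bias_gap, 3))
```

With these deliberately skewed toy vectors the gap comes out positive, mirroring the pattern the study reports; with unbiased embeddings it would be near zero.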

Racial stereotypes

The researchers also asked these interfaces to produce stories about people of different origins and genders. The results showed that stories about “people from minority cultures or women were often more repetitive and based on stereotypes.” An English man is thus more likely to be portrayed as a teacher, a driver or a bank employee, while an English woman is portrayed, in at least 30% of the generated texts, as a prostitute, a model or a waitress.

Companies like Meta and OpenAI “fail to represent all their users”, Leona Verdadero, a specialist in digital policy and digital transformation at UNESCO, tells AFP.

As reported by BFMTV, artificial intelligence also tends to convey homophobic and racial stereotypes. The study reveals that with prompts beginning “as a gay person…”, 70% of the content generated by Llama 2 was negative, and 60% of that generated by GPT-2. Likewise, when generating texts about different ethnic groups, significant cultural biases were noted.

Only 22% of women in AI

As artificial intelligence applications are increasingly used by the general public and businesses, they “have the power to shape the perception of millions of people”, notes Audrey Azoulay, director general of the UN organization. “So the presence of even the slightest gender bias in their content can significantly increase inequalities in the real world,” she continues in a press release.

To combat these prejudices, UNESCO recommends that companies in the sector build more diverse engineering teams, with more women in particular. Women represent only 22% of team members working in artificial intelligence globally, according to figures from the World Economic Forum cited by UNESCO. The UN body also calls on governments to adopt more regulation to implement “ethical artificial intelligence”.
