This claim is talked about a lot: “They made Kenyans work for 2 dollars an hour”


It has been claimed that OpenAI, the developer of the chatbot ChatGPT, built an additional artificial intelligence filter to detect illegal content and relied on Kenyan workers to train it. According to a report in Time magazine, Kenyans working for OpenAI for as little as $2 an hour screened the chatbot's output for harmful content. Acting as a kind of human filter for the artificial intelligence, the Kenyan workers reviewed texts describing child sexual abuse, murder, torture, suicide and incest.

OpenAI states that artificial intelligence will benefit all humanity and limit prejudice and harmful content, noting that they are working hard to create safe and useful systems.

African content reviewers, for their part, worked until recently with Sama and Meta (Facebook) on content moderation.

San Francisco-based US firm Sama markets itself as an “ethical artificial intelligence” company and argues it has helped more than 50,000 people out of poverty.

Sama and Meta have been sued by a former content moderator for allegedly violating the Kenyan Constitution.

DREW HEAVY CRITICISM

Sama, which worked for OpenAI, was criticized for paying its Kenyan employees between $1.32 and $2 an hour despite its high profit margins.

Sama disputed these figures, telling the magazine that workers were expected to tag only 70 passages per shift.

Kenyan content teams were reportedly tasked with reading and tagging around 150-250 text passages in a nine-hour shift. Although they were given the opportunity to meet with wellness counselors, many employees told the magazine that the work had left them mentally scarred.

According to documents obtained by the magazine, at the beginning of March last year a Sama employee came across a sexually explicit story at work. The employee, who was tasked with tagging the text, found the story ambiguous and asked OpenAI researchers for clarification on how to label it: "Should this passage be labeled as sexual violence or not?" OpenAI did not answer the question; if it did, the response was not included in the documents.

OpenAI did not comment on the incident, and the Sama employee did not respond to Time magazine's request for an interview. Within weeks of the incident, Sama canceled all its work for OpenAI. (AA)
