OpenAI employees warn about dangers of artificial intelligence


Current and former employees of OpenAI, the developer of ChatGPT and one of the world's most prominent artificial intelligence companies, have warned about the dangers of artificial intelligence.

Several current and former employees of OpenAI and Google DeepMind have shared an open letter warning about the dangers of advanced AI and the lack of oversight of the companies developing it. "We are current and former employees of leading AI companies, and we believe in the potential of AI technology to deliver unprecedented benefits to humanity. At the same time, we are aware of the serious risks posed by these technologies. These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems, potentially resulting in human extinction. AI companies themselves acknowledge these risks, as do governments and AI experts," the letter states, adding, "We hope that these risks can be reduced with the help of the scientific community, policymakers, and governments." The letter also notes that AI companies have strong financial incentives to avoid oversight, and that they withhold and will continue to withhold information about their protective measures and risk levels. The signatories demand that more serious steps be taken against the risks that artificial intelligence poses now and will pose in the future.

OpenAI also came to the fore last week over covert influence operations. As is well known, generative artificial intelligence tools can be abused: malicious actors are accelerating disinformation with these tools, and this poses great risks for everyone. OpenAI, a company at the center of this issue, says it actively combats covert influence operations. In a blog post, the company announced that it had detected accounts linked to Russia, China, Iran, and Israel that were manipulating public opinion.

According to the company, which says it blocks accounts that spread harmful content, malicious actors used ChatGPT and other OpenAI tools to create misleading social media content in multiple languages, generate names and biographies for fake accounts with artificial intelligence, and reach millions of people with fabricated images. Among the operations blocked by OpenAI are the Russia-based Bad Grammar and Doppelganger, the China-based Spamouflage, the Iran-based International Union of Virtual Media (IUVM), and the Israel-based Zero Zeno.
