Sam Altman thinks big about ChatGPT and the like

OpenAI co-founder Sam Altman, whose company created ChatGPT, thinks big about artificial intelligence. Health, in particular, is one of his focuses.

Sam Altman, co-founder of OpenAI, the firm behind ChatGPT, made some remarkable statements on Twitter. "Adaptation to a world where AI tools are integrated is likely to happen quite quickly," Altman said, adding that systems like ChatGPT "will help us be more productive (I can't wait to spend less time emailing!), healthier (AI medical advisors for people who can't afford care), smarter (students using ChatGPT to learn), and more entertained (AI-generated memes)."

The health aspect is particularly noteworthy, because Google search results are often not very helpful for health questions. Even a mildly worrying symptom typed into Google has a good chance of returning "cancer" among the results, which frightens many people. ChatGPT and similar systems could genuinely change that by giving people far more accurate information.

Altman also made an important statement about the future of artificial intelligence: "Oversight of these systems will be critical, and it will take time to fully work this out. While current-generation AI tools aren't terrifying, I think we are potentially not far from very scary ones."

In addition to ChatGPT, DALL-E also made a lot of noise. Now OpenAI has laid the first foundations of an infrastructure that will be needed in the future: a new tool for telling AI-generated and human-written text apart. "We are developing a new tool to help distinguish between AI-written and human-written text. We are releasing the first version to collect feedback and hope to share improved methods in the future," the firm said in its announcement. For now, the tool does not work with particularly high accuracy. The company considers it impossible to reliably detect all AI-written text, but states that such a tool can still be useful against automated misinformation campaigns and academic fraud built on AI-generated text.

"The tool we have developed is not completely reliable," the company says on this issue: "In our experiments on a sample of English texts, our tool correctly identified 26 percent of AI-written texts as 'probably AI-written,' while incorrectly flagging human-written texts as AI-written 9 percent of the time." Work on the system will continue, and in the future it could be actively used to detect essays, assignments, and other work written with ChatGPT.


