Will Smith pokes fun at viral ‘AI’ production video

In the past year, a generative artificial intelligence video featuring Will Smith made an incredible impact.

Artificial intelligence remains the biggest topic on the technology world's agenda. The new generation of "generative" systems is opening new doors both technically and visually, including in video. One of the most popular pieces of content on this subject last year was a video centered on Will Smith. The video made a lot of noise because it was both very funny and unsettling. In the clip, which you can watch below, Smith's spaghetti eating was simulated in an incredibly strange way; the video came from a Reddit user named "Chaindrop". At the time, it was said that the 20-second video was assembled from 10 independently generated two-second segments. Each segment showed a simulated Will Smith greedily eating spaghetti from a different angle, and behind the process was an infrastructure still under development. The video was reportedly created with the artificial intelligence tool ModelScope, developed by DAMO Vision Intelligence Lab, a research division of Alibaba.


ModelScope was based on a "text2video" model, a system trained to create videos from written text by analyzing millions of photos and thousands of videos in databases such as LAION5B, ImageNet and WebVid. The system even analyzed videos from Shutterstock, which is why Shutterstock's watermark appeared in the shared clip. What brought that video back onto the agenda now is a new video shot by Will Smith himself, which you can see below. In it, the "real" Smith pokes fun at the AI-generated content that went viral, managing to make people laugh once again.
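For readers who want to see what this kind of text-to-video generation looks like in practice, below is a minimal sketch using the Hugging Face diffusers library with the publicly released ModelScope text-to-video weights. The model ID, parameters and output handling are assumptions for illustration; the exact setup behind the original Reddit clip is not documented here.

```python
# Minimal sketch: generate a short clip from a text prompt with the publicly
# released ModelScope text-to-video checkpoint via Hugging Face diffusers.
# The model ID and settings below are assumptions for illustration only.
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

# Assumed public ModelScope text-to-video checkpoint on the Hugging Face Hub.
pipe = DiffusionPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe = pipe.to("cuda")

prompt = "Will Smith eating spaghetti"
result = pipe(prompt, num_inference_steps=25, num_frames=16)
frames = result.frames[0]  # frames of a roughly two-second clip

# Write the frames out as an MP4 file.
video_path = export_to_video(frames, output_video_path="spaghetti.mp4")
print(f"Saved clip to {video_path}")
```

Each run like this yields only a short segment; the viral clip was reportedly made by stitching ten such two-second generations together into one 20-second video.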

OpenAI, which you know from DALL-E, the model that turns written prompts into images, has in the past few days also turned its hand to video production. The company's new artificial intelligence model, "Sora", can produce 60-second videos from text and delivers much better results than its competitors. The company entered this field a little late but has prepared a very ambitious infrastructure. OpenAI states that, because of possible security risks, Sora will initially be opened only to a limited group of people, with a public release to follow later in order to prevent misuse of the system. The company also says Sora can produce complex scenes with many characters, people, creatures and objects, and the first examples shown confirm this. The system, which blends people's requests with physical reality, is not yet perfect, and OpenAI acknowledges that it has some weaknesses. Of course, it will be improved over time; like DALL-E, Sora already reveals huge future potential.


