Artificial intelligence creates "high-quality" video from brain activity, and it's mind-blowing!

According to Vice, researchers Jiaxin Qing, Zijiao Chen and Juan Helen Zhou from the National University of Singapore and the Chinese University of Hong Kong used fMRI data and the text-to-image artificial intelligence model Stable Diffusion to create MinD-Video, a model that generates video from brain readings.

THE DIFFERENCES ARE MINIMAL; THE SUBJECTS AND COLOR TONES ARE SIMILAR

The study's website states that the videos shown to the subjects and the videos generated by the artificial intelligence from their brain activity closely parallel each other: the differences between the two are minimal, and they mostly share similar subjects and color palettes.

The videos released by the researchers show original footage of horses in a field, followed by a reconstructed video featuring a more vibrantly colored version of the horses.

In another video, a car drives through a wooded area. The reconstructed version of this clip shows a first-person perspective of someone driving down a winding road.

The researchers described the reconstructed videos as "high quality" in terms of motion and scene dynamics.

RESEARCHERS: 85 PERCENT ACCURACY

In addition, the researchers reported that the reconstructed videos achieved 85 percent accuracy.

Previously, researchers at Osaka University had shown that high-resolution images could be reconstructed from brain activity using a technique that likewise combines fMRI data with Stable Diffusion.

(Image source: Mind-Video)
