AI Constructs a ‘High Quality’ Video Directly From Brain Readings In Study
A new study reveals that researchers have used generative AI to reconstruct "high quality" videos from brain activity.
Researchers Jiaxin Qing, Zijiao Chen, and Juan Helen Zhou, from The Chinese University of Hong Kong and the National University of Singapore, used the text-to-image AI model Stable Diffusion together with fMRI data to create MinD-Video, a model that generates videos from brain readings. The paper describing the work was posted to the arXiv preprint server last week.
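The article describes a two-stage idea: encode brain activity into a conditioning signal, then use a generative diffusion model to produce video from that signal. Below is a heavily simplified, purely illustrative sketch of that data flow, assuming hypothetical shapes and stand-in functions; it is not the actual MinD-Video implementation, which relies on a trained fMRI encoder and Stable Diffusion.

```python
import numpy as np

# Illustrative stand-ins only: the real system trains an fMRI encoder and
# conditions a diffusion model; here we simulate the shapes of that pipeline.

def encode_fmri(fmri_scan: np.ndarray) -> np.ndarray:
    """Stage 1 (stand-in): project a flattened fMRI volume into a
    conditioning embedding (77 x 768, a CLIP-text-like shape used here
    purely as an assumption)."""
    rng = np.random.default_rng(0)
    projection = rng.standard_normal((fmri_scan.size, 77 * 768))
    return (fmri_scan.ravel() @ projection).reshape(77, 768)

def generate_video(embedding: np.ndarray, num_frames: int = 8) -> np.ndarray:
    """Stage 2 (stand-in): a diffusion model would iteratively denoise
    latents conditioned on the embedding; here we return placeholder
    frames of the expected shape (frames x height x width x RGB)."""
    rng = np.random.default_rng(int(abs(embedding.sum())) % 2**32)
    return rng.random((num_frames, 256, 256, 3))

# Toy fMRI volume in place of a real scan
fmri = np.random.default_rng(2).random((4, 4, 4))
frames = generate_video(encode_fmri(fmri))
print(frames.shape)  # (8, 256, 256, 3)
```

The two-function split mirrors the article's description: brain readings first become an intermediate representation, and only then does a generative model turn that representation into video frames.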
On the paper's accompanying website, the researchers show the videos presented to subjects side by side with the AI-generated videos reconstructed from their brain activity. The differences between each pair are minimal: the videos are largely similar in subject matter and color palette.
Source:
https://www.vice.com/en/article/k7zb3n/ai-reconstructs-high-quality-video-directly-from-brain-readings-in-study