Vision Reconstruction

Using Hollywood movie trailers, UC Berkeley researchers have succeeded in decoding and reconstructing people's dynamic visual experiences.

The brain activity recorded while subjects viewed a first set of film clips was used to build a computational model that learned to associate visual patterns in the movies with the corresponding brain activity. The brain activity evoked by a second, separate set of clips was then used to test the movie reconstruction algorithm: 18 million seconds of random YouTube videos were fed into the model so that it could predict the brain activity each clip would most likely evoke in each subject. The clips whose predicted activity best matched the measured brain signals were then blended to reconstruct the moving images the subjects had seen.
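Conceptually, the pipeline has three stages: fit an encoding model mapping movie features to brain activity, predict the activity every clip in a large video library would evoke, and reconstruct by averaging the library clips whose predicted activity best matches the measured activity. The Python sketch below illustrates that idea under heavy simplifications and is not the authors' code: random numbers stand in for the motion-energy features and fMRI responses, ridge regression stands in for the fitted per-voxel encoding models, and the 18-million-second YouTube library is shrunk to a few thousand synthetic clips. All names and array shapes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_train, n_test = 1000, 50      # seconds of training / test movie (illustrative)
n_feat, n_voxels = 64, 200      # feature and voxel dimensions (illustrative)
n_library = 5000                # stand-in for the 18M-second YouTube library

# Stage 1: fit an encoding model mapping clip features -> voxel activity.
# Synthetic data: a hidden linear map plus noise plays the role of the brain.
X_train = rng.standard_normal((n_train, n_feat))
W_true = rng.standard_normal((n_feat, n_voxels))
Y_train = X_train @ W_true + 0.5 * rng.standard_normal((n_train, n_voxels))

# Ridge regression stands in for the study's regularized encoding models.
lam = 1.0
W = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_feat),
                    X_train.T @ Y_train)

# Stage 2: predict the activity each library clip would evoke.
X_library = rng.standard_normal((n_library, n_feat))
Y_library_pred = X_library @ W

# Stage 3: for each second of the test movie, find the library clips whose
# predicted activity correlates best with the measured activity, and average
# their features to form the reconstruction.
X_test = rng.standard_normal((n_test, n_feat))
Y_test = X_test @ W_true + 0.5 * rng.standard_normal((n_test, n_voxels))

def zscore(a, axis=-1):
    return (a - a.mean(axis, keepdims=True)) / a.std(axis, keepdims=True)

sims = zscore(Y_test) @ zscore(Y_library_pred).T / n_voxels  # correlations
top_k = 100                      # the study averaged about 100 top clips
best = np.argsort(-sims, axis=1)[:, :top_k]
reconstruction = X_library[best].mean(axis=1)  # blended features per second

print("reconstruction feature matrix:", reconstruction.shape)
```

In the real study the features were motion-energy filters applied to video frames and the reconstruction was rendered back into moving images; here the averaging over top-matching library clips is the essential step being demonstrated.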

Eventually, practical applications of the technology could include a better understanding of what goes on in the minds of people who cannot communicate verbally, such as stroke victims, coma patients and people with neurodegenerative diseases. It may also lay the groundwork for brain-machine devices that would allow people with cerebral palsy or paralysis, for example, to guide computers with their minds.

The lead author of the study, published in Current Biology on September 22, 2011, is Shinji Nishimoto, a post-doctoral researcher in the laboratory of Professor Jack Gallant, neuroscientist and coauthor of the study. Other coauthors include Thomas Naselaris with UC Berkeley's Helen Wills Neuroscience Institute, An T. Vu with UC Berkeley's Joint Graduate Group in Bioengineering, and Yuval Benjamini and Professor Bin Yu with the UC Berkeley Department of Statistics.
Full story: http://newscenter.berkeley.edu/2011/0...
Video produced by Roxanne Makasdjian, UC Berkeley Media Relations
