Researchers Create Neural Video Reconstruction from Rodent's Brain Signals
Key Points
- Neuroscientists reconstruct video seen by mice from brain signals
- New AI model DwiseNeuro enhances visual encoding capabilities
- Method reduces reliance on generative AI by capturing raw neural data
A research team has demonstrated a new approach that connects a mouse's brain activity directly to a computer, enabling real-time reconstruction of the video the animal is watching. Using calcium imaging, the team recorded approximately 8,000 neurons in the primary visual cortex while the mouse was awake. The newly developed AI model, DwiseNeuro, iteratively modifies a grey-noise video until its predicted neural activity matches the recorded signals, achieving a pixel-level correlation of 0.57, well above previous attempts that averaged around 0.24.
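The core idea described above can be illustrated with a minimal sketch: an encoder predicts neural responses from an image, and a noise-initialized image is optimized by gradient descent until its predicted responses match the recorded ones. This is not the authors' DwiseNeuro model; the encoder here is a stand-in linear map, and all names, dimensions, and step counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_neurons, steps, lr = 100, 300, 500, 0.1

# Stand-in encoder: a fixed linear map from pixels to neural responses.
# (The real model is a deep network trained on calcium-imaging data.)
W = rng.normal(size=(n_neurons, n_pixels)) / np.sqrt(n_pixels)

def encode(frame):
    """Predicted neural activity for one frame."""
    return W @ frame

true_frame = rng.normal(size=n_pixels)   # the frame the mouse "saw" (synthetic)
recorded = encode(true_frame)            # stand-in for the recorded signals

frame = rng.normal(size=n_pixels)        # start from grey noise
for _ in range(steps):
    residual = encode(frame) - recorded  # mismatch in neural-response space
    frame -= lr * (W.T @ residual)       # gradient step on 0.5 * ||residual||^2

# Pixel-level correlation between reconstruction and ground truth,
# the same kind of metric the article reports (0.57 vs. ~0.24).
corr = np.corrcoef(frame, true_frame)[0, 1]
```

With more (synthetic) neurons than pixels, the toy problem is well posed and the loop recovers the frame almost exactly; in the real setting the encoder is nonlinear and the recording noisy, which is why reported correlations are far below 1.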
This achievement signals a shift in how visual information is decoded in neuroscience. Unlike traditional methods that rely on fMRI and generative AI, this model reconstructs stimuli directly from neural signals, without a generative intermediary filling in plausible details. The findings could influence how cognitive neuroscience studies perception and consciousness, improving techniques for understanding brain activity and neurological disorders and expanding potential applications across AI and neuroscience research.