Application
Machine learning software for brain-machine interfaces and the control of neuro-stimulation devices.
Key Benefits
- Significantly improves machine learning performance by minimizing overfitting.
Market Summary
In machine learning (ML), an autoencoder is a type of unsupervised artificial neural network that attempts to learn an efficient coding of its input data. Autoencoders have emerged as powerful tools for understanding complex neural activity, and brain-machine interfaces are gaining interest as a potentially life-changing technology. There is high demand for neural networks that can uncover complex structure, connectivity, and patterns from noisy, variable data with greater accuracy. Overfitting and underfitting are the two biggest causes of poor performance in machine learning algorithms, and researchers have been working to develop new training methods that address these problems.
Technical Summary
Emory inventors developed Latent Factor Analysis via Dynamical Systems (LFADS), a sequential autoencoder for deep learning, to extract the dynamic factors underlying a neural population from time-series neural data and use them to infer kinematics (motion) correlated with the firing-rate or spiking data of the recorded neurons. A new autoencoder training method developed by the inventors, called "Coordinated Dropout (CD)," further improves LFADS's processing of neural data. CD prevents overfitting by forcing the autoencoder to learn the shared correlations underlying the input data rather than simply copying individual inputs to its output. This algorithm significantly improves the accuracy and performance of neural networks.
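The core idea of Coordinated Dropout can be illustrated in a short sketch. In each training step, a random subset of input dimensions is shown to the autoencoder, and the reconstruction loss is computed only on the complementary, held-out dimensions, so the network can never pass an input value straight through and must instead model the structure shared across channels. The function names, the `keep_prob` parameter, and the `reconstruct_fn` placeholder below are illustrative assumptions, not the inventors' implementation.

```python
import numpy as np

def coordinated_dropout_masks(n_features, keep_prob, rng):
    """Split feature indices into an input mask and a complementary loss mask.

    Dimensions in `input_mask` are fed to the autoencoder; the reconstruction
    loss is scored only on the dropped dimensions in `loss_mask`.
    """
    input_mask = rng.random(n_features) < keep_prob
    loss_mask = ~input_mask  # loss flows only through the held-out dimensions
    return input_mask, loss_mask

def cd_training_step(x, reconstruct_fn, keep_prob=0.7, rng=None):
    """One Coordinated-Dropout step: mask the input, score the complement.

    `reconstruct_fn` stands in for a full autoencoder forward pass; here it is
    a hypothetical placeholder supplied by the caller.
    """
    rng = rng or np.random.default_rng()
    input_mask, loss_mask = coordinated_dropout_masks(x.shape[-1], keep_prob, rng)
    x_in = x * input_mask  # zero out the dropped input dimensions
    x_hat = reconstruct_fn(x_in)
    # Mean squared error measured only on dimensions hidden from the input,
    # so memorizing the visible inputs earns no credit.
    return np.mean((x_hat[..., loss_mask] - x[..., loss_mask]) ** 2)
```

Because the loss is never computed on a dimension the network saw, an identity mapping scores poorly, which is exactly the overfitting behavior CD is designed to rule out.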
Developmental Stage
The LFADS autoencoder and new training method, CD, have been developed.
Publication: Pandarinath, C., et al. (2018). Nature Methods 15, 805–815.