A 3D Convolutional Neural Network to Model Retinal Ganglion Cells' Responses to Light Patterns in Mice
Abstract
Deep learning offers flexible, powerful tools that have advanced our understanding of neural coding in sensory systems. In this work, a 3D Convolutional Neural Network (3D CNN) is used to mimic the behavior of a population of mouse retinal ganglion cells in response to different light patterns. For this purpose, we projected homogeneous RGB flashes and checkerboard stimuli with variable luminances and wavelength spectra onto the mouse retina, mimicking a more naturalistic stimulus environment. We also used white moving bars to localize the spatial position of the recorded cells. The recorded spikes were then smoothed with a Gaussian kernel and used as the output target when training a 3D CNN in a supervised way. To find a suitable model, two hyperparameter search stages were performed. In the first stage, a trial-and-error process allowed us to obtain a system able to fit the neurons' firing rates. In the second stage, a systematic procedure was used to compare several gradient-based optimizers, loss functions and numbers of convolutional layers. We found that a three-layer 3D CNN was able to predict the ganglion cells' firing rates with high correlation and low prediction error, as measured with Mean Squared Error and Dynamic Time Warping on test sets. These models were competitive with, or outperformed, other models already used in neuroscience, such as feedforward neural networks and linear-nonlinear models. This methodology allowed us to capture the temporal dynamics of the response patterns in a robust way, even for neurons with highly variable trial-to-trial spontaneous firing rates, when the peristimulus time histogram was provided as the target output of the model.
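As an illustration of the pipeline summarized above, the sketch below shows how binned spikes might be Gaussian-smoothed into firing-rate targets and how a small three-layer 3D CNN could be trained to predict those rates from spatiotemporal stimulus clips. This is a minimal sketch assuming a Keras/TensorFlow implementation and synthetic data; the helper names (`spikes_to_rates`, `build_3d_cnn`), layer widths, kernel shapes, smoothing width, and optimizer/loss choice are placeholder assumptions, not the architecture or hyperparameters reported in the paper.

```python
# Illustrative sketch only: layer sizes, kernel shapes, bin width and the
# optimizer/loss pair are assumptions, not the authors' reported settings.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from tensorflow.keras import layers, models

def spikes_to_rates(binned_spikes, sigma_bins=2.0):
    """Smooth binned spike counts (n_cells x n_time_bins) with a Gaussian
    kernel along time, giving continuous firing-rate (PSTH-like) targets."""
    return gaussian_filter1d(binned_spikes.astype(np.float32), sigma=sigma_bins, axis=-1)

def build_3d_cnn(clip_shape=(20, 32, 32, 3), n_cells=16):
    """Three 3D convolutional layers followed by a dense readout that
    predicts one firing-rate value per recorded cell for each stimulus clip."""
    model = models.Sequential([
        layers.Input(shape=clip_shape),                       # time x height x width x colour
        layers.Conv3D(8, (5, 5, 5), activation='relu', padding='same'),
        layers.MaxPooling3D((1, 2, 2)),
        layers.Conv3D(16, (3, 3, 3), activation='relu', padding='same'),
        layers.MaxPooling3D((2, 2, 2)),
        layers.Conv3D(32, (3, 3, 3), activation='relu', padding='same'),
        layers.Flatten(),
        layers.Dropout(0.5),                                  # illustrative regularization
        layers.Dense(n_cells, activation='softplus'),         # non-negative firing rates
    ])
    model.compile(optimizer='adam', loss='mse')               # one candidate optimizer/loss pair
    return model

# Toy example with synthetic data standing in for recorded stimuli and spikes.
n_cells, n_bins, clip_len = 16, 500, 20
binned_spikes = np.random.poisson(1.0, size=(n_cells, n_bins))
rates = spikes_to_rates(binned_spikes, sigma_bins=1.5)        # (n_cells, n_bins)
stimulus = np.random.rand(n_bins, 32, 32, 3).astype(np.float32)

# Each sample pairs the last `clip_len` stimulus frames with the population
# firing rate in the bin that follows that clip.
X = np.stack([stimulus[t - clip_len:t] for t in range(clip_len, n_bins)])
y = rates[:, clip_len:n_bins].T

model = build_3d_cnn(clip_shape=(clip_len, 32, 32, 3), n_cells=n_cells)
model.fit(X, y, epochs=2, batch_size=16, validation_split=0.2)
```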