A 3D Convolutional Neural Network to Model Retinal Ganglion Cell’s Responses to Light Patterns in Mice

    https://doi.org/10.1142/S0129065718500430 | Cited by: 11 (Source: Crossref)

    Deep Learning offers flexible, powerful tools that have advanced our understanding of neural coding in neurosensory systems. In this work, a 3D Convolutional Neural Network (3D CNN) is used to mimic the behavior of a population of mouse retinal ganglion cells in response to different light patterns. For this purpose, we projected onto the mouse retina homogeneous RGB flashes and checkerboard stimuli with variable luminances and wavelength spectra, mimicking a more naturalistic stimulus environment. We also used white moving bars to localize the spatial position of the recorded cells. The recorded spikes were then smoothed with a Gaussian kernel and used as the output target when training a 3D CNN in a supervised way. To find a suitable model, two hyperparameter search stages were performed. In the first stage, a trial-and-error process yielded a system able to fit the neurons' firing rates. In the second stage, a systematic procedure was used to compare several gradient-based optimizers, loss functions and numbers of convolutional layers. We found that a three-layer 3D CNN was able to predict the ganglion cells' firing rates with high correlations and low prediction error, as measured with Mean Squared Error and Dynamic Time Warping on test sets. These models were competitive with, or outperformed, other models already used in neuroscience, such as Feed-Forward Neural Networks and Linear-Nonlinear models. This methodology allowed us to capture temporal response dynamics in a robust way, even for neurons with highly variable trial-to-trial spontaneous firing rates, when providing the peristimulus time histogram as the output target to our model.
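    To make the pipeline described in the abstract concrete, the sketch below smooths binned spike counts with a Gaussian kernel to obtain firing-rate targets and defines a three-layer 3D CNN that maps short stimulus clips to the predicted rates of the recorded cells. This is a minimal sketch, not the authors' implementation: the choice of Keras/TensorFlow, the stimulus dimensions, filter counts, kernel sizes, smoothing width, dropout rate and the Adam optimizer with an MSE loss are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code): Gaussian-smoothed firing-rate targets
# and a three-layer 3D CNN regressor. Framework, stimulus shape, filter counts,
# kernel sizes, smoothing sigma and optimizer are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from tensorflow.keras import layers, models


def spikes_to_firing_rate(spike_counts, sigma_bins=2.0):
    """Smooth binned spike counts of shape (time, n_cells) with a Gaussian
    kernel along the time axis to obtain continuous firing-rate targets."""
    return gaussian_filter1d(spike_counts.astype("float32"),
                             sigma=sigma_bins, axis=0)


def build_3d_cnn(frames=40, height=32, width=32, channels=3, n_cells=10):
    """Three 3D convolutional layers followed by a dense readout that
    predicts one firing-rate value per recorded ganglion cell."""
    model = models.Sequential([
        layers.Input(shape=(frames, height, width, channels)),  # stimulus clip
        layers.Conv3D(16, kernel_size=(5, 5, 5), activation="relu",
                      padding="same"),
        layers.MaxPooling3D(pool_size=(2, 2, 2)),
        layers.Conv3D(32, kernel_size=(3, 3, 3), activation="relu",
                      padding="same"),
        layers.MaxPooling3D(pool_size=(2, 2, 2)),
        layers.Conv3D(64, kernel_size=(3, 3, 3), activation="relu",
                      padding="same"),
        layers.Flatten(),
        layers.Dropout(0.5),
        layers.Dense(n_cells, activation="relu"),  # keeps predicted rates non-negative
    ])
    # MSE is one of the loss functions compared in the paper's second search stage.
    model.compile(optimizer="adam", loss="mse")
    return model


# Usage sketch: X has shape (n_samples, frames, height, width, channels) and
# y has shape (n_samples, n_cells) after Gaussian smoothing of the spike counts.
# model = build_3d_cnn()
# model.fit(X_train, y_train, validation_data=(X_val, y_val),
#           epochs=50, batch_size=32)
```

    Test-set predictions from such a model could then be scored against the smoothed rates with Mean Squared Error and Dynamic Time Warping, as done in the paper; correlation with the peristimulus time histogram gives a complementary trial-averaged measure.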
