World Scientific

Using Convolution and Deep Learning in Gomoku Game Artificial Intelligence

    Gomoku is an ancient board game. The traditional approach to solving Gomoku is to apply tree search to the Gomoku game tree. Although the rules of Gomoku are straightforward, the game tree complexity is enormous. Unlike many other board games such as chess and shogi, the Gomoku board state is more visually intuitive: analyzing the visual patterns on the board is fundamental to playing the game. In this paper, we designed a deep convolutional neural network model that learns from training data collected from human players. From this original neural network model we derived two variant networks, and we compared the performance of the original network with its variants in our experiments. Our original neural network model achieved 69% accuracy on the training data and 38% accuracy on the testing data. Because the neural network's decisions rest on learned intuition rather than explicit search, we also designed a hard-coded convolution-based Gomoku evaluation function to assist the network in making decisions. This hybrid Gomoku artificial intelligence (AI) further improved on the performance of a pure neural network-based Gomoku AI.
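    The convolution-based evaluation function mentioned above can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code: the board size, the `WINDOW_SCORE` weights, and the exact scoring scheme are assumptions. The idea is that sliding a length-5 window over every row, column, and diagonal is a discrete convolution with a line-shaped kernel, and each unblocked window is scored by how many of one player's stones it contains.

    ```python
    SIZE = 15  # standard Gomoku board; the paper's board size may differ

    # Hypothetical weights for a window holding n own stones and no opponent stones.
    WINDOW_SCORE = {0: 0, 1: 1, 2: 10, 3: 100, 4: 1000, 5: 100000}

    def evaluate(board, player):
        """board[r][c] is 0 (empty), 1, or 2; returns player's score minus opponent's."""
        opponent = 3 - player
        # The four line directions: horizontal, vertical, and both diagonals.
        directions = [(0, 1), (1, 0), (1, 1), (1, -1)]

        def side_score(me, them):
            total = 0
            for r in range(SIZE):
                for c in range(SIZE):
                    for dr, dc in directions:
                        # Skip windows that would run off the board.
                        if not (0 <= r + 4 * dr < SIZE and 0 <= c + 4 * dc < SIZE):
                            continue
                        window = [board[r + i * dr][c + i * dc] for i in range(5)]
                        if them in window:
                            continue  # a blocked window contributes nothing
                        total += WINDOW_SCORE[window.count(me)]
            return total

        return side_score(player, opponent) - side_score(opponent, player)
    ```

    A board position with five stones in a row then scores at least `WINDOW_SCORE[5]` for that player, so the evaluation dominates all lesser patterns; in the paper's hybrid AI, such a score would override the neural network's move preference.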
