We created a computer program that allows users to play ``No Thanks!'', a popular card game that is simple to learn yet capable of involving complex strategy. Within this program are computer players (AI) coded from scratch to follow specific strategies; parameter tuning was used to improve each hard-coded AI's execution of its given strategy. In addition to these hard-coded AI, we created a self-learning AI. This AI began with no knowledge of the game or of any strategy and, by repeatedly playing against itself, came to understand the game and potential strategies. This was accomplished using neural networks of varying sizes, with a temporal difference algorithm used to train the weights within each network. Although the neural networks capable of learning the game well enough for the self-learning AI to rival the best hard-coded AI required very long computation times, the self-taught AI learned enough to pose a challenge to human players of average skill. An introduction to game theory and reinforcement learning is also included in order to facilitate an understanding of the results.
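To illustrate the kind of training step the abstract describes, the following is a minimal sketch (not the thesis code) of a TD(0) temporal-difference update applied to a small neural-network value function; the network sizes, learning rate, and discount factor here are illustrative assumptions.

```python
# Hypothetical sketch: a TD(0) update for a tiny one-hidden-layer neural
# network that estimates the value of a game state. All sizes and
# hyperparameters below are illustrative, not taken from the thesis.
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_HID = 4, 8                       # illustrative layer sizes
W1 = rng.normal(0, 0.1, (N_HID, N_IN))   # input -> hidden weights
W2 = rng.normal(0, 0.1, (1, N_HID))      # hidden -> output weights

def value(s):
    """Forward pass: estimated value of state vector s."""
    h = np.tanh(W1 @ s)
    return float(W2 @ h), h

def td_update(s, reward, s_next, alpha=0.05, gamma=0.95, terminal=False):
    """One TD(0) step: move V(s) toward the bootstrapped target r + gamma*V(s')."""
    global W1, W2
    v, h = value(s)
    v_next = 0.0 if terminal else value(s_next)[0]
    delta = reward + gamma * v_next - v  # TD error
    # Gradient ascent on delta * V(s): backpropagate through both layers.
    W2 += alpha * delta * h.reshape(1, -1)
    W1 += alpha * delta * (W2.T * (1 - h.reshape(-1, 1) ** 2)) @ s.reshape(1, -1)
    return delta

# Toy demonstration: repeatedly nudge one state's value toward a terminal
# reward of 1.0; the estimate approaches 1.0 as training proceeds.
s = rng.normal(size=N_IN)
for _ in range(2000):
    td_update(s, reward=1.0, s_next=s, terminal=True)
print(round(value(s)[0], 2))
```

In self-play training of the sort the abstract describes, the same update would be applied to each state visited during a game, with the reward supplied only at the end of the game.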


Advisor

Pasteur, R. Drew




Keywords

Games, Game Theory, Machine Learning, Neural Network

Publication Date


Degree Granted

Bachelor of Arts

Document Type

Senior Independent Study Thesis



© Copyright 2017 Robin M. Morillo