BlackJack is an ideal game for a computer to learn. Its gameplay is relatively straightforward, and a workable strategy can be discovered through trial and error. The approach used here can be applied to many other types of games.
This implementation uses reinforcement learning to 'learn' an appropriate strategy for the game. The 'Player' (Jack) is given only the information a human player would have access to and receives feedback solely from the results of the games themselves.
The implementation begins by making moves at random and, over time, slowly builds confidence in its internally developed strategy. Eventually it stops playing randomly and makes only strategic decisions.
Card deck and 'hand' class to assist with gameplay and scoring.
Some aspects of the game have been simplified, but the core concept remains the same. For instance, there is no possibility of counting cards, and the dealer will always try to hit if he is losing.
import numpy as np
#Build the 52-card deck; "0" is the single-character face used for 10.
face=["2","3","4","5","6","7","8","9","0","J","Q","K","A"]
suit=["H","S","C","D"]
deck=[]
for i in range(13):
    for z in range(4):
        deck.append(face[i]+suit[z])
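A quick sanity check (not part of the original game logic) confirms the loop produces all 52 cards:
print(len(deck))          #expected: 52
print(deck[:4],deck[-4:]) #['2H', '2S', '2C', '2D'] ... ['AH', 'AS', 'AC', 'AD']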
class hand():
    #inputs an array of cards from "deck" and scores the current hand.
    def __init__(self,hand):
        #Card values: an ace counts as 11 until the hand would bust, then as 1.
        switch={"A":[11,1],"K":[10],"Q":[10],"J":[10],"0":[10],"9":[9],"8":[8],"7":[7],"6":[6],"5":[5],"4":[4],"3":[3],"2":[2]}
        self.cards=hand
        self.score=0
        self.aces=0
        for each in self.cards:
            self.score+=switch[each[0]][0]
            if each[0]=="A":
                self.aces+=1
        #Downgrade aces from 11 to 1 while the hand is bust.
        while (self.score>21) & (self.aces>=1):
            self.score-=10
            self.aces-=1
    def getscore(self):
        return self.score
    def getcards(self):
        return self.cards
    def __str__(self):
        return np.array2string(self.cards)
a=hand(np.array(["AC","KS"]))
print(a,a.getscore())
b=hand(np.array(["AC","AS"]))
print(b,b.getscore())
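As an extra check of the ace handling (this example is not in the original notebook), a hand with two aces and a king should score 12, since both aces get downgraded from 11 to 1:
c=hand(np.array(["AC","AS","KH"]))
print(c,c.getscore()) #expected score: 12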
The player is initially programmed to 'Explore' randomly and continuously updates its 'q_space' to learn from rewards. It receives a reward of +1 for a win (or for hitting 21) and a reward of -1 for a bust or loss.
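Concretely, every visited state is recorded as (hand score, dealer's card, action, reward), and each one updates a single cell of the q_space with a decayed-reward rule. A minimal sketch of that update (it mirrors the learn() method in the class below):
#Sketch only: the same decayed update that player.learn() applies below.
def q_update(q_space,score,dealercard,action,reward,gamma=.9):
    index=21 if score>21 else score-4    #scores 4-21 map to rows 0-17; busts use the last row
    q_space[index,dealercard,action]=q_space[index,dealercard,action]*gamma+reward
    return q_space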
class player():
    def __init__(self,epsilon=1.0, q_space=None,loading=False):
        self.wins=0
        self.losses=0
        self.gamma=.9                #decay applied to old q_space values when updating
        self.epsilon=epsilon         #probability of exploring (making a random move)
        self.cur_score=0
        self.hand=None
        if loading==False:
            #22 hand-score rows x 13 dealer cards x 2 actions (stay/hit)
            self.q_space=np.zeros((22,13,2))
        else:
            self.q_space=q_space
    def playhand(self,cards,dealercard):
        #Choose "Stay" or "Hit" for the current cards against the dealer's visible card.
        self.hand=hand(cards)
        self.cur_score=self.hand.getscore()
        greed=np.random.choice(["Explore","Exploit"],p=[self.epsilon,1-self.epsilon])
        if greed=="Explore":
            action=np.random.choice(["Stay","Hit"],p=[.5,.5])
        else:
            if self.cur_score>21:
                index=21
            else:
                index=self.cur_score
            #Scores are offset by 4 so that the lowest possible score maps to row 0.
            action=np.argmax(self.q_space[index-4,dealercard,:])
            if action==0:
                action="Stay"
            else:
                action="Hit"
        return action
    def play(self,_print=True):
        won=False
        a=np.random.choice(deck,2)       #two cards drawn (with replacement - a simplification)
        dc=np.random.randint(2,12)       #dealer's visible card value (2-11)
        action=self.playhand(a,dc)
        states=[]
        states.append([self.cur_score,dc,action,0])
        index=0
        while action=="Hit":
            card=np.random.choice(deck)
            a=list(a)
            a.append(card)
            a=np.array(a)
            action=self.playhand(a,dc)
            if self.cur_score>21:
                states[index][3]=-1      #the state whose Hit caused the bust is penalised
            if self.cur_score==21:
                states[index][3]=+1      #the state whose Hit reached 21 is rewarded
            states.append([self.cur_score,dc,action,0])
            index+=1
        if self.cur_score>21:
            states[index][3]=-1          #the bust state itself is also penalised
        dealer=dc
        #Dealer plays: keeps drawing random card values while behind and not bust
        if self.cur_score<22:
            while (self.cur_score>=dealer) & (dealer<22):
                dealer+=np.random.randint(2,12)
        if (dealer<22) & (self.cur_score<dealer):
            states[index][3]=-1
        elif self.cur_score<22:
            states[index][3]=1
            won=True
        else:
            states[index][3]=-1
        self.learn(states)
        if won==True:
            self.wins+=1
            if self.wins%3==0:
                self.epsilon*=.999       #slowly shift from exploring to exploiting
        else:
            self.losses+=1
        result= "Win" if won==True else "Lost"
        if _print==True:
            print(self.hand,"Score: "+str(self.cur_score), "Dealer: "+str(dealer),result)
    def learn(self,states):
        #Update the q_space in order to learn from the current play
        for each in states:
            if each[0]>21:
                index=21                 #busted hands are stored in the last row of the table
            else:
                index=each[0]-4
            if each[2]=="Hit":
                action=1
            else:
                action=0
            self.q_space[index,each[1],action]=self.q_space[index,each[1],action]*self.gamma+each[3]
Jack=player()
Jack.play()
Jack.play()
From the first attempts you can see that Jack is playing randomly. The second hand was dealt a 21 and Jack decided to hit anyway, ironically still ending on 21. Jack will not play with any sense in the first stages of learning the game. This exploration phase is how the rewards of the game are discovered, and it is extremely important for avoiding getting stuck in sub-optimal strategies.
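Because epsilon is multiplied by .999 on every third win, the amount of remaining exploration can be estimated from the win count alone. A rough illustration (the win count here is purely hypothetical):
wins=120000                  #hypothetical win count, for illustration only
print(1.0*0.999**(wins//3))  #effectively zero: at this point Jack has stopped exploring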
Jack=player()
winlist=[]
for q in range(3000):
    for i in range(100):
        Jack.play(_print=False)
    winrate=Jack.wins/(Jack.wins+Jack.losses)   #cumulative win rate over all games so far
    winlist.append(winrate)
x=np.arange(len(winlist))*100
The initial stages of the learning process are very volatile, but over time, as Jack becomes more comfortable, the win/loss ratio becomes more consistent. Of course, the dealer still has an advantage in this game setup and will win more than 50% of the time regardless of strategy.
This learning curve can be slowed down or sped up with simple changes to the learning rate and the explore/exploit ratio.
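For example, a player can be started with a smaller epsilon (less exploration) or given a different gamma after construction. These particular values are illustrative only:
cautious=player(epsilon=0.5)  #hypothetical variant: explores on only half of its moves at first
cautious.gamma=0.8            #decays old q_space values faster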
import pandas as pd
import matplotlib.pyplot as plt
z=pd.DataFrame({"Games":x,"Winrate": winlist},columns=["Games","Winrate"])
plt.figure(figsize=(13,7))
plt.plot(z["Games"],z["Winrate"])
plt.xlabel('Games')
plt.ylabel('Win-Rate')
plt.title('Learning Curve')
plt.show()
Most of Jack's strategies end up being fairly monotonic. That is partly why this game is a good setup for a computer to learn: the best strategy is deterministic and static over time, since the game does not adapt to your strategic choices the way some games do.
Looking at the q_space below (Jack's decision-making memory):
#Decisions when holding 21 (left column: stay, right column: hit); row 17 corresponds to a score of 21.
#Of course, these decisions are fairly easy to make.
Jack.q_space[17,2:12,:]
Decisions for holding 16 are more mixed; the dealer's card needs to be considered before making a decision. While holding 16, Jack ends up deciding to stay only when the dealer is showing a 5.
Jack.q_space[12,2:12,:] #row 12 corresponds to a score of 16
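The whole learned policy can be read out of the q_space by taking the argmax over the two actions for every hand score and dealer card. A small sketch of one way to display it:
#Print Jack's current hit/stay decision for each hand score against each dealer card.
for score in range(12,22):
    row=[]
    for dealer in range(2,12):
        choice=np.argmax(Jack.q_space[score-4,dealer,:])
        row.append("Hit" if choice==1 else "Stay")
    print(score,row)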
Jack.wins, Jack.losses=0, 0
for i in range(100):
    Jack.play()
print("100 Sampled Hands Win rate {}%".format(Jack.wins*100/(Jack.losses+Jack.wins)))