Chess Programming Theory
Negamax Search
This is the fundamental structure around which the rest of ColChess'
tree searching algorithm is based. To put it simply, Negamax tree searching
implements the idea that "The worse your opponent's best reply is, the better
your move."
Implementing this is surprisingly easy. It relies on the fact that chess is a
zero-sum game, and that the analysis function must therefore give symmetric
scoring. That is to say that at any point, the score for white is exactly
minus the score for black, or equivalently the sum of the two scores always
equals zero.
This is quite straightforward to understand. If white is winning
by one pawn, then clearly black is losing by the same amount. This
principle can be extended to positional advantages, i.e. if white has
doubled rooks on one file, then white has a bonus score, whereas black's position
is weaker by exactly that amount.
Negamax simply implements the following rough search steps:

  Loop through all moves
    Play move
    move_score = - opponent's_best_move_score
    if ( move_score > best_move_score ) then ( best_move = move )
    Undo move
  End of loop
As you can see, this is a recursive algorithm. To calculate the score for
the first player's move, you must calculate the best of his opponent's
moves. For each of his opponent's moves, you must calculate the best score
for the replies to those moves, and so on. Clearly we need to define some
sort of cutoff depth, or else this will continue for ever. To do this, we
implement the following slightly refined version of the above code:

  (Set depth initially to the required value, and best_move_score to -infinity)

  SEARCHING_FUNCTION {
    Decrease depth by 1
    Loop through all moves
      Play move
      if ( depth = 0 ) move_score = static_position_score
      else move_score = - opponent's_best_move_score
      if ( move_score > best_move_score ) then ( best_move = move )
      Undo move
    End of loop
    Return best_move_score
  } END
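In compilable C, the same recursion might look like this. The game tree here is a hypothetical hard-coded one (two moves per side, two plies deep, with made-up leaf scores) rather than a real chess position; the leaf scores follow the negamax convention of being from the point of view of the side to move at that leaf.

```c
#include <limits.h>

/* Toy game tree, heap-indexed: node 0 is the root, and the children of
 * node n are nodes 2n+1 and 2n+2.  The leaf scores (nodes 3-6, at depth
 * 0) are invented, and are given from the point of view of the side to
 * move at that leaf -- the negamax convention. */
static const int leaf_score[7] = { 0, 0, 0,   /* internal nodes, unused */
                                   3, 5, -2, 7 };

/* Negamax: the value of a node is the maximum, over all moves, of
 * minus the value of the resulting position for the opponent. */
int negamax(int node, int depth)
{
    if (depth == 0)
        return leaf_score[node];         /* static_position_score */

    int best = INT_MIN;
    for (int child = 2 * node + 1; child <= 2 * node + 2; child++) {
        int score = -negamax(child, depth - 1);
        if (score > best)
            best = score;
    }
    return best;
}
```

Here negamax(0, 2) walks the whole two-ply tree: each one-ply subtree is scored for the opponent, negated, and the best of the negated scores is kept.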
Here, the static position score routine simply returns a score for the
current position based on various considerations. We can imagine a simple
one which simply adds up the points values for all the current side's
pieces, and subtracts the points values for the opponent's pieces. Clearly
this will be very fast, but will play extremely bad chess, because two
positions with identical material can nonetheless be vastly better for one
player than the other. As a simple example, imagine the endgame position where
white has a king on h1 and a pawn on h2, black has a king on a3 and a pawn
on a2. Clearly this is won for black, regardless of who is to play next.
However, they both have exactly the same material.
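A material-only evaluator of the kind just described might be sketched like this in C. The piece-count representation and the conventional piece values (on a pawn = 100 scale) are illustrative assumptions, not ColChess's actual code:

```c
/* Material-only evaluation on a pawn = 100 scale.  Each array holds one
 * side's piece counts: pawns, knights, bishops, rooks, queens.  This
 * representation is invented purely for illustration. */
static const int piece_value[5] = { 100, 300, 300, 500, 900 };

static int material(const int counts[5])
{
    int total = 0;
    for (int i = 0; i < 5; i++)
        total += piece_value[i] * counts[i];
    return total;
}

/* Score for the side to move: own material minus the opponent's.
 * Note the symmetry required by negamax:
 * material_score(a, b) == -material_score(b, a). */
int material_score(const int own[5], const int opp[5])
{
    return material(own) - material(opp);
}
```

With a full set against a set missing one pawn, the score is +100 one way round and -100 the other, exactly the zero-sum symmetry negamax relies on.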
Analysis Function
ColChess uses a complicated set of analysis functions depending on what
material is on the board. I have several separate endgame analysis
functions for certain types of endgame such as kings and pawns only. The
main analysis routine, however, implements the following simple ideas:
Firstly, loop through the board, and make up an array of attack and defence
values for each square. This is a list of how many times each square is
attacked by an opponent piece or defended by a friendly one. Also keep a
list of how many pawns are on each file for future reference.
Next, loop through the board again, but this time do a much deeper analysis.
Award points according to how strongly each empty square is attacked and defended. Analyse
each piece on the board separately by considering factors which
are important for that particular type of piece. For example, reward
knights for occupying squares near the centre of the board, and reward kings
for staying out of trouble in the corners. Also add on a material score for
each piece you own on the scale of a pawn=100 points. In ColChess I use
variable piece values depending on the stage of the game, i.e. bishops
become more valuable in open endgames.
Finally, add on a load of other important general factors, for example
reward pawn chains, bishop pairs, connected rooks and control of the board.
Penalise doubled or tripled pawns and also blocked or 'hung' pieces (that
is, pieces which are attacked and not defended). I do checkmate testing
elsewhere, so that is not included in the static analysis function.
Specialised endgame functions have different weightings, for example passed
pawns become much more dangerous when there are no opposing pieces around to
stop them from promoting.
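As one concrete fragment of such an analysis pass, the pawns-per-file tally and a doubled/tripled pawn penalty might look like the sketch below. The 15-points-per-extra-pawn weighting is a made-up placeholder, not ColChess's actual value:

```c
/* Count pawns per file for one side, then penalise doubled or tripled
 * pawns.  pawn_files[] lists the file (0-7) of each of the side's
 * pawns.  The penalty of 15 points per extra pawn on a file is an
 * invented placeholder weighting. */
int doubled_pawn_penalty(const int pawn_files[], int npawns)
{
    int per_file[8] = { 0 };
    for (int i = 0; i < npawns; i++)
        per_file[pawn_files[i]]++;      /* first pass: tally the files */

    int penalty = 0;
    for (int f = 0; f < 8; f++)
        if (per_file[f] > 1)            /* doubled, tripled, ... */
            penalty += 15 * (per_file[f] - 1);
    return penalty;
}
```

So a side with doubled a-pawns and tripled d-pawns pays for the three surplus pawns, while a healthy structure pays nothing.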
Iterative Deepening
Often in games the computer player will be forced to search for a certain
time, rather than to a fixed depth. Iterative deepening allows it to do
this, but has many other important bonuses connected with the
transposition table. The theory is simply this:
Start at a shallow depth, say 2 ply. That means that you search each of
your moves, and for each of them you find your opponent's best reply and
immediately return that static score. Once you have searched the tree to
depth 2 ply, then increase the depth to 3 and search again. Continue doing
this until either (a) you have reached the minimum required depth and run
out of time, or (b) you have reached the maximum allowed depth.
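The deepening loop itself might be sketched as follows. In a real program the stopping test would be a clock and each iteration would run a full search; here a node budget stands in for the time limit, and the toy cost model (work doubling with each extra ply) is purely illustrative:

```c
/* Iterative deepening driver.  In a real program search(depth) would be
 * the alpha-beta search and the stopping test would be a clock; here a
 * node budget stands in for time, and cost_of() is a toy model in
 * which each extra ply doubles the work. */
static long cost_of(int depth) { return 1L << depth; }

/* Returns the deepest ply fully completed within the budget, starting
 * from a shallow 2-ply search and deepening one ply at a time. */
int deepest_completed(int max_depth, long node_budget)
{
    long used = 0;
    int completed = 0;
    for (int depth = 2; depth <= max_depth; depth++) {
        if (used + cost_of(depth) > node_budget)
            break;                 /* out of "time": keep the last result */
        used += cost_of(depth);    /* search(depth) would run here */
        completed = depth;
    }
    return completed;
}
```

Note that even in this model, all the shallower searches together cost no more than the next ply alone, which is why running them first wastes so little effort.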
The first obvious advantage of this method is that you will always have some
result to show from your search, and you know that it won't be a total
mistake, at least to a few ply. Imagine a program which guesses the search
depth to work to based on the time allocation and the board complexity, and
then searches the first thirty of 31 available moves at a particular game
stage. Then the program runs out of time and is forced to return the best
move found so far. These 30 moves might all be complete blunders,
immediately losing a piece or worse. However, the 31st move might be the
only one to save his pieces, and win the game. This naive program never
even considered it!
Using iterative deepening, you know that you have a good foundation to build
on. A 2 ply search takes no time at all, usually a tiny fraction of a
second on fast computers. Therefore you know that when you search to depth
3 ply, you already have a good idea of which move might end up being the
best. Furthermore, you can use all of the information derived in previous
shallower searches to speed up vastly the subsequent deeper searches.
Counter-intuitively, searching all the shallower depths first normally
reduces the time for the overall search compared to just starting at
a deep level initially.
This can be seen easily with the following example. Imagine you are
planning to search a position to depth 10, say, and you start off iterative
deepening at 2 ply, then 3 ply etc... until you reach 6 ply where you spot
that all but one of the possible moves loses to checkmate! Clearly there is
no point fully searching this one remaining move to 10 ply as you have no choice
but to play it! You simply quit the search there and return the only safe move.
Of course this move may also lose to checkmate in more than 6 ply, but
that's no worse than any of the other move options and it makes it less
likely that your opponent has spotted the win.
Alpha-Beta Pruning
Alpha-beta pruning is a technique for enormously reducing the size of your
game tree. As it stands, the negamax algorithm searches every
reply to every move in the tree. In the average chess position there are
about 30 legal moves, and let's say for sake of argument that our program
analyses 50,000 moves per second. Now, let's see how deep we can
search.
  Depth (ply) | Number of positions | Time to search
  ------------+---------------------+---------------
            2 |                 900 |        0.018 s
            3 |              27,000 |         0.54 s
            4 |             810,000 |         16.2 s
            5 |          24,300,000 |      8 minutes
            6 |         729,000,000 |        4 hours
            7 |      21,870,000,000 |         5 days
You can see that the search tree quickly gets extremely large, and that's
assuming a rather fast analysis function. Searching to 6 ply in the
middlegame is vital for a good chess program playing blitz. The total time
limit might be 5 minutes, perhaps giving 15-20 seconds per move. Using this
method we can only just search to ply 4. What we need to do is enormously
reduce the tree size. This is where alpha-beta pruning comes in.
Alpha-beta pruning is just a complicated-sounding name for "Don't check
moves which cannot possibly have an effect on the outcome of your search."
The theory is essentially simple, but takes a bit of thought before you get
used to it. Imagine the following scenario:
Your chess program is busy searching all its moves at the top level. So far
it has searched the first six moves, and the best score has been 15 points.
It starts searching the seventh move and considers its opponent's replies.
Remember that your score is minus your opponent's best score. The program steps
through all of its opponent's replies to this one move, getting the
following scores;
-20,-22,-15,-16,-11,-18,-20,-30,-70,-75
Now the best amongst these is -11, which gives your program a score for its
move of -( -11 ), which is 11. This is worse than the previous best move at
this level, so this particular move is ignored. But we have wasted a lot of
time here searching the later replies. As soon as the third reply scored
-15, we knew that the opponent's best reply was guaranteed to score at
least -15, meaning that our move would score at most 15 and therefore could
not possibly beat the best score so far of 15. Whatever the rest of the
replies scored, we were going to disregard this move. So what we should
have done was to quit the search as soon as that reply was discovered, and
stop wasting our time.
This is the principle of alpha-beta pruning. Its application to negamax
search goes something like this:

  initially alpha = -INFINITY, beta = INFINITY

  search(position, side, depth, alpha, beta) {
    best_score = -INFINITY
    for each move {
      do_move(position, move)
      if ( depth is 0 ) move_score = static_score(position, side)
      else move_score = - search(position, opponent side, depth-1, -beta, -alpha)
      undo_move(position, move)
      if ( move_score > best_score ) best_score = move_score
      if ( best_score > alpha ) alpha = best_score
      if ( alpha >= beta ) return alpha
    }
    return best_score
  }
Apologies for moving into pseudo-c but this algorithm requires that the
correct variables are passed to the next level and it would be impossible to
show it in any other way. I hope that all the symbols are obvious.
Whenever we are searching, we check whether the score of the move we have
just examined is greater than alpha. If it is, then we replace alpha by the
new score. This way alpha keeps track of the best score found so far.
If a move scores no more than alpha then we're not interested in it, because
it isn't good enough to improve our best score. If the score lies between
alpha and beta then it will reduce the score of the opponent's previous
move, but not by enough to make that move bad. If a reply move scores equal
to or greater than beta, then this move is so good that the opponent's last
move becomes bad; the opponent would never have played it and let this
situation arise, so we need search no more. We return this value of the
best score immediately. This is called a fail-high cutoff.
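Rendered as compilable C, the pseudo-code above might look like this. The two-ply, three-moves-per-side tree and its leaf scores are made up, and a counter records how many static evaluations are performed so that the saving from pruning is visible:

```c
/* Hypothetical 3-ary, two-ply tree: the children of node n are nodes
 * 3n+1..3n+3.  Nodes 4-12 are leaves; their scores are invented and
 * follow the negamax convention (scored for the side to move there). */
enum { INF = 1000000 };
static const int ab_leaf[13] = { 0, 0, 0, 0,      /* internal, unused */
                                 3, 5, 2,  1, 8, 6,  0, 7, 4 };
static int ab_leaves_seen;     /* counts static evaluations performed */

/* Negamax with alpha-beta pruning; compare the pseudo-code above. */
int alphabeta(int node, int depth, int alpha, int beta)
{
    if (depth == 0) {
        ab_leaves_seen++;
        return ab_leaf[node];          /* static_score */
    }
    int best = -INF;
    for (int c = 3 * node + 1; c <= 3 * node + 3; c++) {
        int score = -alphabeta(c, depth - 1, -beta, -alpha);
        if (score > best)
            best = score;
        if (best > alpha)
            alpha = best;
        if (alpha >= beta)
            return alpha;              /* fail-high cutoff: stop here */
    }
    return best;
}
```

On this tree the pruned search evaluates only 5 of the 9 leaves, yet returns the same score (2) as searching every leaf would.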
Alpha-beta search effectively reduces the branching factor at each node
from 30-40 to about 7 or 8 provided that you have a reasonably good initial
move ordering routine. It is clearly advantageous to have the best moves
near the top of the list so that all the other moves are much more likely to
cause cutoffs early. Even basic maths can tell you that such a
reduction in the branching factor will almost double the depth to which you
can search in a fixed time. Alpha-beta search is used by all good chess
programs. It doesn't change the outcome on a fixed depth search one bit as
all the branches it prunes out are irrelevant and wouldn't have altered the
move score. Moreover, with twice the search depth you can get vastly better
moves and play much stronger chess.
Principal Variation Search
This is a further refinement on negamax search with alpha-beta cutoffs. The
idea behind it is rather simple, but it is a little tricky to implement
because of a few annoying subtleties. Basically, one expects to have a good
move ordering function which takes the list of all legal moves at each
position, and naively puts the moves which are likely to prove to be the best
near the top. One usually considers captures first, followed by pawn pushes
and checks, moves towards the centre, and then the rest. This tends to put
the best move at or near the top of the list in the majority of cases.
So assuming that your preliminary move ordering is good, it is unnecessary
to search the rest of the moves quite so thoroughly as the first. You don't
need to know what they score exactly, you just need to make sure that they
aren't better than the best move so far. To this end, we just search the
first move properly, and then for each subsequent move we do a search with a
narrow window of 1 point, using the following code:

  move_score = - search(position, opponent side, depth-1, -alpha-1, -alpha)

instead of

  move_score = - search(position, opponent side, depth-1, -beta, -alpha)
If we were correct in our assumption, and the best move really was at the
front of the list, then this search will return an approximate value much
faster than doing a full search. However, if we were wrong then it will
exit, hopefully quickly, with a fail high beta cutoff, indicating that this
move will actually improve alpha, and that it therefore needs to be searched
properly. (Remember that 'improving alpha' means that this move is going to
beat the best score so far)
The expectation is that the gain in time caused by carrying out so many
searches with narrow search bounds will vastly offset the penalty of
occasionally having to search again after a cutoff. As with all alpha-beta
methods, the better the preliminary move ordering, the more efficiently the
pruning works, and the faster your program runs.
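The whole PVS loop might be sketched as below, again over a hypothetical hard-coded tree with made-up leaf scores. The move ordering is deliberately imperfect here (the best root move is second in the list), so the zero-window search fails high once and triggers the re-search described above:

```c
/* Principal variation search over a toy 3-ary, two-ply tree (children
 * of node n are 3n+1..3n+3; nodes 4-12 are leaves with invented
 * scores, negamax convention).  The best root move is deliberately
 * second, so one zero-window search fails high and is re-searched. */
enum { INF = 1000000 };
static const int pv_leaf[13] = { 0, 0, 0, 0,
                                 1, 8, 6,  3, 5, 2,  0, 7, 4 };

int pvs(int node, int depth, int alpha, int beta)
{
    if (depth == 0)
        return pv_leaf[node];

    int first = 1;
    int best = -INF;
    for (int c = 3 * node + 1; c <= 3 * node + 3; c++) {
        int score;
        if (first) {
            score = -pvs(c, depth - 1, -beta, -alpha);  /* full window */
            first = 0;
        } else {
            /* zero-width window: just prove the move is no better */
            score = -pvs(c, depth - 1, -alpha - 1, -alpha);
            if (score > alpha && score < beta)          /* fail high: */
                score = -pvs(c, depth - 1, -beta, -alpha); /* re-search */
        }
        if (score > best) best = score;
        if (best > alpha) alpha = best;
        if (alpha >= beta) return best;
    }
    return best;
}
```

The zero-window probe of the second root move returns a score above alpha, so that move is searched again with the full window and duly becomes the new best, giving the same final score a plain alpha-beta search would.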
Transposition Table
The transposition table is a method of storing work which we have already
done so that we don't have to do it again. If we search to depth 6 and find
that all but three of the moves we could play lose to checkmate, then we
should store this, so that when we come to search to depth 7 we needn't
bother searching those moves at all.
The transposition table in ColChess works by creating a 'hash-value' for the
current board position based on a random table of large integers. The
expectation is that each board position has a separate hash-code, or
identification number if you like. In practice it is possible, though
extremely unlikely, for two different boards to share the same code, so I
generate two hash keys in different ways, making the probability of a
collision minuscule.
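In sketch form, such a hashing scheme (commonly called Zobrist hashing) might look like this. The 12-by-64 table layout and the splitmix64 generator are illustrative implementation choices, not necessarily ColChess's:

```c
#include <stdint.h>

/* Zobrist-style hashing sketch: one random 64-bit number per
 * (piece type, square) pair -- 12 piece types (6 per side), 64
 * squares.  splitmix64 is a simple deterministic generator used here
 * to fill the table of large random integers. */
static uint64_t zobrist[12][64];

static uint64_t splitmix64(uint64_t *state)
{
    uint64_t z = (*state += 0x9e3779b97f4a7c15ULL);
    z = (z ^ (z >> 30)) * 0xbf58476d1ce4e5b9ULL;
    z = (z ^ (z >> 27)) * 0x94d049bb133111ebULL;
    return z ^ (z >> 31);
}

static void zobrist_init(void)
{
    uint64_t state = 1234567ULL;        /* any fixed seed will do */
    for (int p = 0; p < 12; p++)
        for (int sq = 0; sq < 64; sq++)
            zobrist[p][sq] = splitmix64(&state);
}

/* Moving piece p from square a to square b updates the key with two
 * XORs.  XOR is its own inverse, so unmaking the move restores the
 * key, and different move orders reaching the same position give the
 * same key -- which is exactly what makes transpositions detectable. */
uint64_t zobrist_move(uint64_t key, int p, int a, int b)
{
    static int initialised;
    if (!initialised) { zobrist_init(); initialised = 1; }
    return key ^ zobrist[p][a] ^ zobrist[p][b];
}
```

Making and unmaking a move returns the original key, and two different move orders that reach the same position hash to the same code.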
Before a search is done, ColChess checks in the hashtable to see if the
current board position has been searched before. If it has been searched
before to at least the depth to which it is currently being searched then
there's no need to search it again. If it hasn't been searched sufficiently
before, then a search proceeds as normal.
As soon as a search is completed then the hash table is updated with the new
results, storing the board hash key (or code) and the score which the search
returned. I also store the depth to which this position was searched, and
the 'type' of the score. Lots of the time the position will be scored
exactly, but because of alpha-beta pruning the score returned is often only
a lower or an upper bound. Although a bound is not as useful as an exact
score, it is still worth storing, because if, for example, the stored upper
bound on the score is no better than alpha, then you needn't bother
searching this position any deeper. Even though you don't know what it
would score exactly, you know that it wouldn't be good enough to make a
difference.
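The probe logic for these score types might be sketched as follows. The entry layout and the names are invented for illustration, not taken from ColChess:

```c
/* Sketch of a transposition-table entry and the probe logic described
 * above.  Field layout and names are invented for illustration. */
enum tt_flag { TT_EXACT, TT_LOWER, TT_UPPER };

struct tt_entry {
    unsigned long long key;    /* board hash code */
    int depth;                 /* depth this position was searched to */
    int score;
    enum tt_flag flag;         /* exact score, or just a bound? */
};

/* Returns 1 and fills *score if the stored result lets us skip the
 * search at this depth and (alpha, beta) window, 0 otherwise. */
int tt_usable(const struct tt_entry *e, unsigned long long key,
              int depth, int alpha, int beta, int *score)
{
    if (e->key != key || e->depth < depth)
        return 0;                     /* wrong position, or too shallow */
    if (e->flag == TT_EXACT) {
        *score = e->score;  return 1;
    }
    if (e->flag == TT_LOWER && e->score >= beta) {
        *score = e->score;  return 1; /* at least this good: cutoff   */
    }
    if (e->flag == TT_UPPER && e->score <= alpha) {
        *score = e->score;  return 1; /* at most this good: hopeless  */
    }
    return 0;                         /* bound doesn't settle anything */
}
```

An exact score stored at sufficient depth is always reusable; a mere bound is reusable only when it already falls outside the current alpha-beta window.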
Killer Moves
Move ordering is all-important in chess programming. The nearer your best
move is to the top of the move list, the quicker all the other moves will
fail-high and the quicker the whole process will be.
The killer move heuristic is a good way of ensuring that moves which
repeatedly perform well at a certain ply are searched before the rest.
Often you will
find that your opponent has one good move that you must stop him or her from
playing. This move will become a killer move because most of the moves you
can play lead to your opponent playing this one good move, and winning a
piece or more.
It is therefore a good idea to keep track of the moves which repeatedly
cause beta cutoffs at each level, and place them nearer the top of the move
list, depending on how often they are played. If these killer moves are
searched first then the pruning will be much better as the computer won't
have to waste time searching lots of other (bad) moves first.
In ColChess I keep track of the top two killer moves, and the scores
associated with them. Whenever a better move comes along then I displace
the lower killer move, and place the better move in either first or second
place depending on how much it scores.
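That two-slot scheme might be sketched as below. A move is represented by a bare int and the slot layout is invented, following the description above rather than ColChess's actual code:

```c
/* Two killer-move slots per ply, with the scores that earned them.
 * A move is just an int here; the layout is invented for illustration. */
enum { KILLER_NONE = -1000000 };

struct killers { int move1, score1, move2, score2; };

void killer_update(struct killers *k, int move, int score)
{
    if (move == k->move1 || move == k->move2)
        return;                          /* already stored (score update
                                            omitted for brevity) */
    if (score > k->score1) {             /* new best: demote the old one */
        k->move2 = k->move1;  k->score2 = k->score1;
        k->move1 = move;      k->score1 = score;
    } else if (score > k->score2) {      /* displace the lower killer */
        k->move2 = move;      k->score2 = score;
    }
}

/* Small demonstration: 202 scores best so it takes the first slot,
 * and 303 then displaces 101 from the second slot. */
int killer_demo(void)
{
    struct killers k = { 0, KILLER_NONE, 0, KILLER_NONE };
    killer_update(&k, 101, 50);
    killer_update(&k, 202, 80);
    killer_update(&k, 303, 60);
    return k.move1 * 1000 + k.move2;     /* encodes both slots */
}
```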
History Heuristic
The history heuristic is similar to the killer move heuristic, but
independent of the current depth. I keep track of all the best moves which
have been played in a table of size 64*64, indexed by 'from' square and 'to' square.
All these values start off at 0 and are increased by one for each time the
corresponding move is the best. Therefore the good moves score much more
highly than the bad moves, and I can add the value from this table onto the
move's preliminary score before move ordering, creating a much more accurate
ordering sequence.
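In sketch form the table and its use in ordering might look like this. The move struct, the insertion sort, and the square numbers in the demonstration are all illustrative choices, not ColChess's actual code:

```c
/* History heuristic sketch: a 64x64 table indexed by from- and
 * to-square.  An entry is incremented each time that move proves best,
 * and added onto a move's preliminary score before ordering. */
static int history[64][64];

void history_good_move(int from, int to) { history[from][to]++; }

struct omove { int from, to, base; };    /* base = preliminary score */

/* Order moves best-first by preliminary score plus history bonus
 * (a simple insertion sort; fine for short move lists). */
void order_moves(struct omove *m, int n)
{
    for (int i = 1; i < n; i++) {
        struct omove key = m[i];
        int ks = key.base + history[key.from][key.to];
        int j = i - 1;
        while (j >= 0 && m[j].base + history[m[j].from][m[j].to] < ks) {
            m[j + 1] = m[j];
            j--;
        }
        m[j + 1] = key;
    }
}

/* Demonstration: a move from square 12 to 28 (hypothetical numbering)
 * keeps proving best, so despite a lower preliminary score it should
 * be ordered ahead of the other move. */
int history_demo(void)
{
    history_good_move(12, 28);
    history_good_move(12, 28);
    history_good_move(12, 28);
    struct omove moves[2] = { { 6, 21, 1 }, { 12, 28, 0 } };
    order_moves(moves, 2);
    return moves[0].from;                /* history move should lead */
}
```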
I have also recently implemented a version of this which stores a different
history table for each ply, but I haven't really noticed any great
improvements from using this algorithm so it is only optional.
Quiescence Search
The problem with abruptly stopping a search at a fixed depth is something
called the 'horizon effect'. It might be that you have just captured an
opponent's pawn with your queen at depth 0, so you return that score,
justifiably proud. However, had you searched one ply deeper, you would have
seen that the opponent could recapture your queen!
To get round this problem, ColChess (like all good chess programs)
implements a method called 'quiescence searching'. As the name suggests,
this involves searching past the terminal search nodes (depth of 0) and
testing all the non-quiescent or 'violent' moves until the situation becomes
calm. This enables programs to detect long capture sequences and calculate
whether or not they are worth initiating.
There are several problems with this method which need to be overcome.
Firstly, it could indeed cause an explosion in the size of the game tree if
used unwisely. ColChess has three levels of quiescence searching, each with
different limits to overcome this problem:
- Full width quiescence search
- Reduced width quiescence search
- Capture search
Full width search is not much different to the original search at
depth>0, generating all the possible available moves and testing to see
which one is the best. Clearly this is slow so it is used only in
exceptional circumstances. By default this is not performed at all, but
with the use of quiescence width extensions it
occasionally comes into play. Generally it is only used in extremely
dangerous situations where the side to move is in check and may well lose
out because of this.
Reduced width search is carried out at only a shallow quiescence
depth and the exact length of this depends on how dangerous the position is.
Generally it is only 0 or 1 ply, but occasionally 2 or 3 ply beneath the
terminal leaf nodes (depth = 0). In this search regime I consider only a
subset of all the available moves, namely those moves which have potential
to drastically change the current static score. Those are captures, checks
and pawn advances. If ColChess comes to do a reduced-width quiescence search
at a particular node and finds that the side to move is in check, then it
does a full-width search instead.
A capture search is performed after the maximum specified
reduced-width search. Just as the name suggests, this involves testing
only the capture moves at every node. Because this is much faster, and very
important to do, I let this search continue until each branch is quiescent
and there are no more captures possible. That is to say this is an infinite depth
search, though of course there are only a finite number of pieces to be captured!
Of course there is another very important problem with quiescence searching.
It is often better for a player not to initiate a capture line at all,
because doing so would lose out in the long run. To handle this, at every
node I give the computer the option of 'not moving' and simply accepting
the static evaluation at that point (the so-called 'stand pat' score). If
this turns out to be better than any of the capturing options then that
value is returned. Often the best move is a quiet one, and forcing the
computer to make a violent move might severely worsen its position.
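A capture-only quiescence search with this stand-pat option might be sketched as below. The tiny hand-built tree models the horizon example from earlier: we have just grabbed a pawn with the queen, and the only capture in reply takes the queen. All the numbers (100 for a pawn, 800 for roughly queen-minus-pawn) are made up for illustration:

```c
enum { QINF = 1000000 };

/* A quiescence node: its static score (for the side to move) and the
 * capture moves available from it.  The layout is invented. */
struct qnode {
    int static_score;
    int ncaptures;
    const struct qnode *captures[4];
};

/* Capture-only quiescence search with the "stand pat" option: the side
 * to move may decline all captures and accept the static score. */
int qsearch(const struct qnode *n, int alpha, int beta)
{
    int stand_pat = n->static_score;
    if (stand_pat >= beta)
        return stand_pat;              /* already too good: cutoff */
    if (stand_pat > alpha)
        alpha = stand_pat;

    for (int i = 0; i < n->ncaptures; i++) {
        int score = -qsearch(n->captures[i], -beta, -alpha);
        if (score >= beta)
            return score;
        if (score > alpha)
            alpha = score;
    }
    return alpha;
}

/* The horizon example: after QxP the static score says the opponent is
 * just a pawn down (-100 for them), but recapturing the queen swings
 * the line to about +800 in their favour. */
int qsearch_demo(void)
{
    static const struct qnode after_recapture = { -800, 0, { 0 } };
    static const struct qnode after_qxp = { -100, 1, { &after_recapture } };
    return qsearch(&after_qxp, -QINF, QINF);   /* opponent to move */
}
```

The static score at the "horizon" said the pawn grab was worth +100, but the quiescence search correctly reports that the line is worth about -800 to the capturer.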
The only exception to this is of course when I do a full-width quiescence
search. Because this involves considering all of the possible moves, I do
not allow the computer the possibility of accepting the current static score.
This means that I can catch slightly deeper checkmates if they are forced
because the quiescence search considers extremely narrow, but dangerous
lines.
Quiescence Width Extensions
I have already mentioned this method in a small amount of detail, but it is
worth clarifying. This is simply a way of improving the accuracy of the
quiescence search when you think that it is necessary. Clearly there is
little point in performing a full width search in all conditions, but if
one of the kings is under heavy attack, it might well be worth it.
I keep track of a variable at each ply which tallies up the total number of
extensions so far. This is set to zero initially. Any form of
check at any point in the move analysis sets it to one. If at any point one
of the sides has only one legal move in any position then the extension
value is increased (to a maximum of 2 ply) and similarly if a side is in check
and avoids it by moving its king. I always test positions that are in the
principal variation with a depth of 3 ply.
This algorithm, amongst others, helped ColChess to make two large jumps
in test score results between versions 5.4, 5.5 and 5.6.
If you have any questions about Chess Programming Theory, or about ColChess
itself, please email me at this address.
This site is published by cmf@ast.cam.ac.uk
Last Modified 02/03/00