The goal of a decision tree is to investigate the possible
game outcomes given the current state of the game. To do this, a tree
of game states is built, where the depth of the tree represents how many moves
into the future are being investigated. To evaluate the
strength of one move branch over another, we return to the general strategy of
gaining irreversible board locations.
Somewhere in the decision tree's
branches lies the best possible outcome, the worst possible outcome,
and everything in between for any particular move. Since we are interested in
gaining positions that we cannot lose, it is the worst possible outcome resulting
from each immediately possible move that will receive our attention.
More to the point, if we choose the move that has the best worst-case
outcome each time, i.e. the one that creates irreversible pieces, then we are constantly
making moves that improve our strategic strength.
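Choosing the move with the best worst-case outcome is, in effect, a minimax selection over the tree. The following is a minimal Python sketch of that rule, assuming hypothetical legal_moves, apply_move, and evaluate helpers supplied by the caller (none of these names come from the original text).

```python
def best_worst_case_move(state, depth, legal_moves, apply_move, evaluate):
    """Pick the move whose worst possible outcome is the best available.

    legal_moves, apply_move, and evaluate are assumed helpers: they enumerate
    the moves in a state, produce the successor state for a move, and score a
    state from the root player's perspective, respectively.
    """
    def worst_outcome(s, d, our_turn):
        moves = legal_moves(s)
        if d == 0 or not moves:
            return evaluate(s)
        outcomes = [worst_outcome(apply_move(s, m), d - 1, not our_turn)
                    for m in moves]
        # On our turn we take the best branch; on the opponent's turn we
        # assume they take the branch that is worst for us.
        return max(outcomes) if our_turn else min(outcomes)

    # Score each immediately possible move by its worst-case value and keep
    # the move whose worst case is best.
    return max(legal_moves(state),
               key=lambda m: worst_outcome(apply_move(state, m), depth - 1, False))
```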
In order to evaluate
the value of each possible outcome [node] in the tree, one should use a board
location value matrix. The behavior of the decision tree is further increased if
a relative strength (player 1 vs. player 2) is used as an indicator
instead of the board value for the active player alone. Hence, a location value
ratio is a great way to evaluate a tree node's weight:
Location Value
Ratio: The sum of the active player's board values divided by the sum of
the opponent's board values, where the each players values are established using
their relative board location value matrix.
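A minimal sketch of this ratio follows, assuming the board marks squares with 'A' for the active player, 'O' for the opponent, and None for empty, and that each player's location value matrix is a 2-D list of weights (these representations are illustrative, not from the original text).

```python
def location_value_ratio(board, active_matrix, opponent_matrix):
    """Sum of the active player's location values over the opponent's sum.

    board[r][c] is assumed to hold 'A', 'O', or None; active_matrix and
    opponent_matrix hold each player's board location values.
    """
    active_total = sum(active_matrix[r][c]
                       for r, row in enumerate(board)
                       for c, cell in enumerate(row) if cell == 'A')
    opponent_total = sum(opponent_matrix[r][c]
                         for r, row in enumerate(board)
                         for c, cell in enumerate(row) if cell == 'O')
    # Guard against division by zero when the opponent holds no valued squares.
    return active_total / opponent_total if opponent_total else float('inf')
```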
Parsing
It is necessary to parse these decision trees using some
heuristic(s) so that they can be used in real time.
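One illustrative heuristic (not necessarily the one intended here) is to combine a depth cutoff with a wall-clock budget: deepen the search one ply at a time and keep the result of the deepest pass that completes before the deadline.

```python
import time

def choose_move_within_budget(state, time_budget_s, search, max_depth=8):
    """Iterative deepening under a time budget (an illustrative heuristic).

    search(state, depth) is assumed to return the best move found by a
    depth-limited search such as best_worst_case_move above.
    """
    deadline = time.monotonic() + time_budget_s
    best = None
    for depth in range(1, max_depth + 1):
        # The budget is checked only between passes, so the last pass may
        # slightly overrun the deadline.
        if time.monotonic() >= deadline:
            break
        best = search(state, depth)
    return best
```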