Lyman's Ruminations

Personal observations of an ex-math professor, software engineer, abstract games enthusiast, classical music lover and most importantly husband and father of four.


Saturday, January 08, 2005

Computer Chess or My Post-Christmas Reading

My oldest sister reviews mystery books for her local newspaper in North Carolina as a diversion from being a social work professor. As I know she will eventually read this, I apologize in advance for my attempts at a book review, although, as with all these notes, this is as much personal reflection as review per se.

Two of the books I received for Christmas were Blondie24: Playing at the Edge of AI by David Fogel (okay, it is about checkers but that made the title too long), and Behind Deep Blue: Building the Computer that Defeated the World Chess Champion by Feng-Hsiung Hsu.

The first book concerns a research experiment by David Fogel and Kumar Chellapilla in which a neural network was trained to assess a checkers position and play accordingly. More precisely, a game-playing program consists of a part that searches (i.e., from the current position, what moves can I make, and what replies can my opponent make?) and an evaluation function that sorts out the best line of play in each case. For this experiment the "rules" of checkers were embedded in a straightforward implementation of the search. The innovation came in the evaluation function, which was implemented by means of a neural net.
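To make that split concrete, here is a toy sketch of my own (nothing from the book): a fixed-depth negamax search over a trivial subtraction game, with the evaluation function kept separate so that it could, in principle, be swapped for a neural net without touching the search.

    def legal_moves(n):
        # Rules of the game: from n counters you may take 1, 2 or 3.
        return [m for m in (1, 2, 3) if m <= n]

    def evaluate(n):
        # The pluggable part: a crude positional score. Replace this
        # with a neural net and the search code need not change at all.
        return -1.0 if n % 4 == 0 else 1.0

    def negamax(n, depth):
        moves = legal_moves(n)
        if not moves:
            return -1.0          # no moves left: the player to move has lost
        if depth == 0:
            return evaluate(n)   # frontier reached: ask the evaluation function
        return max(-negamax(n - m, depth - 1) for m in moves)

    print(negamax(10, 4))        # search 4 plies, evaluate at the frontier

The same skeleton underlies the programs in both books; what differs is where the evaluate part comes from.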

The researchers set themselves a high threshold. One of the big issues in work such as this is the so-called "credit assignment" problem, in which one tries to understand not only who won but have some idea as to why. They took the bold approach of ignoring this issue altogether. They trained a population of different candidate neural nets and played them against each other (i.e., played the different evaluation functions against one another). The successful nets were kept and the least successful nets were replaced with mutated versions of the successful ones. As they put it, it was as if the programs only found out, after playing all their games, how many they had won.
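That selection loop is easy to caricature in a few lines. The following is my own toy version, not the authors' code: each "net" is collapsed to a single number, and a "game" between two nets is won by whichever is closer to a hidden target. The point is that selection sees only total wins, never any per-move or per-game credit.

    import random

    TARGET = 0.73                       # hidden; selection never reads it directly

    def play(a, b):
        # A 'game': the player closer to the target wins.
        return 0 if abs(a - TARGET) < abs(b - TARGET) else 1

    def generation(pop):
        wins = [0] * len(pop)
        for i in range(len(pop)):
            for j in range(i + 1, len(pop)):
                winner = (i, j)[play(pop[i], pop[j])]
                wins[winner] += 1
        # Keep the top half; replace the rest with mutated survivors.
        ranked = sorted(range(len(pop)), key=lambda k: wins[k], reverse=True)
        survivors = [pop[k] for k in ranked[: len(pop) // 2]]
        return survivors + [s + random.gauss(0, 0.1) for s in survivors]

    pop = [random.random() for _ in range(20)]
    for _ in range(50):
        pop = generation(pop)
    print(min(pop, key=lambda x: abs(x - TARGET)))   # drifts toward 0.73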

The end result of the research was not the world's best checkers program, and neither was it intended to be. For that pursuit I strongly recommend a third book, One Jump Ahead: Challenging Human Supremacy in Checkers by Jonathan Schaeffer. However, they did manage to produce a program that would beat all but the strongest human players, while keeping the amount of explicit checkers knowledge added to the program to an absolute minimum. Rather, the program was allowed to learn for itself.

At one point they did decide to change the layout of the neural net, because the first design did not even "know" that the board was 8x8: it treated the 64 inputs as unrelated. The performance, while already good, improved considerably when they added knowledge of 2x2, 3x3 and larger subblocks. However, I was disappointed that they did not do the following controlled experiment. Because the new design also increased the complexity of the neural net, it is unclear whether the improvement came from the extra complexity itself or from the extra "knowledge" embodied in the way the complexity was added. As a thought experiment, I thought it would be good if they could have evolved a population of evaluation functions for the more complex layout with one change: before each board was evaluated, the pieces would undergo a fixed random shuffling (always the same one). This would give the network the same complexity, but with the subgroupings having no real meaning.
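Sketched in code, the control I have in mind would look something like this (hypothetical, of course; the book does not run this experiment). One permutation is chosen once, with a fixed seed, and applied to every board before it reaches the net, so the net keeps its extra complexity while its spatial subgroupings stop corresponding to anything on the real board.

    import random

    rng = random.Random(42)                 # fixed seed: the same shuffle forever
    PERMUTATION = rng.sample(range(64), 64)

    def scramble(board):
        # board: a length-64 list of piece codes, as in the original design.
        return [board[p] for p in PERMUTATION]

    def scrambled_net(evaluate):
        # Wrap any evaluation function so it only ever sees shuffled boards.
        return lambda board: evaluate(scramble(board))

If the scrambled population evolved to play as well as the unscrambled one, the improvement was due to complexity; if not, it really was the spatial knowledge.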

I had a chance to meet both authors in 1999 while attending the Genetic Programming conference (which has since merged with ICGA, the International Conference on Genetic Algorithms, to form GECCO). David Fogel's father, Lawrence Fogel, one of the pioneers in the field, gave one of the keynotes.

The second book, Behind Deep Blue, basically takes the opposite approach. In this case, the explicit aim was to build in as much chess knowledge as possible while making the search as deep as possible through improved hardware. The story is a personal one, told by the author, of his experiences first as a graduate student at Carnegie Mellon and then as a staff member at IBM, Yorktown Heights. I can particularly relate, as I was at MIT during this time and later had numerous friends who were post-docs or staff at Yorktown Heights (and, in fact, I gave a talk there once).

I am a lousy chess player and a non-existent checkers player, although I do play a number of more obscure abstract games. However, I did play chess in high school and had the opportunity to see David Levy play Chess 4.7 at the Canadian National Exhibition in Toronto in 1978. The CNE made a big chess weekend of the event. There were simultaneous exhibitions played against Boris Spassky and Anatoly Karpov. In the first event, against Spassky, who was charming in person, names were drawn to fill the remaining boards after boards had been assigned to the local masters. I was, amazingly, the first name drawn, and since there was no formal dividing line between me and the master to my right, I like to think that he may have taken me for a master player for, oh, about 2 moves! The following day there was a much bigger simultaneous exhibition against Anatoly Karpov. Once again I threw my name into a much larger container and, amazingly, got picked for that too. Karpov was much quieter, although it was hard to tell how much was due to his limited proficiency in English and how much to the fact that he was surrounded by handlers.

During Levy's match there was one board surrounded by the local masters and another for the amateurs, with the local chess columnist (himself a master) acting as commentator. We had access, which Levy obviously did not, to how far ahead or behind Chess 4.7 thought it was. In one case Levy had a forced mate just a couple of moves beyond the program's look-ahead; it happily munched pawns elsewhere until the programmers resigned for it.

Also at the tournament was a 13-year-old master who had broken Fischer's record age for making master by a couple of months. He was very sociable and was the only local master who wandered over to the amateur board to share his thinking. I may be misremembering, but I thought it was the subsequent US champion Joel Benjamin (I am open to correction).

I believe the game that Levy drew may have been the one in which he tried the Latvian Counter-Gambit, played in honor of Joe Smolij, a local chess personality who would challenge anyone to a game of blitz chess and used to play what he called the "Crash Smash Gambit".

The two books make an interesting contrast, and Fogel brings up Deep Blue as an example of what he was not trying to accomplish. Technically, I find Fogel's approach the more satisfying due to its generality. However, both books are well worth reading.

In some later post I will describe some of the abstract games that most interest me. It has been a long-term goal, as a project "for fun", to develop an AI for a few of these games. I will roughly order them by what I perceive to be their relative difficulty to program.
