The man-computer chess challenge began in 1950, when mathematician Claude Shannon published "Programming a Computer for Playing Chess," the first article to lay out how a machine could play the game. In 1951, Dietrich Prinz wrote the first chess program actually to run on a computer, the Ferranti Mark 1, though it could only solve mate-in-two problems; the following year Alan Turing's paper program Turochamp played a full game, simulated by hand because no machine of the day could run it. Computers of that era were still far too slow to play chess at a competitive level.
Kaissa was one of the first programs designed specifically to play chess at a competitive level. It was developed in the Soviet Union in the early 1970s by a team at Moscow's Institute of Control Sciences that included Georgy Adelson-Velsky, Vladimir Arlazarov, and Mikhail Donskoy. The name "Kaissa" comes from Caissa, the fictional goddess of chess invented in an eighteenth-century poem by William Jones.
Kaissa combined alpha-beta depth search with a hand-tuned position evaluation function, and its authors pioneered the bitboard representation of the chessboard, which let move generation be done with fast bitwise operations. Kaissa also used a transposition table to avoid examining the same positions multiple times during its search. In 1974 Kaissa won the first World Computer Chess Championship, held in Stockholm. At that time, however, computers were still not able to compete with the best human chess players, and Kaissa was surpassed by more advanced programs in the following years; it nonetheless remains an important milestone in the history of the man-computer chess challenge.
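The core ideas here, depth search with cutoffs plus a transposition table that caches already-analyzed positions, are easiest to see on a game smaller than chess. The sketch below is not Kaissa's actual code; it is a minimal illustration of the technique on the game of Nim, where positions that differ only in heap order transpose into the same table entry:

```python
# Minimal sketch (not Kaissa's code) of depth search with a transposition
# table, illustrated on Nim so the example stays self-contained.
# A state is a tuple of heap sizes; a move removes 1..n stones from one heap.

def negamax(state, table):
    """Return +1 if the side to move wins under optimal play, else -1.
    Normal-play Nim: whoever takes the last stone wins."""
    if sum(state) == 0:
        return -1  # previous player took the last stone; side to move lost
    key = tuple(sorted(state))  # heap order doesn't matter: transposition key
    if key in table:
        return table[key]
    best = -1
    for i, heap in enumerate(state):
        for take in range(1, heap + 1):
            child = state[:i] + (heap - take,) + state[i + 1:]
            best = max(best, -negamax(child, table))
            if best == 1:
                break  # cutoff: a winning reply is already proved
        if best == 1:
            break
    table[key] = best
    return best

table = {}
print(negamax((3, 4, 5), table))  # nim-sum 3^4^5 != 0, first player wins: 1
print(negamax((1, 2, 3), table))  # nim-sum 1^2^3 == 0, first player loses: -1
```

The table turns an exponential tree walk into one visit per distinct position, which is exactly the saving a chess program gets when different move orders reach the same board.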
Chess video games of the late 1970s and 1980s were among the first video games to use artificial intelligence to simulate a human opponent. Many were developed for personal computers such as the Apple II and the IBM PC and allowed players to challenge the machine to a game. One of the first successful commercial chess programs was "Sargon," written by Dan and Kathleen Spracklen in 1978; it used depth search with alpha-beta pruning and played at a respectable amateur level. On the research side, "HiTech," the special-purpose chess hardware developed by Hans Berliner's group at Carnegie Mellon in the mid-1980s, paired fast search with a sophisticated position evaluation and in 1988 became the first machine to reach senior-master strength. The 1980s also saw chess games for home consoles such as the Atari 2600 and the Nintendo Entertainment System (NES); "The Chessmaster," first released in 1986 and later ported to the NES, was one of the best known.
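The "position evaluation" these programs relied on starts from simple material counting. The sketch below is illustrative rather than any specific program's code; the piece values are the classical textbook ones, and the board encoding is an assumption made for the example:

```python
# Minimal sketch of a static evaluation function of the kind 1980s chess
# programs used. Values and board encoding are illustrative, not Sargon's.

# Classical piece values in pawns; uppercase = White, lowercase = Black.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def evaluate(board):
    """Material balance from White's point of view.
    `board` is a list of 64 one-character strings, '.' for an empty square."""
    score = 0
    for square in board:
        if square == ".":
            continue
        value = PIECE_VALUES[square.upper()]
        score += value if square.isupper() else -value
    return score

# Toy position where White has an extra rook:
board = list("." * 64)
board[0] = "R"    # White rook
board[7] = "K"    # White king
board[63] = "k"   # Black king
print(evaluate(board))  # -> 5
```

Real evaluators of the era layered positional terms (mobility, king safety, pawn structure) on top of this material core, but the leaf nodes of the search tree were scored by functions of essentially this shape.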
Deep Blue was a chess supercomputer developed by IBM specifically to challenge world champion Garry Kasparov, and in May 1997 it became the first computer to defeat a reigning world champion in a match played under standard tournament time controls. Deep Blue combined a massively parallel architecture with custom chess chips: it used alpha-beta depth search together with a hand-tuned evaluation function, could examine up to 200 million positions per second, and used transposition tables to avoid re-examining positions already searched. It was built by a team of IBM researchers, several of whom had worked on the earlier Deep Thought project at Carnegie Mellon. Deep Blue made its debut against Kasparov in Philadelphia in 1996, winning the first game but losing the match 4-2; in the 1997 rematch in New York it prevailed 3.5-2.5.
Deep Blue and the other systems mentioned so far were not based on neural networks. Neural networks came to chess later and by a different route. In 1992 Gerald Tesauro's TD-Gammon showed that a neural network could learn a strong evaluation function for backgammon purely from self-play, using temporal-difference reinforcement learning. For chess itself, Sebastian Thrun's NeuroChess (1995) trained a neural-network evaluation function with temporal-difference learning, and in the 2000s David Fogel's Blondie25 used an evolutionary algorithm to tune a neural-network evaluator, eventually playing at master level. None of these early learning systems, however, could yet challenge the strongest conventional chess engines.
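The evolutionary idea behind systems like Blondie25 is simple to sketch: randomly perturb the evaluator's weights and keep the perturbation if it scores positions better. The toy below is not Blondie25's method (which evolved full neural networks through games of self-play); it is a minimal (1+1) evolution strategy tuning a linear evaluator against made-up reference scores:

```python
# Minimal sketch (illustrative, not Blondie25's method) of evolutionary
# tuning of an evaluation function: mutate the weights, keep mutants that
# predict a set of reference position scores better.
import random

random.seed(0)

# Toy "positions": feature vectors (material diff, mobility diff) paired
# with reference scores. All numbers here are invented for the example.
positions = [((1, 3), 1.6), ((-2, 1), -1.8), ((0, -4), -0.8), ((3, 0), 3.0)]

def loss(weights):
    """Mean squared error of the linear evaluator against the references."""
    return sum((sum(w * f for w, f in zip(weights, feats)) - target) ** 2
               for feats, target in positions) / len(positions)

weights = [0.0, 0.0]
for _ in range(2000):  # (1+1) evolution strategy: keep improving mutants
    mutant = [w + random.gauss(0, 0.1) for w in weights]
    if loss(mutant) < loss(weights):
        weights = mutant

print(round(loss(weights), 3))  # close to 0: weights near (1, 0.2) fit exactly
```

Replacing the linear evaluator with a neural network, and the reference scores with game outcomes, gives the flavor of what the real systems did at much larger scale.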
AlphaZero is a neural network-based artificial intelligence system developed by DeepMind and announced in 2017. Starting from nothing but the rules, it taught itself to play chess, shogi, and Go through self-play reinforcement learning, reaching superhuman strength in all three games. AlphaZero uses a deep neural network to evaluate positions and suggest promising moves, and combines it with Monte Carlo tree search to choose what to play: the network guides the search, and the outcomes of millions of self-play games are fed back to improve the network. Once trained, AlphaZero defeated Stockfish, at the time the strongest conventional chess engine, in a 100-game match without losing a single game. AlphaZero's achievement was a major step forward for neural network-based artificial intelligence and demonstrated the potential of the self-play approach well beyond chess.
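The way the network "guides the search" can be sketched with the PUCT selection rule that AlphaZero-style searches use to decide which move to explore next: each candidate's score is its average value so far plus an exploration bonus weighted by the network's prior and shrinking with visit count. The numbers below are invented for illustration, not taken from AlphaZero:

```python
# Minimal sketch of the PUCT selection rule used in AlphaZero-style
# Monte Carlo tree search. All move statistics here are made up.
import math

def puct_score(q, prior, parent_visits, child_visits, c_puct=1.5):
    """Mean value so far (exploitation) plus a prior-weighted exploration
    bonus that shrinks as the move accumulates visits."""
    exploration = c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)
    return q + exploration

# Three candidate moves: (mean value q, network prior, visit count).
moves = {
    "e4": (0.10, 0.50, 30),
    "d4": (0.12, 0.30, 20),
    "g4": (0.05, 0.05, 1),
}
parent_visits = sum(v for _, _, v in moves.values())

best = max(moves, key=lambda m: puct_score(moves[m][0], moves[m][1],
                                           parent_visits, moves[m][2]))
print(best)  # -> g4: the barely-visited move wins on exploration bonus
```

The search repeats this selection down the tree, so well-regarded moves get deep analysis while neglected ones are periodically revisited; the visit counts that accumulate at the root then become the move probabilities used as training targets.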
All images and all text in this blog were created by artificial intelligences