Recently, a friend of mine from Turkey, who is writing his Ph.D. thesis on a chess theme, asked me how many types of chess players there are. Since I did not know precisely what to answer, I decided to publish his study, so that anyone interested in this subject can answer him and discuss the topic. Here it comes:
From Necati Alp ERİLLİ¹
¹ Statistics Department, Faculty of Science, 19 May University, Samsun
E-mail address: aerilli@omu.edu.tr
ABSTRACT – Chess is the most popular brain game in the world. Since it has been played for centuries, the same questions typically come up: "Who is the strongest player in the world?", "Can you beat me?", "What's your style?" It is hard to answer these types of questions objectively. Every chess game has an ECO code. These codes help players prepare for opponents or improve themselves. We classify chess players according to their styles by using ECO codes. The codes are labelled with the capital letters A to E and numbers from 00 to 99, so in general there are 500 different types of chess openings. Some openings are aggressive and some are defensive. The codes themselves are crisp data, but the results can be in crisp or fuzzy form. By using fuzzy clustering analysis we can classify players into three groups: aggressive players, defensive players or positional players. The results have been tested on some strong players and some amateur players. All of this shows that fuzzy systems can be used in such complicated problems.
Keywords: Fuzzy Clustering, ECO Codes, Chess, Fuzzy Data
I. INTRODUCTION
Chess is the most popular brain game in the world. It is a board game played between two players. It is played on a chessboard, which is a square-checkered board with 64 squares arranged in an eight-by-eight grid. At the start, each player controls sixteen pieces: one king, one queen, two rooks, two knights, two bishops, and eight pawns. The object of the game is to checkmate the opponent's king, whereby the king is under immediate attack (in "check") and there is no way to remove or defend it from attack on the next move.
In recent years chess has become more popular than in previous centuries, with the help of FIDE (the World Chess Federation), chess lessons in schools and the World Championship games. Chess not only develops memory, logical thinking and capability, but also improves concentration and teaches independence [1]. Chess has long been considered a way for children to increase their mental prowess, concentration, memory and analytical skills. To anyone who knows the game, it comes as no surprise that these assumptions were actually confirmed in several studies on how chess can improve the grades of students [2].
The main problem for chess players is their rating. For ranking players, a mathematical system called ELO was introduced by the Hungarian mathematician Dr. Arpad Elo, and it has been in use since 1970. The World Chess Federation publishes an ELO list every three months. In this list, players are ordered by the Elo points they have earned in tournaments, so players can learn their world or country ranking from it.
Another point for chess players is their position among all other players. It is hard to decide which category a player belongs to. "Good player", "bad player" or "defensive player" are all linguistic expressions; they are subjective and can change from person to person.
In this article we try to classify players as defensive, aggressive or positional players. These names were decided jointly with the chess players around us.
We use ECO codes to classify players. Every chess game has an opening code called an ECO code. Every move has a typical task in the opening. Some openings end after 6 or 7 moves and some after 20 or 25 moves. Some of them are called openings and some are called defences in their individual names. ECO codes are labelled with the capital letters A to E and numbers from 00 to 99, so in general there are 500 different types of chess openings. These codes are in crisp data form, but we use them as fuzzy data for estimation.
We classify players by looking at their game scores and the ECO codes of their games. After a game finishes, a player gets 1 point for a victory, 0.5 points for a draw and 0 points for a loss. By using these codes and the scores taken in games, we can put players into the classes determined at the beginning, as in the small sketch below.
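The paper gives no code, but this scoring step is easy to illustrate. The following minimal Python sketch groups a player's results by ECO code and averages them; the game list and the grouping by letter plus first digit are illustrative assumptions, not data or conventions from the paper.

```python
from collections import defaultdict

# Illustrative game records: (ECO code, result for the player), 1 = win, 0.5 = draw, 0 = loss
games = [
    ("B33", 1), ("B33", 0.5), ("C45", 0),
    ("B44", 1), ("C45", 1), ("B33", 1),
]

scores = defaultdict(list)
for eco, result in games:
    scores[eco[:2]].append(result)  # group by ECO "family": letter plus first digit, e.g. B33 -> B3

for family, results in sorted(scores.items()):
    print(family, "games:", len(results), "average score:", sum(results) / len(results))
```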
With the help of fuzzy clustering analysis we can classify players into the groups determined at the beginning. It is one of the common techniques among statistical classification methods.
II. FUZZY CLUSTERING
Cluster analysis is a method for partitioning a data set into groups of similar objects. It is an approach to unsupervised learning and also one of the major techniques in pattern recognition [3]. Hard clustering methods assign each point of the data set to exactly one cluster. Zadeh [6] proposed fuzzy sets, which support the idea of partial membership described by a membership function. Since then, many fuzzy clustering methods have been studied [4,5,7,8,14].
Fuzzy Cluster Analysis of Crisp Data
This approach is appropriate when the clusters cannot be separated from each other distinctly or when the membership of some units is uncertain. A fuzzy cluster assigns each unit a value between 0 and 1, which is interpreted as the membership of the unit in the cluster. Units that are very similar to each other take their places in the same cluster according to their membership degrees.
Similar to other clustering methods, fuzzy clustering is based on distance measurements as well. The structure of the clusters and the algorithm used determine which distance criterion is applied. Some of the convenient characteristics of fuzzy clustering can be given as follows [10]:
i. It provides membership values which are convenient to comment on.
ii. It is flexible on the usage of distance.
iii. When some of the membership values are known, they can be combined with numeric optimization.
The advantage of fuzzy clustering over classical clustering methods is that it provides more detailed information on the data. On the other hand, it has disadvantages as well. Since there will be too much output when there are too many individuals and clusters, it is difficult to summarize and classify the data. Moreover, fuzzy clustering algorithms, which are used when there is uncertainty, are generally complicated [13].
In the fuzzy clustering literature, the fuzzy c-means (FCM) clustering algorithm is the most well-known and frequently used method. FCM is a clustering method which allows one piece of data to belong to two or more clusters. This method, developed by Dunn [9] and improved by Bezdek [11], is frequently used in pattern recognition. It uses the Euclidean distance between data points and cluster centers:
d_{ik} = d(x_i, v_k) = \left[ \sum_{j=1}^{p} (x_{ij} - v_{kj})^2 \right]^{1/2}    (1)
It is based on minimization of the following objective function:
J(U, V) = \sum_{i=1}^{n} \sum_{k=1}^{c} u_{ik}^{m} \left\| x_i - v_k \right\|^2    (2)
Here, m is any real number greater than 1, u_ik is the degree of membership of x_i in cluster k, x_i is the i-th p-dimensional measured data point, v_k is the p-dimensional center of cluster k, and ||·|| is any norm expressing the similarity between a measured data point and a center.
Fuzzy partitioning is carried out through an iterative optimization of the objective function shown above, with the memberships u_ik updated as:
u_{ik} = \left[ \sum_{j=1}^{c} \left( d_{ik} / d_{ij} \right)^{2/(m-1)} \right]^{-1}    (3)
and the cluster centers v_k by:
v_k = \frac{\sum_{i=1}^{n} u_{ik}^{m} x_i}{\sum_{i=1}^{n} u_{ik}^{m}}, \qquad 1 \le k \le c, \; 1 \le i \le n    (4)
Equations (3) and (4) constitute an iterative optimization procedure. The goal is to iteratively improve a sequence of fuzzy partitions until no further improvement in J_m is possible.
The FCM algorithm is executed in the following steps:
Step 1: Initialize the following values: the number of clusters c, the fuzziness value m, the termination criterion (threshold) ε and the membership matrix U. Here ε takes a value between 0 and 1.
Step 2: Calculate the fuzzy cluster centroids v_k for k = 1, 2, …, c using (4).
Step 3: Employ (3) to update fuzzy membership u_ik.
Step 4: If the improvement in J_m is less than the threshold ε, then halt; otherwise go to Step 2. A compact sketch of this iteration is given below.
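The four steps above translate directly into a short NumPy routine. This is a minimal sketch rather than the paper's implementation; the random initialization, parameter defaults and the two-blob test data are illustrative choices.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, eps=1e-5, max_iter=100, seed=0):
    """X: (n, p) crisp data; c: number of clusters; m: fuzziness (> 1)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Step 1: random membership matrix U whose rows sum to 1
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)
    J_old = np.inf
    for _ in range(max_iter):
        Um = U ** m
        # Step 2: cluster centres, equation (4)
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Euclidean distances between points and centres, equation (1)
        D = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2)
        D = np.fmax(D, 1e-12)  # guard against division by zero
        # Step 3: membership update, equation (3)
        U = 1.0 / ((D[:, :, None] / D[:, None, :]) ** (2.0 / (m - 1))).sum(axis=2)
        # Step 4: stop once the objective (2) improves by less than eps
        J = np.sum((U ** m) * D ** 2)
        if abs(J_old - J) < eps:
            break
        J_old = J
    return U, V

# Two well-separated Gaussian blobs as illustrative test data
X = np.vstack([np.random.randn(30, 2), np.random.randn(30, 2) + 4.0])
U, V = fuzzy_c_means(X, c=2)
print("cluster centres:\n", np.round(V, 2))
```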
Fuzzy Cluster Analysis of Fuzzy Data
Cluster analysis was the first statistical area that lent itself to a fuzzy treatment. The fundamental justification lies in recognizing the vague nature of the cluster assignment task. For this reason, in the last three decades, many fuzzy clustering models for crisp data have been suggested, as mentioned above.
The fuzzy clustering of fuzzy data has been studied by different authors [20]. Sato and Sato [12] suggested a fuzzy clustering procedure for interactive fuzzy vectors. Yang and Ko [15] proposed a clustering model called the "double fuzzy k-numbers clustering model", which deals with a single fuzzy variable observed on l units. The "fuzzy k-means clustering model for conical fuzzy vectors" proposed by Yang and Liu [16] is applicable to multi-dimensional fuzzy variables observed on l units. Yang et al. [17] proposed a clustering model called the "fuzzy k-means clustering model for mixed data" to classify mixed data such as symbolic data or LR-II type data. Hung and Yang [18] proposed a clustering model called the "alternative double fuzzy k-means clustering model" to classify units; the authors used an exponential-type distance for LR fuzzy numbers based on the idea of Wu and Yang [19] and discussed the robustness of this distance.
III. DOUBLE FUZZY K-NUMBERS CLUSTERING MODEL
This clustering model was proposed by Yang and Ko [15]. It is assumed that the membership functions of the fuzzy variables belong to the LR family and that the univariate fuzzy data are represented by W_i = (m_{W_i}, α_{W_i}, β_{W_i})_{LR}. Here m_{W_i} is called the mean value of W_i, and α_{W_i} and β_{W_i} are called the left and right spreads, respectively.
The authors suggested a distance measure for each pair of fuzzy numbers X_j and W_i as follows:
d_{LR}^{2}(X_j, W_i) = \frac{1}{3} \left\{ (m_{X_j} - m_{W_i})^2 + \left[ (m_{X_j} - \alpha_{X_j}) - (m_{W_i} - \alpha_{W_i}) \right]^2 + \left[ (m_{X_j} + \beta_{X_j}) - (m_{W_i} + \beta_{W_i}) \right]^2 \right\}    (5)
The objective function is given as follows:
J_{FCN}(\mu, W) = \sum_{j=1}^{n} \sum_{i=1}^{c} \mu_i^{m}(X_j) \, d_{LR}^{2}(X_j, W_i)    (6)
Here m > 1 is the index of fuzziness, μ = (μ_1, …, μ_c) is a fuzzy c-partition and the W_i = (m_{W_i}, α_{W_i}, β_{W_i})_{LR} are fuzzy c-numbers of LR type. The necessary conditions for a minimizer (μ, W) of J_FCN are the following update equations:
m_{W_i} = \frac{\sum_{j=1}^{n} \mu_i^{m}(X_j) \left[ 3 m_{X_j} + (\alpha_{W_i} - \alpha_{X_j}) + (\beta_{X_j} - \beta_{W_i}) \right]}{3 \sum_{j=1}^{n} \mu_i^{m}(X_j)}, \qquad i = 1, \dots, c    (7)
\alpha_{W_i} = \frac{\sum_{j=1}^{n} \mu_i^{m}(X_j) \left[ m_{W_i} - m_{X_j} + \alpha_{X_j} \right]}{l \sum_{j=1}^{n} \mu_i^{m}(X_j)}, \qquad i = 1, \dots, c    (8)
\beta_{W_i} = \frac{\sum_{j=1}^{n} \mu_i^{m}(X_j) \left[ m_{X_j} - m_{W_i} + \beta_{X_j} \right]}{r \sum_{j=1}^{n} \mu_i^{m}(X_j)}, \qquad i = 1, \dots, c    (9)
and
\mu_i(X_j) = \frac{\left[ 1 / d_{LR}^{2}(X_j, W_i) \right]^{1/(m-1)}}{\sum_{k=1}^{c} \left[ 1 / d_{LR}^{2}(X_j, W_k) \right]^{1/(m-1)}}, \qquad i = 1, \dots, c; \; j = 1, \dots, n    (10)
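As with FCM, the update equations (5)–(10) can be sketched in a few lines of NumPy. The sketch below assumes triangular fuzzy numbers, so the shape constants l and r in (8) and (9) are taken as 1; the initialization, stopping rule and the alternating order of the updates are illustrative choices, not necessarily the paper's exact procedure.

```python
import numpy as np

def d_lr_sq(X, W):
    """Squared distance (5) between LR triples X (n, 3) and prototypes W (c, 3)."""
    mX, aX, bX = X[:, None, 0], X[:, None, 1], X[:, None, 2]
    mW, aW, bW = W[None, :, 0], W[None, :, 1], W[None, :, 2]
    return ((mX - mW) ** 2
            + ((mX - aX) - (mW - aW)) ** 2
            + ((mX + bX) - (mW + bW)) ** 2) / 3.0

def dfkc(X, c, m=2.0, eps=1e-6, max_iter=100, seed=0):
    """X: (n, 3) rows of (mean, left spread, right spread); returns memberships U and prototypes W."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    W = X[rng.choice(n, size=c, replace=False)].astype(float)  # prototypes initialised from the data
    for _ in range(max_iter):
        D = np.fmax(d_lr_sq(X, W), 1e-12)                      # (n, c)
        inv = (1.0 / D) ** (1.0 / (m - 1))
        U = inv / inv.sum(axis=1, keepdims=True)               # membership update, equation (10)
        Um = U ** m
        W_new = np.empty_like(W)
        for i in range(c):                                     # prototype updates (7)-(9) with l = r = 1
            w, s = Um[:, i], Um[:, i].sum()
            W_new[i, 0] = ((w @ (3.0 * X[:, 0] - X[:, 1] + X[:, 2]))
                           + s * (W[i, 1] - W[i, 2])) / (3.0 * s)
            W_new[i, 1] = (w @ (W_new[i, 0] - X[:, 0] + X[:, 1])) / s
            W_new[i, 2] = (w @ (X[:, 0] - W_new[i, 0] + X[:, 2])) / s
        if np.max(np.abs(W_new - W)) < eps:
            W = W_new
            break
        W = W_new
    return U, W

# Illustrative call: the first three triples are taken from Table 1 below,
# the last three are made-up lower scores so that three groups can form.
X = np.array([[2.33, 1.33, 0.01], [3.00, 0.01, 0.01], [2.50, 1.00, 0.01],
              [1.20, 0.80, 0.90], [0.60, 0.10, 1.20], [1.55, 1.40, 1.30]])
U, W = dfkc(X, c=3)
print(np.round(W, 2))
```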
IV. APPLICATION
By using DFKC we try to classify chess players into clusters according to their style.
There are many chess styles according to chess players, from amateurs to top players, and there can be many answers to the question of chess style. We simply categorized players into three clusters: defensive players, aggressive players and positional players. Defensive players generally use defensive openings with either colour; their scores depend on their defensive power during games. Aggressive players use sharp openings with both white and black; they use open openings to try to win the game immediately. Positional players choose their openings according to their opponents: sometimes they use sharp openings and sometimes defensive openings to keep the game safe.
Authorities categorize chess openings into six sub-categories. This classification is not official, but many people accept it in general use. We can classify openings into six clusters: Open (starts with 1.e4 e5), Closed (starts with 1.d4 d5), Semi-Open (1.e4 without 1…e5), Indian Systems (1.d4 Nf6), Wing Systems (generally starting with the a-, b-, c- or f-pawns) and Other openings (not grouped with the previous ones). A rough sketch of this grouping is given below.
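For illustration only, the six-way grouping above can be coded on the first moves rather than on ECO ranges; the function name, the move-string notation and the test calls below are hypothetical, not part of the paper.

```python
def opening_group(first_moves: str) -> str:
    """Map a game's first moves (e.g. "e4 c5") to one of the six opening groups."""
    moves = first_moves.split()
    if moves[:2] == ["e4", "e5"]:
        return "Open"
    if moves[:2] == ["d4", "d5"]:
        return "Closed"
    if moves[:1] == ["e4"]:
        return "Semi-Open"
    if moves[:2] == ["d4", "Nf6"]:
        return "Indian Systems"
    if moves and moves[0][0] in ("a", "b", "c", "f"):
        return "Wing Systems"
    return "Other"

print(opening_group("e4 c5"))    # Semi-Open (e.g. the Sicilian Defence)
print(opening_group("d4 Nf6"))   # Indian Systems
print(opening_group("f4 d5"))    # Wing Systems (Bird's Opening)
```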
As we stated, every chess game has an ECO code. This simple code tells us which category the opening belongs to, its name and its sub-level.
For example, B20 to B99 are named the Sicilian Defence, but in detail B33 is the Sveshnikov Variation, B44 the Taimanov System, B50 the Kopec Variation, and so on.
Numerical Example
Our first player is the former World Chess Champion Garry Kasparov. The results were taken from the best games of his career [21]. First, Kasparov's results were listed according to their ECO codes. Then every sub-ECO code group was represented as a fuzzy number; for example, A0 was represented by the LR-type fuzzy number (2.33, 1.33, 0.01), where the numbers are the mean, left spread and right spread, respectively. Every player has games with both white and black, so we calculated results for white and black separately. For example, Kasparov's results with white in the A and B openings are given below:
Table 1 - Fuzzy numbers for Kasparov's white results in the A and B openings

ECO   Mean   Left   Right    ECO   Mean   Left   Right
A0    2.33   1.33   0.01     B0    3.00   0.01   0.01
A1    2.60   0.80   0.01     B1    2.50   1.00   0.01
A2    2.28   0.85   0.14     B2    –      –      –
A3    2.50   1.00   0.01     B3    2.36   0.91   0.91
A4    3.00   0.01   0.01     B4    2.36   0.91   0.91
A5    2.33   1.33   0.01     B5    2.00   2.00   0.01
A6    3.00   0.01   0.01     B6    2.28   0.28   0.28
A7    3.00   0.01   0.01     B7    2.66   0.66   0.01
A8    2.00   2.00   0.01     B8    2.63   0.72   0.01
A9    3.00   0.01   0.01     B9    2.42   1.14   0.01
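The paper does not spell out exactly how the triples in Table 1 were computed. The sketch below shows one plausible construction, assuming each game score is rescaled to the 0–3 range and the spreads are taken from the worst and best results within an ECO family; these scaling choices are assumptions of the sketch, not the paper's stated method.

```python
from collections import defaultdict

def fuzzy_triples(games):
    """games: iterable of (eco_code, score) pairs with score in {0, 0.5, 1}."""
    by_family = defaultdict(list)
    for eco, score in games:
        by_family[eco[:2]].append(3.0 * score)  # rescale each game score to the 0-3 range (assumption)
    triples = {}
    for family, vals in by_family.items():
        mean = sum(vals) / len(vals)
        left = max(0.01, mean - min(vals))      # left spread: distance down to the worst result (assumption)
        right = max(0.01, max(vals) - mean)     # right spread: distance up to the best result (assumption)
        triples[family] = (round(mean, 2), round(left, 2), round(right, 2))
    return triples

print(fuzzy_triples([("A04", 1), ("A04", 1), ("B33", 0.5), ("B33", 1)]))
# e.g. {'A0': (3.0, 0.01, 0.01), 'B3': (2.25, 0.75, 0.75)}
```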
Figure 1 - Separation plot for the white results
If we look at Figure 1, we can see that the cluster number is one for the white results. The black results are nearly the same. There is only one outlier in the figure, but that opening was played only twice, so it is not necessary to include it in the clustering.
Figure 2 - The three styles for a particular criterion
We know that fuzzy sets are suited to describing ambiguity and imprecision in natural language and we may thus define these terms using triangular fuzzy numbers as follows:
X_Defensive = (0, 0, 1.5);  X_Positional = (1.5, 1.5, 1.5);  X_Aggressive = (3, 1.5, 0)
These representations are shown in Figure 2.
According to Kasparov's game results, nearly every opening score falls between 2 and 3. His average with the white pieces is (2.43, 0.71, 0.06) and with the black pieces (2.42, 0.81, 0.21). His scores correspond to the aggressive class in almost all openings. As a result we can safely say that Garry Kasparov is an aggressive player with both the white and the black pieces.
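As a minimal cross-check of this verdict, the distance from equation (5) can be used to find which of the three style prototypes lies nearest to a player's average triple. This nearest-prototype rule is a simplification for illustration, not the paper's clustering procedure.

```python
def d_lr_sq(x, w):
    """Squared distance (5) between two LR triples (mean, left spread, right spread)."""
    mx, ax, bx = x
    mw, aw, bw = w
    return ((mx - mw) ** 2
            + ((mx - ax) - (mw - aw)) ** 2
            + ((mx + bx) - (mw + bw)) ** 2) / 3.0

# Triangular prototypes for the three styles, as defined above
styles = {
    "Defensive":  (0.0, 0.0, 1.5),
    "Positional": (1.5, 1.5, 1.5),
    "Aggressive": (3.0, 1.5, 0.0),
}

kasparov_white = (2.43, 0.71, 0.06)  # Kasparov's white average quoted above
print(min(styles, key=lambda s: d_lr_sq(kasparov_white, styles[s])))  # -> Aggressive
```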
The top players of world chess belong to the aggressive class, and this result is expected. The new world chess champion V. Anand's average is (2.41, 0.70, 0.06) with white and (2.40, 0.99, 0.06) with black, and the former world champion A. Karpov's average is (2.426, 0.77, 0.05) with white and (2.39, 0.88, 0.066) with black. We can say that both were aggressive during their careers.
Here are my scores from my own career. I played 397 games that counted towards official tournaments. The results showed that I am an aggressive player with the white pieces and a positional player with the black pieces. In general I am between an aggressive and a positional player, but closer to the aggressive side. I have to add, however, that the average strength of my opponents is not the same as that of the top players.
Table 2 - My scores by opening cluster

WHITE                          BLACK
Opening          Style         Opening          Style
Bird             Positional    Sokolsky         Aggressive
English          Aggressive    Queen's Indian   Aggressive
Queen's Pawn     Aggressive    Budapest         Positional
Dutch            Positional    Pirc             Defensive
Alekhine         Defensive     Sicilian         Positional
Pirc             Positional    French           Aggressive
Sicilian         Positional    Two Knights      Positional
French           Aggressive    Queen's Pawn     Aggressive
Philidor         Aggressive    Gruenfeld        Positional
Petroff          Aggressive    King's Indian    Positional
Ponziani         Aggressive    Spanish          Positional
Two Knights      Aggressive
Spanish          Positional
Queen's Gambit   Aggressive
Queen's Indian   Aggressive
My mean values are (2.24, 0.87, 0.78) with white and (1.91, 1.01, 1.28) with black.
V. DISCUSSION
There are many methods for the fuzzy clustering of fuzzy data, but chess games or chess data have never been used in this kind of paper before. We tried to show that this type of irregular data can be used in clustering algorithms. The use of such data shows that many linguistic expressions can be handled with clustering methods.
VI. REFERENCES
[1] Ferguson R.C., (2006). Teacher's Guide: Research and Benefits of Chess. (www.quadcitychess.com)
[2] Ippolito D.J., (2006). The Benefits of Chess in Education. (www.deanofchess.com/benefits.htm)
[3] Rencher A.C., (2002). Methods of Multivariate Analysis. John Wiley & Sons Inc., UK.
[4] Bezdek J.C., (1981). Pattern Recognition with Fuzzy Objective Function Algorithms. Plenum Press, New York.
[5] Dave R.N., (1992). Generalized Fuzzy C-Shells Clustering and Detection of Circular and Elliptical Boundaries. Pattern Recognition, 25(7), 713-721.
[6] Zadeh L.A., (1965). Fuzzy Sets. Information and Control, 8, 338-353.
[7] Gath I., Geva A.B., (1989). Unsupervised Optimal Fuzzy Clustering. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11, 773-781.
[8] Höppner F., Klawonn F., Kruse R., Runkler T., (1999). Fuzzy Cluster Analysis: Methods for Classification, Data Analysis and Image Recognition. Wiley, New York.
[9] Dunn J.C., (1974). A Fuzzy Relative of the ISODATA Process and Its Use in Detecting Compact Well-Separated Clusters. Journal of Cybernetics, 3, 32-57.
[10] Naes T., Mevik T.H., (1999). The Flexibility of Fuzzy Clustering Illustrated by Examples. Journal of Chemometrics.
[11] Bezdek J.C., (1981). Pattern Recognition with Fuzzy Objective Function Algorithms. Plenum Press, New York.
[12] Sato and Sato, (1995). Fuzzy Clustering Model for Fuzzy Data. Proceedings of IEEE, 2123-2128.
[13] Oliveira J.V., Pedrycz W., (2007). Advances in Fuzzy Clustering and Its Applications. John Wiley & Sons Inc., West Sussex, England.
[14] Fukuyama Y., Sugeno M., (1989). A New Method of Choosing the Number of Clusters for the Fuzzy C-Means Method. Proceedings of the 5th Fuzzy Systems Symposium, pp. 247-250.
[15] Yang M.S., Ko C.H., (1996). On a Class of Fuzzy C-Numbers Clustering Procedures for Fuzzy Data. Fuzzy Sets and Systems, 84, 49-60.
[16] Yang M.S., Liu H.H., (1999). Fuzzy Clustering Procedures for Conical Fuzzy Vector Data. Fuzzy Sets and Systems, 106, 189-200.
[17] Yang M.S., Hwang P.Y., Chen D.H., (2004). Fuzzy Clustering Algorithms for Mixed Feature Variables. Fuzzy Sets and Systems, 141, 301-317.
[18] Hung W.L., Yang M.S., (2005). Fuzzy Clustering on LR-Type Fuzzy Numbers with an Application in Taiwanese Tea Evaluation. Fuzzy Sets and Systems, 150, 561-577.
[19] Wu K.L., Yang M.S., (2002). Alternative C-Means Clustering Algorithms. Pattern Recognition, 35, 2267-2278.
[20] Oliveira J.V., Pedrycz W., (2007). Advances in Fuzzy Clustering and Its Applications. John Wiley & Sons Ltd., West Sussex, England.
[21] Sahovski Informator 76, VI-IX (1999).