Monday, October 31, 2011
The Sandpile Model
Becca's Introduction (Finally)
Almost every one of us has played some sort of game, and we have all experienced a win or a loss along the way. Have you ever played against someone who seems simply unable to lose? It may be that this person has studied Combinatorial Game Theory at some point in his or her life.
This branch of applied mathematics studies a particular type of game, typically involving two players, where it is assumed that both players know everything there is to know about how to play. Game Theory then becomes a means of determining a winning strategy. In other words, Game Theory takes on the task of finding how to beat even the best player (Demaine & Hearn).
In games studied in game theory, several distinctions are commonly made. First, under normal play, the winner of a game is the last player to make a legal move. In what game theorists call misère play, however, the goal is reversed: the loser is the last player to make a legal move (Demaine & Hearn). In addition, there is a clear distinction between impartial and partisan games. A game is impartial when the same moves are available to each player, and partisan when game play differs based on the perspective of the players (i.e., chess is a partisan game because each player controls his own pieces and so faces different play opportunities at each turn) (Demaine & Hearn).
One particularly famous and well-studied theorem in the field of Combinatorial Game Theory is the Sprague-Grundy Theorem. It states that every finite impartial game under normal play “is equivalent to an instance of the game Nim” (Demaine & Hearn). To better understand this, let us look at the game of Nim itself.
Nim, under normal play, is an impartial game played by two players, A and B. Say there are three piles of objects, each containing an arbitrary number of objects. Play begins and Player A chooses one pile from which to remove objects; he or she may remove anywhere from a single object up to the entire pile. The only requirements are that objects be removed from exactly one pile in a single turn and that at least one object be taken. After Player A removes some objects, it is Player B’s turn under the same conditions of play. The player with the last legal move wins. In other words, the player who removes the last object is declared the winner (Bouton). As mentioned earlier, under misère play, the player who takes the last object is instead declared the loser of the game.
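To make the rules concrete, here is a small sketch (my own, not from any of the cited papers) that classifies a Nim position by brute-force search of the game tree: a position is a win for the player to move exactly when some legal move leads to a position that is a loss for the opponent.

```python
from functools import lru_cache

# Classify a Nim position as a win for the next player ("N") or the
# previous player ("P") under normal play, by exhaustive search.
@lru_cache(maxsize=None)
def is_n_position(heaps):
    # heaps is a sorted tuple of pile sizes; the empty position is a
    # P-position (the player to move has no legal move and so loses).
    for i, h in enumerate(heaps):
        for take in range(1, h + 1):
            new = list(heaps)
            new[i] -= take
            child = tuple(sorted(n for n in new if n > 0))
            if not is_n_position(child):
                return True  # a move to a P-position exists
    return False

print(is_n_position((1, 2, 3)))  # False: the next player loses with best play
print(is_n_position((1, 2, 4)))  # True: the next player can force a win
```

This search is only feasible for tiny piles, which is exactly why Bouton's closed-form solution, discussed below, is so remarkable.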
Technical Analysis of Combinatorial Take Away Games
There are numerous methods by which game theorists approach the analysis of combinatorial games. Primarily, we will be dealing with two-player, impartial take-away games. The terms and techniques vital to the exploration of these games will be discussed, leading up to an explanation of the Sprague-Grundy Theorem.
We first define some terms necessary for entering the field of combinatorial game theory. An impartial game is aptly named: it refers to a game in which both players have the same moves available to them (in other words, the only difference between the players is that one makes the first move). Furthermore, we will be studying combinatorial games (between two players) with perfect information, meaning that all information related to game-play is readily available to both players. The success of one player in a mathematical game is thereby dependent on the choices of the other. Typically, these games terminate after a finite number of moves (though we may also look at variations whose rules have been tweaked to make them loopy, meaning that positions can be returned to over the course of the game) (Demaine). Essentially, what makes combinatorial games with these particular characteristics so fascinating is that the element of chance we associate with our typical connotation of a game is reduced to choosing the player who takes the first move. Hence, they are purely strategic in nature.
With these characteristics in mind (finite, impartial, alternating, perfect information), as well as the essential notion that game theory analysis is based on how players respond to each other, we can use a rooted tree to map out the moves of a game (Demaine). There is a plethora of ways in which these positions, or nodes, can be studied. A particularly useful technique for discovering the optimal strategy is to work backwards through the game-tree. That is, if a tree can be used to map out all of the outcomes of a given game, rational players can determine the best moves with respect to whatever position they find themselves in. In this sense, we can use this method to trace a path through a game and determine its winner (assuming optimal play) from the outset. Hence, we find it useful to characterize games in terms of who should win with optimal play. There are a variety of terms game theorists have coined, but we will refer to a game in which the next player to move has the possibility to win as an “N-game,” and a game in which every available move allows a winning possibility to the opponent as a “P-game” (Conway). Furthermore, each node of the tree, or position in the game, can be characterized as “N” or “P,” in that these positions are really sub-games themselves. We refer to “N” and “P” as the “characteristic” of a game (and henceforth we will not use the description in an ambiguous manner).
There are certain characteristics that are immediately evident about N-games and P-games. If an N-game means that there is a possibility to win, it follows that from every N-position there exists a move that puts the opponent into a P-position. Similarly, since every P-game means that, under optimal play, the next player to move will lose, every move made from a P-position puts the next player in an N-position (Conway). These simple notions become crucial elements in setting up proofs for comparisons between games.
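These two defining properties can be checked mechanically. The sketch below (my own illustration, using a hypothetical take-away game where a move removes 1, 2, or 3 objects from a single pile) computes each position's characteristic by backward induction and then verifies both properties.

```python
def characteristic(n, memo={}):
    """'N' if the player to move from a pile of n can force a win, else 'P'."""
    if n not in memo:
        children = [n - k for k in (1, 2, 3) if k <= n]
        # N iff some move reaches a P-position; no moves at all means P.
        memo[n] = "N" if any(characteristic(c) == "P" for c in children) else "P"
    return memo[n]

# Every N-position has a move into a P-position, and every move from a
# P-position lands in an N-position:
for n in range(1, 50):
    children = [n - k for k in (1, 2, 3) if k <= n]
    if characteristic(n) == "P":
        assert all(characteristic(c) == "N" for c in children)
    else:
        assert any(characteristic(c) == "P" for c in children)

print([n for n in range(13) if characteristic(n) == "P"])  # [0, 4, 8, 12]
```

For this particular game the P-positions turn out to be the multiples of 4, a pattern we will meet again when modular strategies are discussed.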
We have briefly touched on the concept of combining games for the purposes of comparison. Specifically, we mentioned that adding two games together can be done “disjunctively,” where a player has the option to play a move in either game, or “conjunctively,” where a player must play a move in both games on each turn. We will set aside conjunctive summation for possible use in analyzing variations of games and focus on the disjunctive sum, as it is this notion that is used in determining equality of games. We say that two games A and B are equivalent if and only if A+C has the same characteristic as B+C for any third game C. Logically we can also claim that two games A and B are not equivalent if there exists a third game C such that A+C does not have the same characteristic as B+C. We can verify the equivalence relation by showing the reflexive, symmetric, and transitive axioms of equivalent games A and B.
The reflexive axiom is fairly straightforward, as a game will naturally have the same characteristic as itself. For the symmetric axiom, we need to show that if a game A is equivalent to a game B, then B is equivalent to A. By our definition, A is equivalent to B if and only if A+C and B+C have the same characteristic for any game C. Since A+C and B+C have the same characteristic for any game C, by a simple change in ordering, we see that B+C and A+C have the same characteristic for any game C, and hence, B is equivalent to A. For the transitive axiom, we need to show that if a game A is equivalent to a game B, and B is equivalent to a game C, then A is equivalent to C. By definition, A+D has the same characteristic as B+D for any game D, and likewise B+D has the same characteristic as C+D for any game D. Chaining these two facts together, A+D has the same characteristic as C+D for any game D, which is precisely the statement that A is equivalent to C. Since we have shown all three axioms, we can conclude that this relation between games does in fact represent an equivalence relation.
We now proceed to investigate the various disjunctive sums of P- and N-games, which will provide a powerful tool for proving equivalences of games. We want to see whether there is a conclusive characteristic attached to the sum of two P-games. Examples seem to indicate that the result will always be a P-game itself, and we will verify this with a proof by induction on the number of moves in the sum of the two games. The general statement we wish to show is that the sum of any two P-games is a P-game. Our base case is simply the sum of the zero game and itself; naturally, this sum is a P-game since there are no moves. Our induction hypothesis states that for all P-games A and B such that A+B has at most n moves, A+B is a P-game. We need to show that for any two P-games C and D where C+D has at most n+1 moves, C+D is a P-game.
Consider any first move in C+D; it must be made in either C or D, say C, leaving a position C′+D where C′ is an N-game (since any move in a P-game results in an N-game). By the definition of N-games, there is a responding move in C′ that returns it to a P-game C′′. The position C′′+D is then a sum of two P-games with at most n moves, so by our induction hypothesis it is a P-game. Thus every move from C+D can be answered with a move into a P-position, which is exactly what it means for C+D to be a P-game. We have not only shown that the sum of two P-games results in a P-game, but our proof (the step that took us from the sum of an N-game and a P-game back to the sum of two P-games) contains a corollary: the sum of an N-game and a P-game is necessarily an N-game. The last combination, the sum of two N-games, is inconclusive, which can be verified by simple examples.
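These sum rules can be confirmed empirically. The sketch below (again using the illustrative 1-2-3 take-away game, which is an assumption of mine rather than a game from the sources) classifies single piles and disjunctive sums of two piles, then checks that P+P is always P and that P+N is always N.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def char_single(n):
    """Characteristic of a single pile in the 1-2-3 take-away game."""
    return "N" if any(char_single(n - k) == "P" for k in (1, 2, 3) if k <= n) else "P"

@lru_cache(maxsize=None)
def char_sum(a, b):
    """Characteristic of the disjunctive sum: move in either component."""
    moves = [(a - k, b) for k in (1, 2, 3) if k <= a] + \
            [(a, b - k) for k in (1, 2, 3) if k <= b]
    return "N" if any(char_sum(x, y) == "P" for x, y in moves) else "P"

for a in range(20):
    for b in range(20):
        if char_single(a) == "P" and char_single(b) == "P":
            assert char_sum(a, b) == "P"   # P + P is always P
        if {char_single(a), char_single(b)} == {"P", "N"}:
            assert char_sum(a, b) == "N"   # P + N is always N

print("sum rules verified on all pairs up to 19")
```

Trying the same check on pairs of N-games shows both outcomes occur, matching the inconclusive case noted above.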
These ideas are incredibly useful in proving equivalences between games. The technique is a combination of our previous demonstrations: inducting on the moves in the game and verifying characteristic properties when adding an arbitrary game. This concept gets extended into what we refer to as the Sprague-Grundy Theorem. We go one step further in the notion of equivalent games by creating a standard by which to compare all games. That is, every finite impartial game has a particular “nimber” value, which denotes its equivalence to a specific heap in the game Nim. One method of evaluating a game is to calculate its nimber value. It can be shown that every P-game has a nimber that associates it with the 0-pile of Nim, whereas N-games have positive nimber values corresponding to a given heap in Nim (Conway). Other methods of analysis, such as a game’s suspense and remoteness functions (which analyze how long a game can be drawn out or how quickly it can be finished), will also come into play in my research as I analyze a variation of the take-away game Euclid.
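Nimbers are computed with the "mex" (minimum excludant) rule: a position's nimber is the smallest non-negative integer not among the nimbers of its options. Here is a short sketch of that computation, applied to the same hypothetical 1-2-3 take-away game used above.

```python
from functools import lru_cache

def mex(values):
    """Smallest non-negative integer not in the given set."""
    g = 0
    while g in values:
        g += 1
    return g

@lru_cache(maxsize=None)
def grundy(n):
    """Nimber of a pile of n in the take-away game (remove 1, 2, or 3)."""
    return mex({grundy(n - k) for k in (1, 2, 3) if k <= n})

print([grundy(n) for n in range(10)])  # [0, 1, 2, 3, 0, 1, 2, 3, 0, 1]
```

Note that the positions with nimber 0 are exactly the P-positions, in line with the claim above that every P-game corresponds to the 0-pile of Nim.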
Monday, September 19, 2011
Combinatorial Games
The focus of my academic research paper is going to be game theory, and in particular the theory of combinatorial games. Game theory is a branch of applied mathematics used extensively in the social sciences, most notably in economics. Formally, “Game theory studies the behavior of rational players in interaction with other rational players. Players act in an environment where other player’s decisions influence their payoffs.” (Eichberger) In this case, “rational” refers to the standard economic assumption that individuals act to maximize their utility, usually in the form of maximizing their payoff. Games are often used to model markets with a few participants, as the decision of one participant often depends on the decisions made by others. In a game, each player has an available set of strategies. In Game Theory for Economists, Eichberger writes, “A strategy is a complete plan for how to play the game. ‘Complete’ means in this context that for any contingency the plan must specify what the player would do.” Thus a strategy specifies a reaction for any possible situation that could arise in a game. Games are solved by finding an equilibrium strategy for each player, that is, a strategy that maximizes the payoff of each player, taking into account the decisions made by others.
It is common to make a distinction between pure and applied mathematics, and the traditional game theory that I have described falls squarely into the applied category. However, the combinatorial games that this paper will focus on have very little in common with traditional economic games. Rather than modeling real-life situations, combinatorial game theory uses mathematics to search for solutions of games in the common sense of the word, such as Chess or Go. Players in combinatorial games are, in a sense, trying to maximize their payoffs. In this case, however, the payoff consists of either winning or losing. Combinatorial games are sequential games with perfect information. This means that players take turns making moves, and at any given time all players are aware of the moves available to the other players. Outcomes depend strictly on the rules of the game and the decisions made by players. There is no randomness or element of chance. The nature of these conditions allows for elegant solutions, often with results of great interest to pure mathematicians. One such result is the construction of the surreal numbers, the largest possible ordered field (Conway), which was motivated by studies of Go.
For the sake of simplicity, it is assumed that games consist of a finite number of turns. Combinatorial games can be further classified into partisan and impartial games. In an impartial game, all players have the same set of moves available to them in a given position; this restriction is lifted for partisan games. (Demaine) Chess, for example, is a partisan game, as each player has his own pieces that he can control. The emphasis of this paper is going to be on impartial games, where the winner is determined by the last player to make a move. In normal play, the last player to move is the winner; if the last player to move loses, it is known as misère play. While games are often played misère, normal play is more commonly used in theory, as it allows for much more conclusive results. One such result is deeply related to the game of strategy known as Nim.
Nim is a simple two-player impartial game. At the beginning of the game, the players are presented with any number of objects. These objects are divided between one or more piles, known as heaps. The players take turns removing objects from the heaps. They may remove any number of their choice, with the requirement that they remove at least one object per turn and that all objects removed come from the same heap. (It is understood that in the case of a single-heap game, the first player may not remove all of the objects in the first turn.) Under normal play, the last player to remove an object wins. This raises the question: does either player have a strategy that will guarantee victory every time? The answer is “yes!”, as was discovered by Charles Bouton back in 1901. In his paper “Nim, a Game with a Complete Mathematical Theory,” Bouton demonstrates that one player has the ability to turn the game completely in his favor, provided he makes the right move each turn. (Bouton) Essential to this solution is the concept of a safe combination. The number of objects in each heap on the board can be represented as a binary number. Bouton defines a sort of addition on these numbers, often referred to as the nimsum. To calculate the nimsum, each bit of one number is added to the corresponding bit of the other number(s) being added, mod 2. For example, if we use “+” to mean the nimsum, 1001 + 1111 = 0110. If we imagine these binary numbers stacked in columns, each column with an even number of ones is assigned the value “0” and each column with an odd number of ones is assigned the value “1.” If the nimsum of all of the heaps is equal to 0, the position is known as a safe combination.
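Bitwise addition mod 2 is exactly the XOR operation, so the nimsum is trivial to compute. A minimal sketch:

```python
# The nimsum described above: bitwise addition mod 2, i.e. XOR.
def nimsum(*heaps):
    total = 0
    for h in heaps:
        total ^= h
    return total

# Bouton's example: 1001 (9) + 1111 (15) = 0110 (6)
print(format(nimsum(0b1001, 0b1111), "04b"))  # 0110

# A position is a "safe combination" exactly when the nimsum is 0:
print(nimsum(1, 2, 3))  # 0, so heaps (1, 2, 3) form a safe combination
print(nimsum(1, 2, 4))  # 7, so heaps (1, 2, 4) are unsafe
```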
Bouton proves that if Player A is able to leave Player B with a safe combination, then Player A will win the game, provided he makes no mistakes. A safe combination can be formed by removing the correct number of objects from an appropriate heap so that the nimsum of all of the heaps becomes 0. In the case of two heaps, this is equivalent to making the heaps equal in size. Bouton shows that any move from a safe combination leaves an unsafe one, while any unsafe combination can be made safe. Thus if Player A is able to establish a safe combination, he will be able to do so again after each of Player B’s turns, until Player A removes the last object and wins. So if the initial setup of the game is not a safe combination, Player A can make it safe on the first move and guarantee himself a victory; if the initial setup is safe, this option is instead reserved for Player B. To visualize this, imagine a game of Nim with two heaps, one containing three objects and the other two. The first player can create a safe combination by removing one object from the heap containing three, leaving two objects in each heap. The second player can then remove either one or both objects from one of the heaps. If both objects are removed, the first player can remove the whole other heap and win. If one is removed, the first player can remove an object from the other heap, creating another safe combination. As two single-object heaps remain, the second player may only remove one of these. It is clear at this point that the first player wins.
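The winning move can be found mechanically. A sketch of the idea (my own formulation of Bouton's strategy): compute the nimsum s of all heaps; if s is nonzero, find any heap h with h XOR s < h and shrink it to h XOR s, which zeroes out the overall nimsum. Note the heap to shrink is not necessarily the largest one.

```python
def winning_move(heaps):
    """Return (heap index, number to remove) that leaves a safe combination,
    or None if the position is already safe (no winning move exists)."""
    s = 0
    for h in heaps:
        s ^= h
    if s == 0:
        return None
    for i, h in enumerate(heaps):
        if h ^ s < h:
            return i, h - (h ^ s)

print(winning_move([3, 2]))   # (0, 1): take 1 from the 3-heap, leaving 2, 2
print(winning_move([2, 2]))   # None: (2, 2) is already a safe combination
```

Running this on the two-heap example from the paragraph above reproduces exactly the play described: take one from the three-heap, then mirror the opponent.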
The importance of Nim comes into play with a result known as the Sprague-Grundy theorem, which states that any impartial game under normal play is equivalent to some single-heap game of Nim. (Demaine) What do I mean by equivalent? For any two games, G and H, we can define their sum, G+H, as a new game where the players may choose to move in either G or H each turn. Two games, G and G′, are then said to be equivalent if and only if for any game H, G+H = G′+H. The particular game of Nim that a given game is equivalent to is known as its nimber. For example, the nimber *7 refers to a game of Nim with a single heap containing 7 objects. An interesting consequence of this theorem is that the concept of safe combinations can be generalized to other impartial games. If the player whose turn it is can leave a safe combination, that player is said to be in a winning position. This is also known as an N-game, with the “N” referring to the next player being in the winning position. (Notice that in this case, “game” refers to the current position rather than the game as a whole. This is fine, as each position within a larger game is equivalent to some smaller game with the second player now going first.) The opposite situation is referred to as a P-game, with the previous player being in the winning position.
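For Nim itself, the theorem takes a particularly clean form: the sum of two heaps *a and *b is equivalent to the single heap *(a XOR b). The sketch below (my own check, not from the sources) verifies this directly by computing the nimber of a two-heap game from first principles and comparing it to the XOR.

```python
from functools import lru_cache

def mex(s):
    """Smallest non-negative integer not in the set s."""
    g = 0
    while g in s:
        g += 1
    return g

@lru_cache(maxsize=None)
def grundy_sum(a, b):
    """Nimber of the sum of Nim heaps of sizes a and b, computed from the
    game tree: a move shrinks one heap to any strictly smaller size."""
    options = {grundy_sum(x, b) for x in range(a)} | \
              {grundy_sum(a, y) for y in range(b)}
    return mex(options)

for a in range(8):
    for b in range(8):
        assert grundy_sum(a, b) == a ^ b  # nimber of *a + *b is *(a XOR b)
print("nimber addition is XOR on all heaps up to 7")
```

This is precisely why Bouton's safe combinations are the nimsum-zero positions: a sum of heaps is a P-game exactly when its overall nimber is 0.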
The Sprague-Grundy theorem allows us to consider any impartial game as equivalent to some variation of Nim. There are several such games that can be considered. One example can be found in the article “Take Away Games” by Allen J. Schwenk, who explores games where the number of objects one is allowed to remove is dependent on the amount removed in the previous turn. Even in this specific subclass of games, there are plenty of variations. Over the coming weeks, I hope to explore as many variations as I possibly can, looking for unanswered questions and possible avenues for original research. I very much look forward to sharing my results as this project progresses.
Sunday, September 18, 2011
Sandpiles!
“If you keep your head in the sand, you don’t know where the kick’s coming from.”
-Herbie Mann, jazz flautist
There’s nothing like a social scientist to take the wind out of your math-researching sails. Yesterday, I was sitting in a coffee shop doing some work when a man sitting nearby took off his headphones to make a comment on my t-shirt—my favorite t-shirt—which proudly displays a colorful fractal image and the words “Fractal Geometry” on its front.
“Is this the kind of thing you’re into?” he asked, pointing at my shirt and then to the few math textbooks I’d been reading. “You do math in your spare time?” Excited by the opportunity to talk math on a weekend, I immediately volunteered that yes, I did do math for fun, and that I was starting to do some research.
“Math research?” he said, looking puzzled, “Can you even do that?”
“Sure. Why not?”
“Oh, I dunno. I do research on the housing market, which is changing all the time. Math just seems pretty absolute.”
“That’s fair; unfortunately, math is taught that way quite a bit, but you can definitely still do research. Mathematicians ask questions and try to answer them—just like any other research. Check this out.” I showed him some of the pretty sandpile pictures I’d been looking at on my laptop.
“Oh cool, what are you studying?” he asked, as I scrolled through a few pages.
“Sandpiles. I’m studying…sandpiles.” He nodded his head and smirked a little before turning back to his coffee.
“Wow. Looks crazy. I’ll let you get back to that.”
As soon as I’d said the word “sandpiles,” I realized how silly it must have sounded. I thought about trying to get the guy’s attention again to further explain myself, but the moment was lost. His headphones were back on, and he was already busy typing on his laptop. If only I had chosen to research something that sounded less childish. Maybe then I’d have had a chance at keeping his attention.
Luckily, within the blogosphere, I have a captive audience with which I can share my sandpiling discoveries. In the remainder of this post, I’ll introduce what I’ve learned thus far about sandpiles in the form of a general overview. At this point, my knowledge is limited to the work of authors who have come before me. My hope is that, in the coming weeks, I’ll have something that is my own.
What is a sandpile?
A sandpile is exactly what it sounds like—it’s a pile of sand. Examined at any given moment, such a pile is fairly uninteresting, consisting of individual grains distributed in smaller piles across a surface; however, observed over a period of time, the behavior of a sandpile closely resembles other dynamical systems that arise within a variety of natural phenomena, including electrical networks, gravitational fields, earthquakes, quasars, and the distribution of matter in the Universe (Dhar; Bak et al.).
What these systems have in common is a tendency toward what is called self-organized criticality, or SOC. The term, coined by Bak et al. in their 1988 article “Self-Organized Criticality,” describes two basic characteristics of the dynamical processes mentioned above. First, criticality refers to the unpredictable evolution of a dynamical system toward a predictable, stable critical state which is invariant under the specifics of that evolution. Second, self-organization describes how, even in the presence of high degrees of spatial and temporal freedom, the critical state can be determined by a small set of parameters.
In the same article, Bak et al. introduced sandpiles to model SOC through a series of numerical simulations, in which avalanches, called “topplings,” were created by randomly adding individual grains of sand to a pile over time. Their sandpile model was basic in its structure and design, and it paved the way for all subsequent sandpile research. Beginning with a finite N x N lattice of points to act as the resting surface of a sandpile, they assigned each column of points in the lattice a critical sand height z_c based on an ideal sandpile slope (so that z_c at one column exceeds z_c at the next).
Following an original distribution of sand on the lattice, grains were randomly added to the pile. Toppling occurred any time a lattice point reached its critical height, at which point two grains of sand would be sent to neighboring lattice points, one to the “right” and one “up,” often resulting in lengthy chain reactions. Simulations would end when piles had reached a critical state such that the height of each lattice point (x, y), denoted z(x, y), was less than its critical height and the introduction of a grain of sand, regardless of its location, would result in the same state. As the authors noted in their paper, critical states were only guaranteed in the presence of a sink, which was identified with all lattice points (N, y) and (x, N). Below, I’ve included an example of a critical sandpile from their original simulation:
As the research of Bak et al. relied heavily upon empirical data collected from simulations, their results were mostly qualitative. This is not to say that their work wasn’t valuable to mathematicians. Most notably, Bak et al. hypothesized that critical states were invariant under the order of avalanches, an idea which would go on to inspire years of mathematical research.
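A rough sketch of this kind of simulation, written as my own paraphrase of the rule described above (not the authors' code): grains land at random sites, a site at or above the critical height z_c topples by sending one grain “right” and one “up,” and the far edges act as the sink.

```python
import random

N, Z_C = 20, 2                      # lattice size and critical height (illustrative choices)
z = [[0] * N for _ in range(N)]     # current sand height at each lattice point

def topple(x, y):
    """Topple site (x, y) until it is below the critical height, cascading
    grains rightward and upward; grains past the edge fall into the sink."""
    while z[x][y] >= Z_C:
        z[x][y] -= 2
        if x + 1 < N:               # the far column is the sink
            z[x + 1][y] += 1
            topple(x + 1, y)
        if y + 1 < N:               # the far row is the sink
            z[x][y + 1] += 1
            topple(x, y + 1)

random.seed(0)
for _ in range(2000):               # drop grains at random sites
    x, y = random.randrange(N), random.randrange(N)
    z[x][y] += 1
    topple(x, y)

assert all(h < Z_C for row in z for h in row)  # every site is below critical
print("pile is stable after 2000 grains")
```

Because toppled grains only move right and up, every avalanche eventually drains into the sink, so the pile always stabilizes; this is the role the sink plays in guaranteeing critical states.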
Following the advent of the Bak et al. sandpile, other mathematicians began constructing their own sandpile models, which soon led sandpile research into familiar mathematical territory. In 1989, Dhar constructed a sandpile model, again using a lattice structure, but with an adjusted toppling condition and a critical height structure that deviated from the gradient structure of Bak et al. Under his toppling condition, toppling lattice points sent grains to neighbors in all four directions, and all four edges of the lattice were identified with a sink. As it turned out, Dhar’s sandpile model was highly algebraic in structure and, using foundations of Markov processes, he was able to divide the set of stable sandpiles into two classes: recurrent and transient. Recurrent sandpiles, as he first described them, were stable piles which would recur under repeated addition of sand at any single location. That is, for any recurrent pile C, there existed an integer $m_i$ such that $a_i^{m_i} C = C$ for all $i$ (where the operation $a_i C$ indicates that a grain is added to pile C at lattice point $i$). Further, he was able to prove that the set of all stable sandpiles, when restricted to recurrent piles, formed an abelian group with nontrivial identity under the operation aC.
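The abelian property at the heart of Dhar's model—that the stable pile does not depend on the order in which unstable sites are toppled—can be checked directly on a small lattice. Below is my own sketch of Dhar's toppling rule (a site with 4 or more grains sends one grain to each of its four neighbors; grains leaving the lattice fall into the sink), stabilizing the same start pile with two different toppling orders.

```python
N = 4  # small illustrative lattice

def stabilize(pile, order):
    """Topple until stable, choosing which unstable site to fire via `order`."""
    pile = [row[:] for row in pile]
    while True:
        unstable = [(x, y) for x in range(N) for y in range(N) if pile[x][y] >= 4]
        if not unstable:
            return pile
        x, y = order(unstable)
        pile[x][y] -= 4
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= x + dx < N and 0 <= y + dy < N:
                pile[x + dx][y + dy] += 1
            # grains sent off the lattice fall into the sink

start = [[5, 0, 3, 4], [0, 6, 1, 0], [2, 0, 7, 1], [4, 1, 0, 5]]
first = stabilize(start, order=lambda s: s[0])    # always fire the first unstable site
last = stabilize(start, order=lambda s: s[-1])    # always fire the last unstable site

assert first == last  # same stable pile regardless of toppling order
print("toppling order does not matter")
```

This order-independence is exactly the commutativity that makes the operators $a_i$ well defined and ultimately gives the recurrent piles their group structure.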
Following Dhar, research on sandpiles has only grown in its sophistication and subtlety. In the following weeks, I hope to fully unpack the work of Dhar and move into more modern sandpile research, which studies piles on a wide variety of connected graphs, and comes under an equally large variety of names (chip-firing, burning, the dollar game, and more).
My goals for future posts:
-Prove that Dhar’s classification of stable piles into recurrent and transient sets is legitimate
-Prove Dhar’s lemma that the operation aC is abelian
-Verify an algorithm for the construction of an identity element I under the operation aC.
-Show that the set of stable sandpiles of a lattice forms a commutative monoid.
-Impose a structure of ideals onto the set of stable sandpiles of a lattice and show that the minimal ideal of that monoid forms a group of recurrent sandpiles.
-Find out what a Tutte polynomial is.
-Find a way to make pretty pictures like this one, the identity element of a recurrent sandpile group of a 523 x 523 lattice, taken from (Levine, Propp).
A Basic Introduction & Research Interests
Leonardo of Pisa was an Italian mathematician of the late twelfth and early thirteenth centuries, known famously as Fibonacci. In 1202, he published the book Liber Abaci, in which he subtly presented the Fibonacci sequence through a problem regarding the reproduction of a pair of rabbits. The problem stated: “A pair of adult rabbits produces a pair of baby rabbits once each month. Each pair of baby rabbits requires one month to grow to be adults and subsequently produces one pair of baby rabbits each month thereafter. Determine the number of pairs of adult and baby rabbits after some number of months. It is also assumed that the rabbits are immortal.” Edouard Lucas brought attention to the sequence again in the nineteenth century and named it the Fibonacci sequence. Lucas studied the sequence extensively and generalized it to the case where $L_1 = 1, L_2 = 3, L_n = L_{n-1} + L_{n-2}$. This sequence is now called the Lucas sequence.
Although the sequence is named after him, Fibonacci was not the first one to have described it. Notions of the sequence appeared in India as early as 200 BC, in particular, with Sanskrit prosody.
By the twentieth century, interest in Fibonacci numbers rose in the mathematics community. The Fibonacci Association was founded in 1963 by mathematician Verner E. Hoggatt; the association began to publish the journal The Fibonacci Quarterly. International conferences to discuss Fibonacci numbers began in the 1980s.
We define the Fibonacci sequence $\{f_n\}$ as follows:
$$f_1 = f_2 = 1$$
$$f_{n+1} = f_n + f_{n-1}$$ where $n = 2, 3, ...$.
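The recurrence can be computed iteratively, keeping only the last two terms (a small sketch of mine, indexing from $f_1 = f_2 = 1$ as defined above):

```python
def fib(n):
    """Return the n-th Fibonacci number, with f(1) = f(2) = 1."""
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

print([fib(n) for n in range(1, 11)])  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```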
Fibonacci numbers have appeared throughout mathematics. They are used in analyzing how long the Euclidean algorithm takes to run. Approximate conversions between miles and kilometers can be made if the number of miles happens to be a Fibonacci number: in that case, its conversion to kilometers is approximately the next Fibonacci number in the sequence. The sequence can also be found by adding the entries along each shallow diagonal of Pascal’s triangle. Variations of the Fibonacci sequence include the Negafibonacci numbers (indexing includes negative integers), the Lucas numbers mentioned earlier, and the Pell numbers ($p_n = 2p_{n-1} + p_{n-2}$).
Apart from their many applications within mathematics, Fibonacci numbers appear throughout nature. If we look at the arrangement of petals in flowers, they come in groups of 3, 5, 8, 13, $\dots$. Remarkably, these are the Fibonacci numbers. Following this pattern allows a flower to optimize space as seeds appear in the flower head’s center. Leaves on the stems of plants are arranged in a similar way.
The sequence is also connected to the golden ratio $\phi = \frac{1+ \sqrt{5}}{2}$. In particular, if we form a sequence using ratios of consecutive terms of the Fibonacci sequence, this sequence $\{\frac{f_{n+1}}{f_n}\}$ approaches the golden ratio as $n$ approaches infinity.
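A quick numerical check (my own sketch) shows how fast the ratios $f_{n+1}/f_n$ converge to $\phi$:

```python
import math

def fib(n):
    """n-th Fibonacci number, with f(1) = f(2) = 1."""
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

phi = (1 + math.sqrt(5)) / 2
for n in (5, 10, 20):
    print(n, fib(n + 1) / fib(n))   # ratios approach phi = 1.6180339887...

assert abs(fib(21) / fib(20) - phi) < 1e-7
```

Already at $n = 20$ the ratio agrees with $\phi$ to about eight decimal places; the error shrinks geometrically.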
----
Right now I'm looking at generalized Fibonacci numbers, which are defined in a couple of the papers in the Dropbox. I would really like to incorporate field/Galois theory into my research while looking at this, so that is what I'm searching for in articles right now. I don't have a specific research question yet.
Saturday, September 17, 2011
Introductions
Corey Levin
ARP introduction on Nim
The mathematical field of game theory studies situations in which success depends on the choices of others (Myers). People (referred to as players) theoretically make rational decisions toward achieving a goal, and if that goal is achieved, we say that the player has won the game. Usually, this accomplishment is attained by applying an optimal strategy, which is the best strategic approach to a given game. The beautiful thing about game theory is that there is deep mathematics in, and a multitude of extensions to, games which are relatively simple to understand. There are various categories within the field of game theory, but I will be focusing primarily on combinatorial games.
Combinatorial game theory typically involves two players, often referred to as Left and Right. These games are characterized by their multiplicity of moves. One aspect of these games that makes them so mathematically interesting is that no randomness or hidden information is permitted. We say that all players know all information in combinatorial gameplay (perfect information). Therefore, determining the best way to play is purely strategic (Demaine). Further assumptions are that the game terminates after a finite number of moves and the winner is unique (there are, of course, some exceptions to these assumptions, but this is the basic setup). In normal play, the winner is the last player to make a legal move. This condition is reversed in misère play, where the winner is the first player who cannot make a legal move. Some games are designed in such a way that it is possible to return to a previous position. We call these games loopy. Finally, we say the game is impartial if the same moves are available to both players from the same positions (that is, gameplay is identical regardless of player perspective) (Demaine). If this is not the case, we define the game as partisan. The most common method for analyzing a combinatorial 2-player game is with a rooted tree. Branches of the nodes correspond to potential moves by either the Left or Right player. Much of the interest in this field has focused on the various ways of manipulating, analyzing, and interpreting these trees for various games (Demaine).
A result that bears closely on my particular topic is the Sprague-Grundy Theorem, which states that every finite impartial game is equivalent to an instance of the game Nim. Beyond that, it has been shown that every impartial, two-player, perfect information game has the same value as a single pile of Nim (Demaine). However, this result does not make much sense without an explanation of the game itself.
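The equivalence to a single Nim pile can be made concrete: the Sprague-Grundy theory assigns every impartial position a "nimber," defined recursively as the mex (minimum excludant) of the nimbers of its options, and the position then behaves like a Nim pile of that size. A small sketch for a simple subtraction game (my own illustration, not from the cited sources):

```python
# Grundy (nimber) values for a subtraction game where a move
# removes 1, 2, or 3 objects from a single pile. A position is
# a loss for the player to move exactly when its nimber is 0.
from functools import lru_cache

def mex(s):
    # Smallest non-negative integer not in the set s.
    n = 0
    while n in s:
        n += 1
    return n

@lru_cache(maxsize=None)
def grundy(pile):
    return mex({grundy(pile - t) for t in (1, 2, 3) if t <= pile})

print([grundy(n) for n in range(8)])  # [0, 1, 2, 3, 0, 1, 2, 3]
```

The repeating pattern $0,1,2,3$ shows the losing positions are the multiples of 4, matching the modular-arithmetic strategy for take-away games.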
Nim is classified as a “take-away” game, defined as a 2-player game where each player alternately diminishes a stock of markers until the last marker is removed (in normal play this denotes the winner) (Schwenk). These games are arranged so that immediate victory is impossible (it would be uninteresting if a player could simply take all of the markers instantly) and so that play always terminates. The optimal strategy generally is to reduce the pile to a certain “safe combination,” leaving the opponent only options that make the pile unsafe. This process iterates until the last marker is drawn after an unsafe combination is left. This can best be thought of with modular arithmetic (with a turn allowing one to remove up to $m$ markers and optimally seeking to reduce the pile to $0 \pmod{m+1}$). This will be defined more rigidly and illustrated later in the paper. It should be evident, however, that once an equivalence of $0 \pmod{m+1}$ is achieved, the opponent cannot reach that same equivalence, and therefore must leave the pile(s) unsafe (Schwenk). There are certainly other ways to conceive of an optimal strategy, and particular methods are best determined with respect to specific games.
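The modular-arithmetic strategy above can be sketched in a few lines (a minimal illustration for a single pile; `winning_move` is my own name):

```python
# With up to m markers removable per turn, the safe positions
# are the piles congruent to 0 (mod m+1). From any unsafe pile
# the player to move can restore safety in one move.
def winning_move(pile, m):
    """Return how many markers to take to leave 0 (mod m+1),
    or None if the pile is already safe (a loss for the player
    to move against perfect play)."""
    take = pile % (m + 1)
    return take if 1 <= take <= m else None

print(winning_move(10, 3))  # take 2, leaving 8 ≡ 0 (mod 4)
print(winning_move(8, 3))   # None: 8 is already safe
```

This makes the claim in the text visible: after reaching $0 \pmod{m+1}$, any legal reply (removing 1 to $m$ markers) necessarily leaves a nonzero residue.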
Nim is a 2-player game with objects placed in three piles. A player takes at least one object from a single pile (and at most the entire pile). The last player to take an object wins the game. Charles L. Bouton proved that if Player A leaves a safe combination on the table, Player B cannot. This can be repeated until A wins. Safe combinations here refer to a particular type of binary addition, which we refer to as taking the Nim-Sum. Column addition is performed, and when every column sums to $0 \pmod 2$, we have a safe combination (or we say that the Nim-Sum $= 0$). Bouton proved that a player can derive a safe combination from any unsafe combination (and conversely, a player cannot move to a safe combination from one that is already safe). Thus, if Player 1 approaches a game that begins in an unsafe combination (which Bouton also proved is most probable), he can ensure victory.
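The binary column addition described above is exactly the bitwise XOR of the pile sizes, and Bouton's winning move can be computed directly. A short sketch (function names are my own):

```python
# Bouton's safe combinations are the positions with Nim-sum 0,
# where the Nim-sum is the bitwise XOR of the pile sizes
# (column addition in binary, each column taken mod 2).
from functools import reduce

def nim_sum(piles):
    return reduce(lambda x, y: x ^ y, piles, 0)

def bouton_move(piles):
    """Return (pile index, amount to take) reaching a safe
    combination, or None if the position is already safe."""
    s = nim_sum(piles)
    if s == 0:
        return None
    for i, pile in enumerate(piles):
        target = pile ^ s
        if target < pile:          # this pile can be reduced to target
            return (i, pile - target)

print(nim_sum([3, 4, 5]))     # 2: unsafe
print(bouton_move([3, 4, 5])) # take 2 from the first pile -> [1, 4, 5]
print(nim_sum([1, 4, 5]))     # 0: safe
```

Note that whenever the Nim-sum is nonzero, at least one pile satisfies `pile ^ s < pile` (the pile with a 1 in the Nim-sum's highest bit), so the loop always finds a move.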
A plethora of variations and extensions have followed since Bouton’s original proof in 1902. E. H. Moore presented a generalization of the game in 1910. He refers to this as Nim$_k$, where the original game is Nim$_1$. Here, a player may choose up to $k$ piles and remove as many counters as he chooses from each (for obvious reasons, the game is always played with more piles than $k$). John C. Holladay extended upon this with his variation of the game in 1958. He calls it Matrix Nim, and it involves arranging the piles into finite rows and columns. A player has the option of reducing the size of any number of piles in a given row or column (provided one column is left untouched). For both of these variations, the authors successfully established the same structure of optimal strategy as in Nim: from a safe position, any move leads to an unsafe position, and from an unsafe position, some move leads to a safe position.
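Moore's safe combinations generalize Bouton's criterion: a Nim$_k$ position is safe exactly when, for every binary digit, the digits of the pile sizes sum to $0 \pmod{k+1}$; with $k = 1$ this reduces to the ordinary Nim-sum condition. A sketch of the check (my own illustration of Moore's criterion):

```python
# Moore's criterion for Nim_k (remove counters from up to k
# piles per turn): a position is safe iff, in every binary
# column, the pile digits sum to 0 mod (k+1).
def moore_safe(piles, k):
    bit = 0
    while any(p >> bit for p in piles):
        if sum((p >> bit) & 1 for p in piles) % (k + 1) != 0:
            return False
        bit += 1
    return True

print(moore_safe([1, 2, 3], 1))  # True: ordinary Nim, Nim-sum 0
print(moore_safe([3, 4, 5], 1))  # False: Nim-sum 2
print(moore_safe([1, 1, 1], 2))  # True: ones column sums to 3 ≡ 0 (mod 3)
```

This will be developed rigorously later; the point here is just that the "safe combination" idea carries over with mod 2 replaced by mod $k+1$.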
These authors have built upon a simple but mathematically rich game and extended it to prove other conditions that might be valuable within the field. There are a variety of interesting ways to analyze a game, but game theorists rarely limit themselves to the exploration of one game. Sometimes it is of more interest to analyze many games and try to extract some common principle. That is, with respect to games G and H, we might choose to look at the compound G+H. In this respect we have to see how to win G, H, and both of them at once. G+H can refer to the disjunctive sum, where each player must play a legal move in either G or H. It can also refer to the conjunctive compound, where a legal move must be played in both G and H (Conway).
Similarly, we might try to break up a game into various sub-games to get a better analysis. Games on a checkerboard might be broken up into smaller 2x2 versions, and when principles are understood in these smaller games, they can be combined and extended to the larger versions. Another thing game theorists may look into is the remoteness of pieces. What this means is that a piece that might be pivotal for a winning move is much less remote than one that is almost unnecessary. Often, one can work backwards from this to derive an optimal strategy. Another notion is that of a suspense function. This is a way to determine how long a win can be drawn out, or on the contrary, how quickly a loss can be succumbed to (this is based on certain assumptions about the players) (Conway). These aspects, and others I will learn along the way, are some of the qualities I hope to pay particular attention to as I begin my analysis of Nim and its variations.