
[Computing Space VIII] Games Cartographers Play: AlphaGo, Neural Networks and Tobler’s First Law

Today’s post is the eighth in a year-long series called “Computing Space,” which highlights new mapping technologies and new areas for cartographic innovation, along with stories of the lives and work of many of the mostly unknown cartographers, geographers, mathematicians, computer scientists, designers and architects who, both now and in the past, have had a hand in the development of computer cartography and its applications.


It’s perfectly possible that we are the production of some very powerful complicated programs running on some big computer somewhere else. And there’s really no way to distinguish that from what we call reality.

                                               –Marvin Minsky (1927-2016), Founder, Artificial Intelligence Laboratory, MIT

Lee picked up two black stones and lightly tapped them against the bowl. The sound echoed crisply through the room, and it felt as if time had frozen. I knew Lee was about to resign. Afterwards, many people said that I must have been happy in that moment. Now the world would know AlphaGo’s true strength, and I would not be the only one it had defeated. The truth is, in the moment when he tapped those two stones, my mind was blank. Although I had known this day would come sooner or later, it had always felt so remote before, suddenly, it arrived. I felt as if I had been cast into outer space, adrift on waves of nothingness.

       –Fan Hui 2P, Gu Li 9P and Zhou Ruiyang 9P


In 1979, the mathematical cartographer Waldo Tobler published an article called “Cellular Geography,” which was destined to become one of the seminal theoretical articles in the history of geography’s analytic revolution. The article, which first introduced the notion of cellular automata into the geographical sciences, appeared in the unlikely-titled collection Philosophy in Geography, in a series called the Theory and Decision Library, edited by Gale and Olsson. In recent years, cellular automata have become extremely important in geographic simulation because they are discrete, abstract computational systems whose behavior is capable of generating models of emergent and non-deterministic phenomena. From these simple beginnings, Tobler dreamed of a day when we could model geographic space, and map our activity in it, in a way that might lead to better urban planning, better resource allocation, lower environmental impact, and hence to a higher quality of life for all of us.

Tobler’s article begins with a short scene from the film Moby Dick:

Captain Ahab, in the film version of Moby Dick, searches for the white whale with the aid of a geographical map on which are noted sighting-frequencies within 5° cells bounded by lines of latitude and longitude. The written version of the story, dating from circa 1830, does not contain this scene, but the technique of recording geographical data in this fashion is increasingly popular today. One of the motivations for the use of such partitionings is their ‘objectivity’. It is also asserted that there are advantages for analysis purposes over the irregular spatial polygons defined by political jurisdictions. There is no doubt that there are notational simplifications; one can index a cell of an array in the same fashion as in matrix algebra. Thus the cell in the ith row and jth column becomes the cell i,j. Geographical data which pertain to that cell can be referred to by subscripts, as g(ij) for example. If one lets G represent an N by M array of such cells then this can be considered isomorphic with a portion of the surface of the earth (if one deletes the poles and makes a convention about the edges). But one can also apply matrix algebra to this array and can obtain geographically interesting results.

Although we get little in the way of literary allusion or any sense of the cinematography, in his paper Tobler sets up an interesting methodology that employs matrix notation and linear algebra to define several different models of geographic change, formalized by dividing the space to be analyzed into cells.
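Tobler’s cell notation is easy to make concrete. The sketch below (in Python, with invented values; Tobler’s paper of course predates the language) stores geographic data in an N × M array of cells g(i,j) and applies a simple neighborhood average, with modular indexing standing in for his “convention about the edges”:

```python
# An N x M array G whose entry G[i][j] holds the geographic value for the
# cell in row i, column j. Wrap-around indexing makes the grid behave like
# a closed surface. The data values and the averaging operation are
# invented for this illustration.

N, M = 4, 5
G = [[(i * M + j) % 7 for j in range(M)] for i in range(N)]  # arbitrary data

def neighborhood_mean(g, i, j):
    """Average of cell (i, j) and its four edge-sharing neighbors,
    with wrap-around at the edges."""
    n, m = len(g), len(g[0])
    cells = [g[i][j],
             g[(i - 1) % n][j], g[(i + 1) % n][j],
             g[i][(j - 1) % m], g[i][(j + 1) % m]]
    return sum(cells) / len(cells)

smoothed = [[neighborhood_mean(G, i, j) for j in range(M)] for i in range(N)]
print(smoothed[0][0])  # → 2.2
```

Operations like this averaging step are exactly the “geographically interesting results” of applying matrix algebra to the cell array that Tobler alludes to.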

Schematic Diagram of Tobler’s Five Models taken from Cellular Geography, 1979

In considering cartographic and geographic space, Tobler tried to define how it is causally connected, seeking to understand geographic processes in time. In other words, he devised a simple computer model describing how events and variables that exist in one grid cell affect the cells neighboring it. To this end he considered five models of change:

1. Independent models, in which land use depends on random variables and events unrelated to neighboring cells

2. Functionally dependent models, in which land use depends on the previous land use at that location

3. Historical models, in which land use depends on several previous land uses at that location

4. Multivariate models, in which land use depends on several other variables at that location

5. Geographical models, in which land use at grid location i depends on the land use at neighboring locations.

The most important of these models for Tobler, and for later geographic analysis and cartography, is of course the fifth, which is an instantiation of what has become known as Tobler’s First Law of Geography. The Law, stated simply by Tobler in 1970 in a paper that attempted to simulate urban growth in Detroit, says:

I invoke the first law of geography: everything is related to everything else, but near things are more related than distant things.
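The Law can be given a toy numerical form. The sketch below is not Tobler’s own model but the familiar inverse-distance-weighting idea, with made-up observation points: an estimate at any location is a weighted average of observations in which near things count for more than distant things.

```python
import math

# Inverse-distance weighting: each observation's influence decays with
# distance from the point being estimated. The sample points, values and
# power parameter are invented for illustration.

observations = [((0.0, 0.0), 10.0), ((1.0, 0.0), 20.0), ((5.0, 5.0), 90.0)]

def idw_estimate(x, y, obs, power=2.0):
    num = den = 0.0
    for (ox, oy), value in obs:
        d = math.hypot(x - ox, y - oy)
        if d == 0:
            return value            # exactly at an observation point
        w = 1.0 / d ** power        # near things weigh more than distant ones
        num += w * value
        den += w
    return num / den

# Halfway between the two near points, the distant high value barely matters.
print(round(idw_estimate(0.5, 0.0, observations), 2))  # → 15.21
```

The estimate sits almost exactly between the two nearby values (10 and 20); the faraway 90 contributes almost nothing, which is the First Law in miniature.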

In invoking this kind of causal law Tobler was influenced by lines of thought outside of geography, related not only to the computational capabilities of the time but also to a game called the Game of Life, invented by the mathematician John Conway and popularized by Martin Gardner in an article published in Scientific American in 1970.

Simple output from John Conway’s cellular automaton, the Game of Life.

The Game of Life has striking similarities to Tobler’s models in that it divides the world into cells and has particular rules for how the cells change over time. These changes depend only on what is happening in neighboring cells at the present time, as in any Markov-type process. The game can be defined formally by four simple axioms:

  • The world of the game is made of cells: geometric grid elements, objects in n-dimensional space that manifest some adjacency.
  • Each cell can take only one state value at a given time from a set of possible values.
  • The state of any cell depends on the states of the other cells in the neighborhood of that cell.
  • A set of transition rules drives the changes of state in each cell as a function of what is happening in the cell’s neighborhood.
A three-dimensional cellular automaton. Image courtesy of Harry Fu, San Jose State University.

From these simple axioms a huge range of complex behavior can be modeled, including the chaotic dynamical systems that make up most real-world phenomena. One of the most striking features of cellular automata is that their behavior cannot in general be predicted; the computer simulation actually has to be run in order to determine the spatial patterns that emerge from these simple rules. (The Game of Life is still an active area of research today.)
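That necessity of actually running the simulation is easy to demonstrate. The following minimal sketch implements Conway’s standard rules (a live cell survives with two or three live neighbors; a dead cell is born with exactly three) on a small wrap-around grid; the grid size and starting pattern are chosen only for illustration.

```python
# One generation of Conway's Game of Life on a wrap-around grid.
# `live` is the set of (x, y) coordinates of live cells.

def step(live, width, height):
    """Advance one generation under the standard B3/S23 rules."""
    counts = {}
    for (x, y) in live:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx or dy:
                    key = ((x + dx) % width, (y + dy) % height)
                    counts[key] = counts.get(key, 0) + 1
    # Born with exactly 3 neighbors, or survive with 2 while already alive.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

# A "blinker": three cells in a row oscillate with period 2.
blinker = {(1, 0), (1, 1), (1, 2)}
gen1 = step(blinker, 5, 5)
gen2 = step(gen1, 5, 5)
print(sorted(gen1))        # → [(0, 1), (1, 1), (2, 1)]
print(gen2 == blinker)     # → True
```

Even a pattern this small has to be stepped forward to be known; for larger patterns the long-run behavior is, in general, not predictable at all.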

Tobler’s First Law has had a long life in geographical analysis and was originally stated in 1970 as a justification for the simplicity of the models that Tobler was proposing for urban growth patterns.

Urban Growth Patterns from Tobler’s Simulation of Detroit, 1970.

Used as a heuristic in his simple simulations, the First Law seemed at the time a reasonable assumption, though it is, perhaps, losing much of its applicability in today’s networked and global world, where events on the other side of the planet have immediate implications in distant locations. For geographers, the once-stable concept of spatial distance (how near and far relate to one another) has become an interesting computational and modeling question, and an area with new conceptual and mathematical possibilities.

The problem of simulating geographic events and spaces that are distant, and whose complex causal effects operate from afar, is a new kind of concern for modern geographers and cartographers, very different from the events and spatio-temporal phenomena that could be brought under Tobler’s Law. The increasingly complex non-spatial networks of interaction that now span the globe are difficult to grasp with typical deterministic mathematical models. For geographers, some of these difficulties might once again be overcome by looking to another simple game, one that generates complexity beyond human imagination and that has recently been broached with neural networks and reinforcement learning algorithms.

Two weeks ago a milestone in artificial intelligence (AI) research was passed when a computer program known as AlphaGo, developed by Google’s DeepMind, defeated a high-ranking professional player in the ancient game of Go.

Aja Huang making the first move for AlphaGo against Lee Sedol, one of the greatest players in the 3,000-year history of the game. Image courtesy of Google’s DeepMind.

Unlike chess, which can be programmed and solved using brute-force search algorithms and a few accompanying heuristic rules that allow a computer to assign a particular value, or score, to each position on the board, the game of Go has a complexity far beyond any such search or heuristic framework.

The number of legal Go positions was only calculated accurately earlier this year, and it is a mind-stretching 171-digit number, approximately 2.08 × 10^170, vastly more than the number of atoms in the observable universe.

Like the Game of Life, the game of Go has amazingly simple rules. The two players, seated at a board ruled into a 19×19 grid, alternately place black and white stones on the intersections in an attempt to control territory and to surround and capture their opponent’s stones. This simple set of rules produces unimaginable spatial complexity.

Typical end-of-game Go board position.

Unlike the Game of Life, the game of Go can experience long-range effects: a move in a distant part of the board may affect the outcome of the game as the number of stones on the board grows. Because of these long-range effects and its extreme spatial complexity, most AI researchers thought that a computer victory over a human professional was at least a decade away.

According to neuroscientist and DeepMind founder Demis Hassabis, “games are the perfect platform for the testing of AI algorithms,” as they have very definite outcomes and, because you can be better or worse at them, they contain very good metrics that allow you to understand your progress. Games are ideal places to test new modeling theories and computational strategies, especially in situations where the dynamics of a system cannot be modeled deterministically.

The spatial dynamics of the game of Go run both parallel and counter in many ways to Tobler’s First Law, and the spatial geometries that develop in the game remind one of some of the more complex problems facing spatial analysis and urban planning today.

Small section of the combinatorial game tree for the game of Go. The branching factor, or the number of possible moves in any position, is about 200 for Go, an order of magnitude greater than the roughly 20 found in the game of chess. Courtesy Google’s DeepMind.
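That order-of-magnitude gap in branching factor compounds quickly with depth. Taking the rough averages quoted above (about 200 moves per position for Go, about 20 for chess) and an illustrative game length of 80 moves, one can count the digits in the resulting numbers of move sequences:

```python
# Rough search-space comparison: branching factor b at depth d gives about
# b**d move sequences. The figures 20 and 200 are the approximate averages
# quoted in the caption, and the depth of 80 is illustrative, not exact.

def digits(n):
    return len(str(n))

depth = 80
chess_sequences = 20 ** depth
go_sequences = 200 ** depth
print(digits(chess_sequences))  # → 105 (20**80 has 105 digits)
print(digits(go_sequences))     # → 185 (200**80 has 185 digits)
```

A factor of ten per move becomes eighty extra orders of magnitude over a game, which is why no brute-force search can hope to exhaust the tree.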

The programming of AlphaGo did not rely on the type of brute-force calculation that has brought computers expertise in so many other games, chess for example, but rather on what are called deep convolutional artificial neural networks. This kind of programming is better suited to the intuitive nature of Go, and to human learning and spatial cognition in general. The exact details of the programming are of course held closely by the DeepMind researchers, but in January they published an article in the journal Nature that outlines their techniques.

The programmers used two neural networks, one called a policy net, the other a value net, to reduce the number of possibilities from the huge combinatorial forest mentioned above, thereby making the search space tractable, combined with Monte Carlo Tree Search to look deeply for the most probable continuation of moves for both computer and opponent. The names “policy” and “value” come from the language of reinforcement learning, where a policy function models the behavior of an agent or actor and a value function determines the score, or reward, of each state or action. The combination of the two networks, which employed these kinds of reinforcement learning algorithms, reduced both the breadth and the depth of the search space.
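DeepMind’s actual system is far more elaborate, but the division of labor can be caricatured in a few lines. In the toy sketch below, `policy()` and `value()` return made-up numbers where AlphaGo would query trained networks, and a fixed-depth negamax lookahead stands in for Monte Carlo Tree Search: the policy prior cuts the breadth (only the top three of eight moves are expanded) and the value estimate cuts the depth (positions at the horizon are evaluated rather than played out).

```python
import random

# Toy sketch only: an abstract "game" whose states are tuples of moves.
# policy() and value() are stand-ins returning made-up numbers where
# AlphaGo would query its trained networks.

random.seed(0)

def legal_moves(state):
    return list(range(8))              # pretend every position has 8 moves

def policy(state):
    """Stand-in policy net: a prior probability for each legal move."""
    scores = [random.random() for _ in legal_moves(state)]
    total = sum(scores)
    return [s / total for s in scores]

def value(state):
    """Stand-in value net: estimated winning chance for the side to move."""
    return random.random()

def search(state, depth, top_k=3):
    """Fixed-depth negamax: the policy prior cuts the breadth (top_k of 8
    moves expanded) and the value estimate cuts the depth (no rollout)."""
    if depth == 0:
        return value(state)            # value net truncates the search depth
    priors = policy(state)
    best = sorted(range(len(priors)), key=lambda m: -priors[m])[:top_k]
    # Opponent's value is 1 minus ours, hence the negamax flip.
    return max(1.0 - search(state + (m,), depth - 1, top_k) for m in best)

print(round(search((), depth=3), 3))   # a winning-chance estimate in [0, 1]
```

The real system replaces the fixed-depth sweep with Monte Carlo Tree Search, which allocates effort adaptively, but the breadth/depth reduction shown here is the same principle the Nature paper describes.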

Deep neural networks used by AlphaGo. Courtesy Google’s DeepMind.

Neural networks are groups of artificial “neurons” capable of learning complex activities. The DeepMind programmers trained their networks on 30 million Go positions from a database and also played these networks against themselves in millions of games before playing top rated human players.

In some sense neural networks are just very complex mathematical models with thousands or even millions of tuning parameters that can be adjusted to change the computer’s behavior. In invoking the word “learn,” what I really mean is that the computer kept making adjustments to these parameters that also made improvements in how it played the game. It is important to stress that these networks do not need heuristics or rules that define how to play the game of Go well; rather, they “learned” how to play well by actually playing millions of games and by understanding that the goal of the game was, in the end, to win.
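What such parameter adjustment means can be shown with a deliberately trivial example (a single invented parameter and score function, nothing like the real networks): the program nudges a weight in whatever direction improves its score, which is, at massive scale, all that training amounts to.

```python
# "Learning" as parameter adjustment in miniature. The score function and
# its peak at w = 2.0 are invented for this illustration.

def score(w):
    """Stand-in for 'how well the model plays': peaks at w = 2.0."""
    return -(w - 2.0) ** 2

w, step_size = 0.0, 0.1
for _ in range(200):
    gradient = -2.0 * (w - 2.0)     # derivative of score with respect to w
    w += step_size * gradient       # nudge the parameter to raise the score
print(round(w, 3))                  # → 2.0, the best-scoring setting
```

AlphaGo’s training does this same thing simultaneously across millions of parameters, with the score signal coming from games won and lost rather than a formula.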

The deep learning networks used by AlphaGo had seen more Go games and positions than any player could have seen or played in several lifetimes. The first victories for the program came in a match against the European champion Fan Hui, who was beaten in all five games he played against the program.

Game positions for the five games in which AlphaGo defeated Fan Hui. Image courtesy Google’s DeepMind.

Once confident that it could beat lower-level professional players, the AlphaGo team turned their attention to one of the best players in the world, Lee Sedol (DeepMind’s YouTube videos of the match include full game commentary). Lee is a legendary professional Go player from South Korea who holds the rank of 9-dan, the best possible ranking. Two weeks ago AlphaGo beat him four out of five games in a match watched by millions of people around the world. The only game Lee won was game four, played on March 13th, in which he played a stone at move 78 that has been called “the hand of God” for its brilliance. The Go website Go Game Guru called the game “a masterpiece…that will almost certainly become a famous game in the history of Go.”

Screenshot of the live game feed from the AlphaGo/Lee Sedol match. Courtesy Google’s DeepMind.

AlphaGo is important not only as a milestone in AI, but also because of its generality. It is not simply a specialized game-playing program; it was able to reduce an extremely complex spatial problem to a solvable computer model without specialized heuristics. Geographers and computer cartographers should take notice because of its ability to recognize particular kinds of spatial patterns based on reinforcement learning, and because of its ability to plan over long distances and times.

For a Go player like myself, this was a strange moment in the history of the game, one that saw the end of human domination in yet another realm that many thought would remain, at least for a little while longer, beyond computational tractability. But it also proved something more revolutionary: that long-range planning, spatial intuition and non-local effects can succumb to particular kinds of deep machine learning algorithms. That is an important counterexample to the need for spatial analysis heuristics and assumptions like Tobler’s Law, which lie at the roots of so many geographical models today.

AlphaGo’s method of learning is beginning to make its way into the same kinds of problems that fascinated Tobler almost fifty years ago. Researchers have used neural networks to model the complicated dynamics and the multiple relationships between human decision-making behaviors and land use. These models are able to forecast, based on real human behavior and the physical nature of the topography, how land might be developed in the future, by learning how we have done it, both successfully and miserably, in the past. They are being used to model the long-range effects of population on transportation networks, learning from thousands and thousands of examples the intricacies of how we move and the habits we develop as we traverse the world. Our real behavior is the data that trains these networks, supplying millions of experiences that could not possibly be comprehended by many humans in many lifetimes, and, like the 30 million Go positions from which AlphaGo learned to play the best move, it yields a probability that we will act in one way and not another in a given real-world situation.

Tobler’s dream indeed.