In EVOLUTION IN THE COMPUTER AGE - Proceedings of the Center for the Study of Evolution and the Origin of Life, edited by David B. and Gary B. Fogel. Jones and Bartlett Publishers, Sudbury, Massachusetts (2002).
Computer Models of Cultural Evolution
There is a new epistemology, a new way of investigating social science and culture. It is founded on philosophical theories of knowledge that embrace computation, culture and evolution. Computer models of cultural evolution are no longer out of reach. The price of a new computer has dropped; the power has increased; the difficulty of writing software has eased. A new world of understanding waits for those who try to translate theory, represented in written texts, to ideas, expressed in working code as computer applications running on desktop machines. The process of creating simulations forces clarity on the objects of attention and processes of interaction. Once constructed, a cultural model written in programming language can be turned on and its entailments explored. Vague ideas previously confined to natural language may be objectified, vetted and put through rigorous test runs. As a more sophisticated analog to thought-experiments, narratives of different “what if” scenarios may be played out, examining a full range of given input variables and the output behaviors that result. From these WOULD-BE WORLDS (Casti) the investigator can delineate an entire envelope of possibilities. Computer models are objective in the sense that they are accessible, open to public scrutiny and amendment. In a deterministic world with physical constraints on predictability, they are insightful in the general sense of setting limits to what can and cannot be.
Within anthropology, we have a small and dedicated group of people on one side of the digital divide creating computer simulations, models that embody new ways of thinking and knowing about culture. On the other side we have those who study us modelers, but from the pre-computational perspective of turn-of-the-century ethnography. They see our practices and compare them to pre-industrial religious beliefs. Folks in the middle conflate these two approaches and dismiss the struggle with amusement, choosing not to engage the deep and fundamental issues widening the gap between the two. Each group stands on an opposing bank, buttressing a lengthy span across a philosophical partition separating profoundly incompatible epistemologies. The situation is not unique to anthropology; the same stances are taken, to varying degrees, in the other social sciences as well. What is at stake is nothing less than the need to re-evaluate what it means to do social research in the sciences and the humanities. The argument comes down, quite literally, to re-examining the meaning-generating process of “re-“ when we ask the question, “How should we re-cognize and re-present cultural knowledge?” The emphasis is on the “re” of how we should recognize and represent our cultural experience. The struggle is by no means new, but it has been continually redefined for us since the 1950s, with the discovery of computation and the advent of digital computers. This chapter is intended to engage the reader with some new ideas as we begin to rethink and reinvent the social sciences.
Let’s look at where we stand with respect to the other sciences. If rising trends in new ideas are seen as harbingers of progress, then the non-social sciences have succeeded admirably in applying chaos and complexity theories and computer simulation in their work. Leslie Henrickson (page 2) charts the occurrence of keywords related to those new sciences of complexity in 5000 journals over the last 30 years. The climbing curve represents the “hard” sciences, with articles on non-linear science applications (Sci-NL) tracking closely those on computer simulation science (Sci-CS). For the social sciences (SSci) the picture is not so bright. We have barely moved ahead. Why have we lagged behind? Perhaps we’re not used to thinking computationally? Perhaps the price of entry and learning curves have been too high? Perhaps culture is so much more complex than the phenomena in the other sciences that our use of their techniques should not serve us as a model? I will argue that the main reason for our lag is the first, and that we can correct it with a greater understanding of the benefits of computer modeling.
The computer is not as much an invention as it is a discovery. Computers neither began, nor will they end, with the technological medium of electronic circuitry etched onto silicon. It is important to realize that what constitutes a “computer” changes through the years, coevolving with the cultural and technological practices of the period. Significantly, it also means that there is continuity in the concept of computation that transcends the cultural milieu and the technological medium in which those computations take place. The computer, as a box of circuit boards and chips, may be more apparent than the concepts underlying its creation, but it is the processes that we have discovered and captured in its operation that are, in the long run, more important than physical form.
1950 is a convenient date to fix the growing excitement and dis-ease about computers. As automation was encroaching on our physical environments, so too were computers encroaching on our mental lives. TIME’s cover story on January 23 features a painting by Boris Artzybasheff, well known for his Giger-like man-machine cyborgian creations. In this instance his painting is of Harvard’s then-new Mark III, built for the Navy’s Bureau of Ordnance. The cover caption reads, “Can man create a superman?” The article, which follows in the science section, is headed by the title “The Thinking Machine.” After recounting the imposing size and speed of these devices, the writer was unable to resist the quip that the Mark III “at work… roars louder than an admiral” (55). The article quickly launches into the philosophical implications of “cyber shock,” citing TIME’s review of Norbert Wiener’s CYBERNETICS one year earlier. Wiener is foregrounded pointing out the similarities between our natural brains and these artificial ones, explaining that the creative potential of our inventions is handicapped simply by the fact that these devices have no sensors (eyes and ears) or effectors (arms and legs). Why shouldn’t they, he asks, as they usher in the second industrial revolution?
Howard Aiken is cited, predicting that these mechanisms “should be able to forecast economic rain and sunshine” (59), but expressing doubts about their potential for creativity, claiming that no machine will ever likely have imagination; they “are mere tools that do only what they are told.” (56) Warren McCulloch is rallied to the side of the “machine-radicals” with the retort that the human “brain is actually a computer, and very like computers built by men.” He explains that the brain is built from “living electrical relays, comparable to the relays and vacuum tubes in the machines.” It “is a lacelike network of relays and conductors.” (56) Claude Shannon, speaking of chess-playing machines, “thinks that one could play well enough to beat all except the greatest chess masters.” (59)
The editors of TIME were clearly into provocation. The fusion of men and machines, so aptly visualized by their cover and tag lines, is repeated: “Computermen point out that the human brains and machines speak basically the same language.” (59) But they caution, “Some philosophical worriers suggest that the computers, growing superhumanly intelligent in more & more ways, will develop wills, desires and unpleasant foibles of their own, as did the famous robots in Capek’s R.U.R. (Rossum’s Universal Robots).” (59-60) These philosophical questions are revived today in the fields of artificial life, artificial societies, artificial culture, and evolutionary computation.
The machines of the time were largely dedicated to military work. When any time became available to outsiders, it was priced at $300 per hour (1950 dollars - roughly $2000 per hour adjusted to the consumer price index). It is hard to imagine that the authors had in mind the proliferation of computing power today when they wrote, “Computing machines are very expensive at present; Mark III cost $500,000. But they are becoming simpler, as well as more intelligent, and their cost can be cut enormously by commercial production methods. It is almost certain that they will come into wide use eventually.” (59) It is precisely this process of commodification that has given us the opportunities for simulation and modeling that we have today.
Current microprocessor technology began with plans to build a business calculator. Busicom, a Japanese firm, approached Intel for the design. Intel suggested integrating most of the processing on a single chip and set out to build a product that would outperform the IBM 1620, a machine that would have filled a living room. The chip, designed by Federico Faggin in 1971, was named the 4004 and is now a collector’s item. The chip and calculator were a technical success, but the calculator business was a failure. Intel repurchased the rights to its own invention and turned the disaster into a product line that catapulted the company to the position of most successful manufacturer of microprocessors. Integrated circuit technology had taken off years earlier. Microprocessors now began to ride a curve of exponential growth. In 1965, Gordon Moore had predicted that the number of transistors printable on an integrated circuit would double every 18 months. It has done just that through to the present. (Intel 4004)
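The compounding in Moore’s prediction is easy to make concrete. A minimal sketch, assuming only the commonly cited figure of roughly 2,300 transistors on the 4004; the outputs are what the 18-month doubling rule projects, not counts of actual chips:

```python
# Projection implied by the 18-month doubling rule described above.
# The 4004's ~2,300 transistors is the only datum assumed; the rest
# is plain compound doubling, not a record of real devices.
START_YEAR, START_COUNT = 1971, 2300
MONTHS_PER_DOUBLING = 18

def projected_transistors(year):
    # Number of whole doubling periods elapsed since the 4004.
    doublings = (year - START_YEAR) * 12 // MONTHS_PER_DOUBLING
    return START_COUNT * 2 ** doublings

for year in (1971, 1980, 1990, 2001):
    print(year, projected_transistors(year))
```

Twenty doublings in thirty years multiplies the starting count by about a million, which is the force of the curve the text describes.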
Where this curve will take us in the future is uncertain, but clearly it will move beyond the desktop computer and graphical user interfaces (GUIs). Our desktop computers are severely challenged in their abilities to develop lifelike human properties and we have only ourselves to blame. What knowledge do they have of the world when their sensors are limited to keys to press and balls to roll? What influence do they have on it when they can only excite phosphors on a screen, impress dots of ink on paper and shake a speaker cone? Their operating systems keep them in an idle state waiting for our tactile stimulation. We have them perform tasks that are mind numbing, repetitious. They do not learn, they do not evolve, because we have prevented it. They have no self-awareness because we gave them none. But the situation is changing. Desktop computers can be readily connected to the analog outside world with sensors. DNA computing is already a reality. Research is being done on the prospects of quantum computing. Hybrid circuits integrate living wet and squishy neurons with silicon circuits, creating cyborg brains. We don’t know where this all will take us.
Computation and Evolution
Jay Forrester expressed the relationship between evolution and computation well in 1971. Not only did he imply that the human mind was a naturally evolved computer, he insisted that software and hardware inventions should take over much of the thinking of our minds. In an article entitled “Counterintuitive Behavior of Social Systems” he explained:
One often hears the sentiment from computermen that any philosophers worth their salt have long since transferred to computer science departments. Marvin Minsky, himself a noted philosopher of artificial intelligence, in a 1996 public lecture kicking off the Artificial Life V conference in Nara, Japan, succinctly articulated the unprecedented importance of computation, while candidly observing that philosophers with no knowledge of the field have little to offer in the way of guidance for the future:
The idea to carry away is that Computer Science is not about computers. It’s the first time that we’ve begun to have ways to describe the kinds of machinery that we are. We may not be exactly right because we started just 50 years ago, but I think ours is the right direction. It will help us understand what we are.
“Were they reborn into a modern university, Plato and Aristotle and Leibnitz would most suitably take up appointments in the department of computer science.” So reads the dust jacket of ANDROID EPISTEMOLOGY, a compendium of articles “exploring the space of possible machines and their capacities for knowledge, beliefs, attitudes, desires and action.” (p. xi) The last article poses the question, “Should thinking machines be granted rights?” Minsky answers with a dialog between two alien intelligences who debate whether humans should be granted rights.
Those interested in a flourishing philosophy of computation should read Von Neumann’s THE COMPUTER AND THE BRAIN, Paul Churchland’s THE ENGINE OF REASON – THE SEAT OF THE SOUL and Patricia Churchland’s THE COMPUTATIONAL BRAIN. Donald Knuth expressed the idea irreverently in WIRED: “Science is what we understand well enough to teach to a computer. Art is everything else.” Computational artists might disagree.
Science is the practice of building increasingly reliable, comprehensive and leveraged representations of the world. We are all in the habit of building representations in our minds and in our works - it is simply in our nature. In a Zen-like meditative state of nature we can attempt to let some go, but never all of them and rarely any of them for very long. They are embedded in different media, some internally in the sense that their medium is the mind itself, some externally, both in material manifestations and transitory ephemera. Representations are literally “re-presentations,” transforming one presentation of the world into another in a different medium. Each representation has its own subject content, its own unique qualities, potentialities and limitations, and each has its own audience in the sense of where it will be represented next. We often sort ourselves by the medium in which our representations will take place. Fine artists specialize in 2D and 3D visual, tactile and multimedia models, performance artists in scripting and enacting experiences on stage, film and on the screen. Literary types traditionally pen one-dimensional linear sequences of discursive narratives in ink on paper. Engineers build 2D drawings and 3D physical models, testing them while scaling up to industrial dimensions.
In the social sciences, we think, observe, and listen, but most of all we talk and write. We publish or we perish, composing articles, reports and books, again with ink on paper. As social scientists, we fall prey to the same limitations we observe in the objects of our studies. When we do turn to our computers chances are we use them to capture text for publication. We are slow to see the opportunities to avail ourselves of new technologies in innovative ways. Many times we are content to simply replace that ink on paper with colored phosphors on a screen. We largely publish text, mediated by computers, but don’t make full use of their capabilities.
Marx was speaking specifically of the French Revolution when the leaders of the new regime donned themselves in the garments of the classical period. Such holdovers in stylistic form, which are no longer attached to their original context or function, are called “skeuomorphs” by archaeologists. From the perspective gained by two centuries of distance, the theatrics of the French Revolution seem tragic but almost comical. Yet for security we still fall back on what we know. If the social sciences cannot free themselves from the yoke of natural language they can be excused, in part, by the knowledge that we all fall into this same trap. In the cutting-edge fields of evolutionary computation, people argue about the relative efficiencies of cultural versus genetic algorithms in solving problems. It was recently claimed that since cultural evolution finds solutions to new problems much faster than genetic evolution in the natural world, it therefore follows that a cultural algorithm should do its work more quickly than a genetic algorithm in a machine. The logic, although clear, is faulty. The reason culture works more quickly than biology in the natural world is that the medium in which the two pass on their solutions is qualitatively different. Consequently the rate at which generations of new ideas and organisms are produced is vastly different. Let’s say, for the sake of argument, that ideas regenerate, generously, at one per second and that children reproduce every 15 years. That would mean that culture inherently creates new solutions 473,040,000 times more quickly than biology. This reasoning is appropriate for the natural world, but in the artificial world inside a computer, operating in the medium of silicon, there is no need to handicap biology by instantiating the slowness of its genetic operation in the natural world. (Gessler 1998) It’s a new world inside silicon. Nothing is the same.
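The back-of-the-envelope figure above is easy to verify. A minimal check, using only the text’s own assumptions of one idea per second, one human generation per 15 years, and 365-day years:

```python
# A quick check of the generation-rate arithmetic above. Both rates are
# the text's assumptions: one new idea per second, one human generation
# per 15 years, with leap days ignored.
seconds_per_year = 365 * 24 * 3600          # 31,536,000
idea_generation_seconds = 1
human_generation_seconds = 15 * seconds_per_year

ratio = human_generation_seconds // idea_generation_seconds
print(ratio)  # 473040000, the figure cited in the text
```

The ratio is simply the two generation times expressed in a common unit; the argument in the text is that this ratio is a fact about media in the natural world, not a constraint that carries over into silicon.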
We cannot afford to ignore new things in the social sciences, especially when that newness began fifty years ago.
Computer modeling is yet another representational skill whose exercise would benefit our teaching and research. We are half a century behind in exploring the cutting-edge technologies developed in the 1950s. We can no longer say, “we can’t afford it.” Certainly, computists in those days had the benefits of big-science funding, but we have many times the power on our desktops now. We can no longer say, “Programming is too difficult.” Visual and rapid application development environments allow us to get directly to the task of translating the logic of our ideas into working applications. We no longer need to hone the skills of “seeing” patterns in large datasets of raw numbers. We can visualize them as colors and textures in multidimensional spaces, sonify them as rhythms and melodies or translate them to a variety of uncustomary sensory modes. These aids to understanding applications are important in building them, too. We can use them to see and hear the behavior of the code we write, enabling us to detect peculiarities in its operation in real time, opening new avenues for correcting our software.
Ed Fredkin, a physicist from MIT and researcher of cellular automata, argues that the world is a universal cellular automaton and that we, and the objects and processes that surround us, are all programs running on that underlying machine. (Wright) This is not meant to diminish our self-image by suggesting that we are “merely” programs like the simple agents in today’s simulations. Just as those simple agents have no understanding of the machines on which they run, we, as humans infinitely more complex than they, stand in the same relationship to a higher reality. The external world, the ultra-supercomputer, stands beyond our imagination. If this is so, then how are we to acquire any knowledge of its operation? The question parallels the same question traditionally asked by philosophers of science, but reframed in computational terms. What we perceive is a reality comprised of representations compounded one inside the other like a set of nesting Chinese dolls. We live inside a cluttered hallway full of mirrors, some obscured, some warped and rippled, some reflecting others, and by means of these mirrored images we try to see outside. Granted, mapping social interaction is like trying to map the architectural plan of a hall of mirrors, not objectively floating up above, but subjectively feet on the ground face to face with the mirrors themselves. Each of us tries to discern what is in the other’s mind, and each of us tries to manipulate how we are perceived. “I thought that she thought that he thought that you…,” we quickly exhaust our ability to compute the growing tree of possibilities. We learn from our mistakes, from the flaws and illusions we discover in the system.
It is unfortunate that we, as scientists, don’t write more speculative fiction, for it would open a window to our imaginations through which our readers might catch glimpses of our visions and our inspirations as well as our doubts and joys. There are two notable examples that echo Fredkin’s story, exploring the entailments of creating nested simulations.
Stanislaw Lem, a Polish writer, appears to be the first to describe and critique a research program in computational cultural evolution. In 1971, in a volume called A PERFECT VACUUM, he published a short story entitled “Non Serviam.” The story passes as a review of a nonexistent book by a nonexistent author named Professor Dobb. The Latin title, translated into English as “I will not serve,” refers to the conclusion of a theological discussion on the part of artificial agents. It was the answer to their question, “If some intelligent being created us, do we owe him any service?” The professor, after having created numerous artificial cultures in an academic institution, reflects on the course of his research and laments the loss of funding requiring him to pull the plug.
Lem has written more extensively about the evolution of such simulations in his large book SUMMA TECHNOLOGIAE (1982), a play on the SUMMA THEOLOGICA of Thomas Aquinas. It has never been translated into English, but there are editions in both Polish and German. Consider the artistic decisions one would have to make to translate this story into film.
A comparable project was undertaken by Alex Proyas, Lem Dobbs and David Goyer when they revised an earlier screenplay and released it as the film DARK CITY in 1998. Roger Ebert referred to it as the best film of the year. In this dark vision alien scientists experiment on human subjects, altering their memories and the architecture of their city. John Murdoch, one of the subjects of their experiments, wakes up to realize he’s a puppet on a string. He is wanted for a murder which he cannot remember. The police, also subjects of the experiments, are investigating the case. Inspector Frank Bumstead has been given the case to replace Eddie Walenski, who has apparently gone mad. Eddie says the city is a spiral maze, a motif earlier reflected in a rat maze supervised by Dr. Daniel Schreber, a human who has collaborated with the aliens in devising the experiments.
Bumstead asks what happened to Walenski to make him quit the case:
Dr. Daniel Schreber, who narrates the story, explains the city’s mysteries:
Another film focusing on simulation and the problem of knowing the nature of the real world is THE THIRTEENTH FLOOR, where there is a murder to be solved, a suspect and a detective. Josef Rusnak and Ravel Centeno-Rodriguez created a subtle story of mystery and intrigue wherein a simulation is created, ostensibly as a commercial venture, but misused by its creators for sex and entertainment. Detective Larry McBain questions Whitney, a senior researcher, about the murder of his boss, prying into the secret work they do in their lab:
Ashton, living in the 1937 simulation, has opened a letter intended for the president of the company in the real world, and he confronts another senior researcher, Douglas Hall, who is visiting Ashton’s world, with the alarming news:
Later we find that McBain was almost right. The consciousness of Ashton has managed to escape the simulated world of 1937 and now finds himself in Whitney’s body. Ashton has moved up one level to an unfathomable world, which seems to be run by gods. Again he confronts Hall, but this time on Hall’s turf.
Whereas DARK CITY overtly explains what is really going on, THE THIRTEENTH FLOOR leaves much of the subtlety of the story unexplained. The nuances of the acting in THE THIRTEENTH FLOOR can only be appreciated once you know who is really whom, and this won’t really hit you ‘til the end. Rusnak’s film is quite rich in ideas. Fiction such as this suggests challenges and solutions to those engaged in simulating artificial culture. We turn next to simulations themselves.
The best known of all these imaginative challenges is the Holodeck of Star Trek fame. The Holodeck is a totally immersive virtual environment, complete with sentient characters, who could be based on real-life personalities, and physical places rendered in every detail. In fact, the virtual nature of this artificial world was so real that all the characters and objects had physical presence. The Holodeck could be a dangerous place: an altercation with a computer-generated character could be harmful and a virtual bullet shot from a virtual handgun could kill. The Holodeck has become the crucial metaphor for a $45,000,000 project by the Department of the Army and Paramount Studios at the new Institute for Creative Technologies in Marina del Rey.
In all these fictions, as in the real-life practices on which they are based, simulations are used for problem solving, enlightenment, recreation, love and sex. The most compelling simulations are those that mimic evolution, learning and change, those models in which order emerges on its own from an unordered set of primitives described at a lower level of complexity underneath. These narratives, while fiction, provide us with strong computational challenges. Can we write the software code to do the things that they suggest? That is the open challenge.
Artificial culture (Gessler 1994, 1995) is a place in space situated amidst three axes. Each axis expresses a different mode of complexity: intellectual (individual and cognitive), social (populational and heterogeneous) and environmental (natural and artificial). Since there are limitations on what we can capture in a model or a simulation, represented by the volume circumscribed by these three axes, we must decide how much of our limited computational resources to devote to each of these three domains. Simulations stressing individual intellectual complexity naturally lead to the fields of artificial intelligence and expert systems. Those investigating large populations of diverse but cognitively impoverished individuals with a high degree of social complexity lead to artificial life. Those with rich environmental complexity having few thinking inhabitants lead to virtual environments and augmented realities. Artificial societies are situated somewhere in between, borrowing theories and methods from all three. Artificial culture, which has yet to be fully realized, extends the program of artificial societies by adding richer modes of thought, richer social interactions and more richly constructed human spaces. The philosophy of artificial culture has yet to be addressed, but it is not difficult to see beginnings in Margaret Boden’s THE PHILOSOPHY OF ARTIFICIAL LIFE.
Our models need to be concerned with both computational effort and cognitive load. These translate, roughly, into the relative difficulty of computing or reckoning any particular idea. In AN INTRODUCTION TO NATURAL COMPUTATION, Ballard refers to this as:
We cannot model the entire world from quark to quasar, either in our heads or in our simulations. What we can do, however, is to model it in parts. By optimizing these parts we can produce a compressed (lossy or lossless) model that may suffice. Emotion, once considered alien to machines, can now be envisioned and encoded in machines, as Picard demonstrates in AFFECTIVE COMPUTING. We might consider emotion to be highly compressed: countless positive experiences, the details of which escape our memory, are subsumed by an immediate and overwhelming sense of friendship, trust or love. Just as there are limits to how much we can put inside a computer and how quickly a computer can compute it, called computational effort, there are limits to how much we can remember in our heads and how quickly we can think a problem through, called cognitive load. These limits are what motivate us to turn to computation as a new way of knowing, but in the process we come to realize that these same limitations exist in the minds of the people that we study. Consequently, we have to think about cognitive load in the natural world of culture and how people came to deal with it, and we have to think about how to model strategies of dealing with cognitive load as computational effort inside our simulations.
The realization that “the world is the best representation of itself” is a step toward crumbling the constraints imposed by cognitive load. We can move some of these cognitive routines outside our heads and instantiate them in the world outside. Cognition can then be delegated to either artifacts or other people. As an artifact, a road sign may be placed to mark key turning points in a route to work each day. In a wilderness with no signs posted it is only necessary to remember key landmarks and to rely on the world, as it passes by, to fill in the missing gaps. Can you remember all the details of a daily drive: the landmarks and the signs? At what level of detail does your memory fail? When you drive the route again how much more than you recall do you recognize along the way? How often, when your mind is elsewhere, do you drive on autopilot and miss a turn because a landmark that you passed led you down an old familiar route rather than the one you should have taken? Even in smaller quarters, the arrangement of living and working spaces provides a shortcut for knowing what is where and what is to be done next. In addition to cognitive artifacts, cognition is clearly distributed among other individuals. Each performs a specialized cognitive task in a specialized cognitively enriched space. Edwin Hutchins’s book, COGNITION IN THE WILD, is a study of spatially distributed thinking, something that needs more investigation in the social sciences.
The notion of distributed cognition may be applied not only to mind distributed across other individuals and other objects, but also to the nature of the individual mind itself. In his SOCIETY OF MIND, Marvin Minsky discusses a number of possible types of cognitive agents, all resident at one time in the individual. The mind, he suggests, is made up of a variety of modular agents, each one thinking about the task before it, each one negotiating with its neighbors in a manner analogous to the way members of a society interact with one another. The agents vie for influence by forming alliances and voting. Perhaps “society of mind” is not simply a metaphor. Perhaps it is more than an analogy, a similarity due to factors other than a common origin. Perhaps it has a homologous relationship, being similar in origin to the social life that structured it. Insofar as each of us has theories of mind, models of how other people think, it would not be surprising to suppose that those models might be related to the agents that Minsky suggests. After all, mind did coevolve with culture; certain patterns of organization raised the fitness levels of certain patterns of thought, and vice versa. Evolutionary psychology carries the concept further, claiming that the modules of mind are themselves highly specialized computers running programs that coevolved with one another and with the conditions of our adaptation as hunter-gatherers during the Pleistocene epoch, handling tasks such as cooperation and cheater detection, mate choice and spatial orientation. This rich array of cognitive specializations can be likened to a computer program with millions of lines of code and hundreds or thousands of functionally specialized subroutines. (Barkow 39)
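A toy sketch can make the voting-and-alliances mechanism concrete. This is an illustrative model, not Minsky’s: the module names, the fixed influence boost, and winner-take-all voting are all assumptions introduced here:

```python
from collections import Counter

# A toy "society of mind": each modular agent votes for an action,
# weighted by its influence; agents backing the winning action gain
# influence, a crude stand-in for alliance formation. All names and
# numbers here are illustrative assumptions, not Minsky's model.

class Agent:
    def __init__(self, name, preferred, influence=1.0):
        self.name = name
        self.preferred = preferred    # the action this module argues for
        self.influence = influence

def decide(agents):
    votes = Counter()
    for a in agents:
        votes[a.preferred] += a.influence
    winner = votes.most_common(1)[0][0]
    for a in agents:                  # the winning coalition gains influence
        if a.preferred == winner:
            a.influence *= 1.1
    return winner

modules = [Agent("hunger", "eat"), Agent("curiosity", "explore"),
           Agent("fatigue", "rest"), Agent("appetite", "eat")]
print(decide(modules))  # "eat": the two-module coalition outvotes the rest
```

Run repeatedly, the feedback between winning and influence lets coalitions entrench themselves, a minimal echo of the negotiation Minsky describes.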
My hunger for advances in computational modeling stems from an annoying absence of intradisciplinary unifying theory in anthropology, archaeology and geography, theory that might make each a cohesive discipline, and also from a deficiency of interdisciplinary theory, theory that might unite the three. I hunger for an overarching transdisciplinary theory that might bind the “soft” social with the “hard” physical sciences in a common framework. Systems theory, inspired by the earlier days of computation, aspired to do just that. James Miller compiled a massive volume on LIVING SYSTEMS in 1978, detailing hypothetical commonalities in structure and process across seven levels of emergent organization, from the cell, organ, organism, group, organization and society through to the supranational system. It was an ambitious undertaking then, one in need of revitalization today.
Building on precedents set in artificial life and artificial societies, artificial culture should add intellectual, social and environmental complexities to the world that it creates. It should enrich the societies of minds of its agents with other than Western ethnocentric ways of thought, drawing its inspiration from minds of different cultures across the world and back into time. It should add a larger repertoire of social interactions based on the exchange of goods and information, information about goods and information, and information about the quality of the foregoing. It should add agents who manufacture artifacts and architecture to create a culturally transformed natural environment. Artificial culture should make it possible to navigate nested representations and conflicting logics in differing contexts, realizing that “an enemy of my enemy is not necessarily my friend” and that “loving, making love and being in love” are not the same whether experienced first hand, read in literature or viewed on the screen. Artificial culture should be ubiquitously multi-agent, scaled from the bottom up in an emergent, evolutionary and causal sense. It should be the synthetic complement to the analytic method. Once a system is analyzed and decomposed into its constituents and those parts understood in isolation, they should be reconstructed as a larger whole, a synthesis. Top-down causation is not denied; rather, it needs to be explained by the bottom-up climb that took it to its top-down heights. Finally, artificial culture should be minimal. It should leverage the most explanatory power from the least complex formulation. What is, and what is not, artificial culture is based on an intuitive judgment of what constitutes a satisfying and holistic mix of primitives, the proper repertoire of raw materials from which to assemble novel interactions.
John Holland’s ADAPTATION IN NATURAL AND ARTIFICIAL SYSTEMS is a classic in evolutionary modeling with broad-spectrum applications in the social as well as biological sciences. In it, Holland introduces Echo as a gedanken, or thought, experiment designed to give qualitative insight into the evolution of a multiagent spatial system rather than to yield precise predictions for any specific situation. Agents in Echo wander across a geographic world encountering other agents in search of resources necessary to survive. A pair of agents is selected from a local group and offered the opportunity to engage in combat, trade and mating, in that order. Each acquires its fill of resources from its locale. Each agent is then charged a subsistence fee and deleted if it cannot pay. The agent’s reservoir of collected resources is then tested; if it has enough it replicates. An agent missing elements necessary for reproduction then migrates to a site richer in what it needs. The building blocks of Echo are quite minimal indeed, yet they provide a conceptual foundation for more lifelike models of cultural interaction (Holland, 1995, pp. 194-198). Holland revisits his Echo model in the context of EMERGENCE – FROM CHAOS TO ORDER (1998), along with his personal thoughts on how an interdisciplinary science should be done.
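The gather-pay-replicate bookkeeping described above can be sketched in a few lines of code. The Python below is not Holland’s Echo, only a toy illustration of its resource-reservoir cycle; the resource names, subsistence fee and replication thresholds are invented for the example, and Echo’s tag-matching combat, trade and mating machinery is omitted entirely.

```python
import random

# resource types, fee and thresholds are assumptions for this sketch
RESOURCES = "abcd"
SUBSISTENCE = {"a": 1}          # per-step fee; death if unpayable
REPRO_NEED = {"a": 3, "b": 2}   # reservoir needed to replicate

class Agent:
    def __init__(self):
        self.reservoir = {r: 0 for r in RESOURCES}

    def gather(self, site):
        # acquire its fill of resources from the locale
        for r, amount in site.items():
            self.reservoir[r] += amount

    def pay_subsistence(self):
        # charged a subsistence fee; return False (deletion) if it cannot pay
        if any(self.reservoir[r] < fee for r, fee in SUBSISTENCE.items()):
            return False
        for r, fee in SUBSISTENCE.items():
            self.reservoir[r] -= fee
        return True

    def can_replicate(self):
        return all(self.reservoir[r] >= n for r, n in REPRO_NEED.items())

agents = [Agent() for _ in range(10)]
for _ in range(20):
    # each step the locale offers a random bundle of resources
    site = {r: random.randint(0, 2) for r in RESOURCES}
    next_generation = []
    for agent in agents:
        agent.gather(site)
        if not agent.pay_subsistence():
            continue                      # deleted: could not pay
        next_generation.append(agent)
        if agent.can_replicate():
            for r, n in REPRO_NEED.items():
                agent.reservoir[r] -= n   # reproduction spends the reservoir
            next_generation.append(Agent())
    agents = next_generation
```

Even this caricature exhibits Echo’s essential dynamic: populations wax when local resources cover the subsistence fee with a surplus, and wane when they do not.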
ARTIFICIAL SOCIETIES – GROWING SOCIAL SCIENCE FROM THE BOTTOM UP, by Josh Epstein and Rob Axtell, is a milestone publication, probably the closest to that ideal generative mix in scope and scale that is likely to inspire models of evolving culture (Gessler 1997).
The authors target economic theory, but their work has broader implications for the other social sciences and for anthropology, which rely less on formal language models of interaction. The world they offer us is Sugarscape, their CompuTerrarium, a caricature of proto-history inhabited by agents who can extract resources, trade by negotiating the prices of exchange, cooperate and quarrel, engage in sex, do combat, kill, rob and catch diseases. Each may have a different vision, metabolism, cultural membership and character. Between each pair of agents are measures of affinity and friendship and memories of trust. Although it is easy to dismiss all this as a cartoon, it defies the discursive social scientist to create something better. Their take-home message is that the canonical ideas of economics do not hold up under the more realistic assumptions of artificial societies: heterogeneity in agents’ thoughts and actions (not normative aggregate populations), spatial geography (not simply point kinetics) and the bottom-up emergence of social order (not the top-down imposition of ideals).
Robert Axelrod, in his book THE COMPLEXITY OF COOPERATION, presents “a model” of culture based upon three principles: agent-based interaction, no central authority, and adaptive rather than rational agents, to show how global polarization can be generated from the convergence of local social influences such as beliefs, attitudes and behaviors. These three constituents of culture are represented in the simulation by five abstract features, each of which takes on one of ten traits. These bare-bones cultures inhabit the cells of a small grid. One is picked at random along with a neighbor, with a chance of interacting in proportion to their similarity. If there is interaction, the picked cell takes on one trait of the neighbor and thus increases its chances of sharing traits in any future interchange. The similarities and differences among the cells are dynamically mapped on the spatial grid as each cultural cell is repeatedly interrogated. Stable regions increase with the number of traits per feature and decrease with the number of features added, the range of interaction and the size of the overall territory. Although Axelrod ignores the epigenetic interactions among cultural traits and features, he introduces these and other interactions with the environment as extensions, challenges for future work.
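Axelrod’s procedure is simple enough to sketch directly. The Python below is a minimal reading of the rules as described here: five features, ten traits, a random site and a random neighbor, interaction with probability proportional to similarity, and the copying of one differing trait. The grid size, the four-cell neighborhood and the number of updates are arbitrary choices for the sketch, not Axelrod’s published parameters.

```python
import random

N_FEATURES = 5   # abstract cultural features
N_TRAITS = 10    # possible traits per feature
SIZE = 10        # side length of the small grid (assumed)

# each cell holds a culture: one trait value per feature
grid = [[[random.randrange(N_TRAITS) for _ in range(N_FEATURES)]
         for _ in range(SIZE)] for _ in range(SIZE)]

def neighbors(x, y):
    """Four adjacent cells, respecting the grid boundary."""
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < SIZE and 0 <= ny < SIZE:
            yield nx, ny

def step():
    # pick a random cell and one of its neighbors
    x, y = random.randrange(SIZE), random.randrange(SIZE)
    nx, ny = random.choice(list(neighbors(x, y)))
    a, b = grid[x][y], grid[nx][ny]
    shared = sum(1 for i in range(N_FEATURES) if a[i] == b[i])
    # interact with probability proportional to cultural similarity
    if 0 < shared < N_FEATURES and random.random() < shared / N_FEATURES:
        # the picked cell adopts one trait it does not yet share
        i = random.choice([i for i in range(N_FEATURES) if a[i] != b[i]])
        a[i] = b[i]

for _ in range(100_000):
    step()
```

Run long enough, the grid freezes into homogeneous regions separated by boundaries across which no features are shared, which is precisely the global polarization from local convergence that Axelrod reports.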
Steven Grand was among the first to turn an evolving community of artificial life creatures into a viable and entertaining product. Amidst all the presentations at the first conference on Autonomous Agents, which included applications in engineering, commerce and industry and a demonstration of prototypes for the Sony Aibo® robotic dog, Grand’s CREATURES won the acclaim of the post-conference evaluation panel as the best and most memorable work. As the creator of this world, Grand felt obliged to create an entire ecosystem for his bright-eyed Norns and other critters to flourish in. There is a minimum specification for nurturing the whole of intelligence or life, which he jokingly refers to as “the whole iguana.” A creature must have a brain of neurons whose connections can be reinforced for remembering important things and reused for forgetting unimportant things to make room for the new. Each brain must learn on its own and focus on the most situationally relevant objects in its environment. Each must be tightly linked with its body, to goals and emotional qualities like pain, hunger, sleepiness, exhaustion, boredom and sexual attraction. Survival in its environment is paramount, so it must learn to eat healthful foods and avoid toxic substances. It must learn to avoid creatures carrying disease and must have the benefit of an immune system to combat infection. It must have the ability to learn to speak, and if not taught English must invent a language of its own. On reaching sexual maturity it must be able to court, conceive and have kids. Grand achieved all that through what he calls “God’s Lego® set” of feedback loops.
Lego®, as a result of collaborations with the Massachusetts Institute of Technology, has made even more components available to anyone who would like to create artificial life, societies or culture. In addition to the usual complement of plastic parts, motors, lights and switches, they now include a yellow fist-size brick. The brick is a completely programmable microcomputer, which interfaces directly with the outside world through three sensors, three actuators and an infrared communications link. It is called the RCX microcontroller brick. The device is sold as the
Mindstorms™ Robotics Invention System. The bricks are programmable from a PC with visual puzzle pieces standing in for program statements, which can be dragged and dropped to write an application. Once programmed, the bricks can reprogram one another or collect and exchange information. This makes experiments in robot cooperation and competition possible. It is also feasible to use them to log scientific data or to mediate and track exchanges, as wearable computers, among real human actors taking part in a live simulation. Resources for researchers are growing, including new operating systems and programming languages like Not-Quite-C (NQC). There are already several books available, including definitive and advanced guides (Baum).
The truism “the world is the best representation of itself” inspired Rodney Brooks to create a subsumption architecture for his robots, a shallow hierarchy of agents with externalized cognition, and to write two provocative articles intriguingly entitled “Intelligence Without Reason” and “Intelligence Without Representation,” reprinted in CAMBRIAN INTELLIGENCE – AN EARLY HISTORY OF THE NEW AI. Both were challenges to the received wisdom that intellect can only be instilled from the top down. Brooks also coined the slogan “fast, cheap and out of control” as a hook for his proposed robotic invasion of the planets. He envisioned a hierarchical community of robots, hundreds of cheap explorers at the bottom of the heap, specialists and managers of different ranks in the middle, and a few expensive overseers at the top, as an exploration team for the planet Mars. Darin Morgan’s WAR OF THE COPROPHAGES, an episode of the X-Files, was a light-hearted satire of Brooks’ scenario. The phrase FAST, CHEAP AND OUT OF CONTROL was also used by Errol Morris as the title for his bizarre feature documentary of Brooks and three other men, all specialists in creating uncustomary kinds of life.
Social robotics is a specialty that shares elements with artificial culture. Echoing the limits of cognitive load and computational effort, the caveat “keep it simple, stupid” (KISS) is an essential caution in designing both multiagent simulations and autonomous robot communities. Robots, surviving in the wild where computing power is at a premium, must confront new situations and cooperate with others of their kind in a hostile world of rain, dirt, cold, dwindling energy and constrained time. Roboticists are forced to deal with the same scale and class of material, organizational and behavioral constraints as social scientists building models of communities. Moreover, the physicality of robots forces their makers to deal with material embodiment, situated action, and materially and socially distributed cognition, all aspects of critical interest to social science.
The idea of evolving hardware as well as software to enhance the adaptability of a robotic community has been promoted by Sarita Thakoor in her several conferences on biomorphic explorers held at NASA’s Jet Propulsion Laboratory in Pasadena. Here researchers came to share their insights into building small reconfigurable robots. Small flying, walking, creeping, web-building and burrowing robots were planned on the scale of insects. Again a major theme was the cooperative behavior and independence of the community required to survive on other planets, in ocean depths, in Antarctic cold and near Chernobyl radiation. Reconfigurable robotics is not the only instance of evolving hardware. Field programmable gate arrays (FPGAs) were aboard Pathfinder and Sojourner on their mission to Mars. The hardware logic circuit elements on these chips can be evolved as easily as software. We shouldn’t be surprised to see computers on the market that will first look at a problem, decide what community of microprocessor types they should become in order to compute its solution, and then transform themselves accordingly. EVOLUTIONARY ROBOTICS – THE BIOLOGY, INTELLIGENCE AND TECHNOLOGY OF SELF-ORGANIZING MACHINES takes us on a tour of this field, barely a decade old (Nolfi).
Robot sociology, or more appropriately robot culture, since robotic minds are much more alien and other than the Western minds that are the object of sociology, shares many of the same aims as artificial culture. This convergence is evidenced by the call for papers of a recent conference on EPIGENETIC ROBOTICS.
Creative evolution relies on the novelty of epigenetic interactions, interactions which themselves rely on a varied mix of primitive building blocks. This variety may be richly enhanced by combining elements of the real world with software simulations. In "The Evolved Radio and its Implications for Modeling The Evolution of Novel Sensors," Bird champions epistemically autonomous devices, citing examples from his work evolving hardware FPGA sensors. He argues that a software simulation can never fully capture the infinite real world of possibilities. What he claims is true for evolving sensors and devices is equally true for evolving artificial cultures.
Just as epistemically autonomous devices stand midway between software and natural evolution, autonomous robotic cultures stand between artificial and natural cultures. There are other middle grounds that include real living human players in experimental evolutionary simulations, optimizing the richness of both real and synthetic worlds. Non-human neurons have been recruited as the brains of several robots (Ayers), and there have been significant advances in growing human neurons on microchips, the neuron-silicon interface. Cyborg-mediated culture seems far off, and the "soul-catcher chip" remains apocryphal lore.
Warfare is a cultural activity. It is arguably unnecessary as we enter the age of “Neocortical Warfare – The Acme of Skill” (Szafranski), but it is still a unique facet of human interaction where “getting it right, and quickly” is at a premium: “seconds to decide and no second chance,” to quote the tag-line on an inside front cover advertisement in MILITARY SIMULATION AND TRAINING (MS&T), a leading journal dedicated to immersive decision-rendering-under-duress computer simulation applications for the defense community.
(STRICOM 1995, p. 93)
STRICOM, the US Army’s Simulation, Training and Instrumentation Command, under the accompanying logo of a soldier at the center of three concentric circles and the quizzical motto “All but war is simulation,” considers three components to be central to creating a synthetic training environment:
This integration has yet to be achieved in social science for even peaceful facets of cultural activity, yet its relevance to our work is inescapable. The true test of any model is its ability to unite theory (purely computational experiments), with human players engaged in the entailments of that theory (automated human-artificial interactions), with human actors grounded firmly in reality.
In a VIP brief on STOW, the Army’s Synthetic Theater of War, the program manager, Rae Dehncke, characterized a shift in the simulation, theory and practice of combat over the last half century as follows (STOW 1998, slide 3):
This shift foregrounds entity over aggregate levels of simulation, analysis and command. That is to say that it focuses on tactical and strategic planning from the level of the individual agent on up, rather than on the aggregate level of the squad, platoon or company on up. Again, this shift tracks an analogous shift in the new sciences of multiagent spatial simulation.
The Institute for Creative Technologies (ICT) inaugurates a new collaboration between the US Department of Defense and the entertainment industry. In a new twist on the concept of the military-industrial complex, a military-entertainment complex has formed between the Army and Paramount Studios, producers of the Star Trek series. The Holodeck, in which anything imagined could be brought to life in simulation for entertainment, science and sex, is much more than a metaphorical goal of this joint project. Chuck Chrisafulli summed up the new collaboration:
With that goal in mind, the Creative Technologies facility was launched last August with a $45 million grant from the U.S. Army. There, specialists from academic and entertainment backgrounds pursue open-ended research that will eventually develop military, industrial and entertainment applications. (Chrisafulli)
One of the showpieces at the ICT is the Mission Rehearsal Exercise, or MRE (ICT www). A lieutenant trainee enters an approximation of the Holodeck, in this instance a small theater. His mission is to lend support to a team securing an armory with an unruly crowd gathering outside. He is interrupted in this mission when he encounters an accident between a Humvee under his command and a passenger vehicle. A young boy, a passenger in the car, lies unconscious in the street; his mother kneels beside him on the verge of panic. The lieutenant needs to interview the sergeant on the scene to decide what action he should take. Through a mixture of technologies, including projected virtual reality, speech synthesis and recognition, scripted plots and dialogs, emotional algorithms, plan generation, artificial intelligence, and branching story lines, the narrative continues through to a range of outcomes for the overall mission and the injured child. The automated actors perceive and interact with one another: squad leaders carry out their orders while crowds gather and the media arrive. Peacetime scenarios are also in development, including one on the strategy and tactics of natural disaster recovery. Eventually, the intention is to merge the theater with a suite of similar technologies such as actor position and gesture recognition, physical vehicle simulators, and flat-screen projections in a theatrical set. The bottom line is to approximate the Holodeck with commercial-off-the-shelf (COTS) products available from the mass market, and to return some of the products of their research to the market as general-purpose human negotiation games.
In return for the inspiration the ICT gained from the world of scripted and interactive fiction, it plans to return some of the results of its research, initially in console computer games. It has contracted to develop two consumer market games: C-Force (Future Combat Systems) and CS-XII (Quicksilver Software). The games are expected to teach any player, military or civilian, how to leverage and negotiate human plans, resources and information, skills that should be helpful in any walk of life.
The coevolution of mind, body and behavior has convincingly been demonstrated by Karl Sims with his virtual creatures evolved on a Thinking Machines massively parallel Connection Machine 2. The worlds he created began with an artificial physics, a primordial soup of elemental shapes, morphologies, articulations and sensors, neurons and effectors, and the mechanisms of evolution through natural selection. Creatures were evolved, selected for different behaviors by giving higher fitness values to those who could swim, walk, jump and follow fastest. Other species of creatures were evolved to take control of a cube during a hockey-like face off. Different lineages developed different strategies, and the best from each lineage was pitted against the best from another species in a final competition, which was rendered as a silent moving visualization. The resultant interactions are disarming and surprising. The creatures each have personalities so endearing that audiences routinely regard them as they would cute and mischievous pets. As compelling as this work is in exhibiting the power of evolution, it is equally troubling in trying to understand the cognitive structures. Speaking of the brain of an evolved creature that swims with reciprocally counter-rotating paddles, he writes:
David Fogel presents an absorbing and enchanting tale of a personal quest for the deeper meaning of AI: the discovery of how intelligence itself arises. Fogel seizes the challenge by capturing the evolutionary process and shaping it to breed a checkers expert from an artificial neural net despite many obstacles intentionally thrown its way. His book BLONDIE24 is an inspiring, clear and witty narrative of the growth of a synthetic sentience inside a desktop PC. Fogel comes to the same conclusion as Sims. He cannot tell you exactly how it works. It learned its expertise entirely by playing checkers.
John Koza has shown the same enigmatic phenomenon in programs evolved to manage human tasks through genetic programming (GP). Although the program solutions are often elegant, the programmatic steps through which the programs evolved to produce them often defy deconstruction and human understanding, refusing to be reverse engineered. Koza also took on the age-old argument that computers can only do what they are told to do, and can thus never do anything imaginative or original. If computers are told to evolve solutions to problems, they can be innovative and competitive with human-produced results. As evidence for his claim that GP can accomplish this, he cites ten examples of success using only one of eight criteria. That single measure is, perhaps, the most interesting of all, recruiting the expertise of the U.S. Patent Office as the final arbiter:
What are these mind-building, mind-boggling algorithms that defy decomposition? Space does not allow a thorough explanation here, other than to say that they all invoke evolution, an extremely powerful creative and imaginative process. In an edited volume dedicated to the pioneers of evolutionary computation, Fogel presents a balanced historical perspective on its origins in the 1950s through a series of landmark papers in the field (Fogel 1998). A definitive, ten-pound book on the subject, the HANDBOOK OF EVOLUTIONARY COMPUTATION (Bäck), provides extensive background on the methods, theories, applications, prospects and personalities of the major schools of practice in this field, although a smaller introduction may be more accessible (Fogel 1995). This is a quickly changing field, and the reader is encouraged to search the Web and amazon.com on the following keyword phrases to turn up a host of books, conferences, proceedings and papers on these subjects.
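For readers who have never seen one of these algorithms, the core loop is remarkably small. The sketch below is a bare-bones (1+1) evolution strategy, one of the simplest members of the family surveyed in these volumes, shown here only to fix ideas; the objective function, mutation step size and iteration count are arbitrary choices for the illustration.

```python
import random

def fitness(x):
    # arbitrary toy objective with a single peak at x = 3
    return -(x - 3.0) ** 2

# (1+1) evolution strategy: mutate the parent, keep whichever of
# parent and offspring scores better, and repeat
parent = random.uniform(-10.0, 10.0)
for _ in range(2000):
    child = parent + random.gauss(0.0, 0.5)   # Gaussian mutation
    if fitness(child) >= fitness(parent):
        parent = child
```

Variation plus selection is the whole trick; the creative and inscrutable results described above come from scaling this loop up to populations of program structures, neural nets and circuit descriptions rather than a single number.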
A Lego® Cultural Invention System Challenge
The components of artificial culture, with a nod to Mindstorms™, are easily within reach of everybody with a desktop box and a research inclination. They may be taken from discursive theories already in the literature, advanced applications like the ones I have reviewed, a software platform for developing ideas and access to some of the more common programming functions. I have mentioned several models embracing different facets of cultural evolution from the academic, industrial, military and entertainment sectors as proofs that these all exist in separation. The challenge for understanding HUMAN COMPLEX SYSTEMS is to unite aspects of these with discursive social theories in an appropriate mix. A new peer-reviewed Web publication, which encourages collaborative transdisciplinary work, is the JOURNAL OF ARTIFICIAL SOCIETIES AND SOCIAL SIMULATION. It is an excellent resource for new ideas and models that cross distinct domains in social science.
Among anthropologists, cultural evolution finds theorists among authors interested in systems integrating material embodiment in technology, situatedness in social and physical environments and epigenetic processes of interaction among subsystems. Owen Lovejoy, in “The Origin of Man,” argues that the prime requisite for culture arose from man’s unique sexual and reproductive behavior. The other interlocking threads in that scenario are an expanding neocortex, bipedality, a particular dentition and material culture. Marvin Harris has repeatedly championed the cause of building a unified scientific theory. His influential RISE OF ANTHROPOLOGICAL THEORY is an account which optimistically chronicles an anticipated renaissance in scientific ways of knowing culture. This was followed by his detailed strategy of CULTURAL MATERIALISM, subtitled THE STRUGGLE FOR A SCIENCE OF CULTURE. Science has not enjoyed the place in anthropology that was expected. Postmodernism’s influence was to privilege subjective interpretation over the quest for any form of objectivity. Thirteen years after the publication of his book on the “rise” of theory and “after three decades of intellectual warfare down among the anthropologists” (Harris 1999, p. 13), he published a new supplemental volume, tempted to call it “The Fall of Anthropological Theory” but tempered to THEORIES OF CULTURE IN POSTMODERN TIMES. Harris, though a cultural materialist among cultural anthropologists, is not sufficiently materialist to offer detailed insights on material culture, the artifacts and architectures that provide the only evidence of early cultural evolution through archaeology. In archaeology, Lewis Binford has been equally influential in promoting the “processual explanatory imperative,” which in the study of culture was the core of “processual archaeology”:
CULTURE AND THE EVOLUTIONARY PROCESS (Boyd) and THE EVOLUTION OF HUMAN SOCIETIES (Johnson) present much more detailed evolutionary scenarios. In much of science-oriented anthropology there are allusions to the processes of systems theory, emergence and multiagent spatial modeling. Embodiment, situatedness and epigenetic processes are implied, but not in those terms. What we need now are re-presentations of these theories as computational objects and their evaluation against empirical data.
A software platform is as essential to building models as a word-processor is to writing papers. I have avoided many preprogrammed simulation packages because, despite their initial ease of use, they lock their user into circumscribed sets of simulations. Dissatisfied, she will eventually want to break free of these limitations and revert to a mainstream programming language in which to exercise the full creativity of her thoughts. Why invest that time in partial modeling solutions when with little added effort she can tap a world-class suite of functionality to include in her simulation? I have settled on Borland C++ Builder as that platform, a visual integrated development environment for a world-class language, though I keep an eye on Java and C#. Visual programming takes the drudgery out of writing code. Buttons, bars and boxes are dragged and dropped easily onto the WYSIWYG editing window, allowing the programmer to ignore the arcane routines underlying the Windows operating system and concentrate directly on the logic of the simulation. With hands-on instruction and only a small subset of C++ and a few graphical commands, the uninitiated can build an application in a week. The graphics help to visualize the behavior of the simulation as it is developed. A little C++ prepares modelers to go on to marshal routines from other sources: algorithms, code snippets and examples published everywhere on paper and the Web.
To sum up, complexity is not easily captured by discursive, written or mathematical formalisms. Computer models of artificial culture have the intrinsic property of being truer representations of elemental cultural processes and objects than discursive representations (in spoken and written natural language). Moreover, unlike natural language, computer simulations generate unambiguous entailments. Consequently, one solution is to build the culture from the bottom-up in models and to explore their hyperspace of possibilities through simulations. It is my contention that artificial culture should be conceived synoptically as a complex of materially embodied and spatio-temporally situated agents in epigenetic adaptation to the constraints of their social and physical environments. Artificial culture so conceived should enable us to explain not just:
The social sciences remain accused of physics envy, the longing for an orderly cycle of theory building, hypothesis revision and repeated laboratory experiment. Experiments in academic social science are rare, ranging from role-playing live human simulations to the design of new policies of social change. All are fraught with great financial costs, procedural problems and ethical concerns, not to mention the consequences stemming from the fact that the human subjects are aware of the experiment, have their own agendas and have something to say about the outcome. Computer models allow us to avoid some of these risks: to experiment with policy in simulation until we get it right, to look inside the minds of all the actors in the game to see what would be hidden in real life, and to retrodict, predict into the past, to study counterfactuals without relying on an historian to have collected data for an historical experiment he could not even have imagined. Until the time comes when we require human subjects review committees to sit in judgment over our computer worlds, we are freed from many ethical constraints.
We have no comprehensive single theory of computation:
Discursive theory in the social sciences may well be subsumed by models and simulations:
Danny Hillis, inventor of the massively parallel Connection Machine 5 prominently featured in Jurassic Park, tells the enchanting tale of the coming of Thinking Machines in THE PATTERN IN THE STONE – THE SIMPLE IDEAS THAT MAKE COMPUTERS WORK (1998). We don’t know, he says, where all this will lead.
To my mind the lesson to be taken home is clear: We should rethink theory, description and explanation in the light of simulation. We should rethink what it means to do research. We must become more fluent in the languages of representation, fluent in thinking, reasoning and communicating through programming languages.
Intellectually and physically, we stand on the horizon of the age of the posthuman (Hayles), a new human subjectivity engendered by two new converging ways of knowing: the evolutionary and the computational. We know that the thinking parts of us, and also other thinking things, have evolved through natural selection as adaptations to the world they live in. We know how to make things think. We know the creative forces that created us, and how to harness them in our machines. The future possibilities are limitless, wide open.
Bäck, Thomas, David B. Fogel and Zbigniew Michalewicz (editors).
Ballard, Dana H.
Barkow, Jerome, Leda Cosmides and John Tooby.
Baum, Dave, Michael Gasperi, Ralph Hempel and Luis Villa.
Binford, Lewis R.
Bird, Jon and Paul Layzell.
Boyd, Robert and Peter J. Richerson.
Casti, John L.
Fogel, David B.
M, Clark Glymour and Patrick J. Hayes (editors).
Hillis, W. Daniel.
ICT – Institute for Creative Technologies.
Johnson, Allen W. and Timothy Earle.
JOURNAL OF ARTIFICIAL SOCIETIES AND SOCIAL SIMULATION.
Koza, John R., Forrest H. Bennett III, David Andre and Martin A. Keane (editors).
Miller, James Grier.
MILITARY SIMULATION & TRAINING – MS&T – THE INTERNATIONAL DEFENSE TRAINING
Nolfi, Stefano and Dario Floreano.
and Lem Dobbs and David S. Goyer (screenwriters).
Rusnak, Josef and Ravel Centeno-Rodriguez (screenwriters).
Smith, Brian Cantwell.
STOW – U.S. ARMY SYNTHETIC THEATER OF WAR.
STRICOM – U.S. ARMY SIMULATION, TRAINING AND INSTRUMENTATION COMMAND.
Taylor, Charles and David Jefferson.
Thakoor, Sarita and Adrian Stoica.
Paul M. Churchland, Patricia Smith Churchland and Klara Von Neumann.