In EVOLUTION IN THE COMPUTER AGE - Proceedings of the Center for the Study of Evolution and the Origin of Life, edited by David B. and Gary B. Fogel. Jones and Bartlett Publishers, Sudbury, Massachusetts (2002).

Computer Models of Cultural Evolution

Nicholas Gessler

Ideas, and other atomic particles of human culture, often seem to have a life of their own -- organization, mutation, reproduction, spreading, and dying. In spite of several bold attempts to construct theories of cultural evolution, an adequate theory remains elusive. The financial incentive to understand any patterns governing fads and fashion is enormous, and because cultural evolution has contributed so much to the uniqueness of human nature, the scientific motivation is equally great. (Taylor & Jefferson 1994, 8.)

There is a new epistemology, a new way of investigating social science and culture.  It is founded on philosophical theories of knowledge that embrace computation, culture and evolution.  Computer models of cultural evolution are no longer out of reach.  The price of a new computer has dropped; the power has increased; the difficulty of writing software has eased.  A new world of understanding waits for those who try to translate theory, represented in written texts, to ideas, expressed in working code as computer applications running on desktop machines.  The process of creating simulations forces clarity on the objects of attention and processes of interaction.  Once constructed, a cultural model written in programming language can be turned on and its entailments explored.  Vague ideas previously confined to natural language may be objectified, vetted and put through rigorous test runs.  As a more sophisticated analog to thought-experiments, narratives of different “what if” scenarios may be played out, examining a full range of given input variables and the output behaviors that result.  From these WOULD-BE WORLDS (Casti) the investigator can delineate an entire envelope of possibilities.  Computer models are objective in the sense that they are accessible, open to public scrutiny and amendment.  In a deterministic world with physical constraints on predictability, they are insightful in the general sense of setting limits to what can and cannot be.


(Henrickson, chart 1, page 2.)

Within anthropology, we have a small and dedicated group of people on one side of the digital divide creating computer simulations, models that embody new ways of thinking and knowing about culture.  On the other side we have those who study us, the modelers, but from the pre-computational perspective of turn-of-the-century ethnography.  They see our practices and compare them to pre-industrial religious beliefs.  Folks in the middle conflate these two approaches and dismiss the struggle with amusement, choosing not to engage the deep and fundamental issues widening the gap between the two.  Each group stands on an opposing bank of a philosophical divide separating profoundly incompatible epistemologies.  The situation is not unique to anthropology; the same stances are also taken, to varying degrees, in the other social sciences.  What is at stake is nothing less than the need to re-evaluate what it means to do social research in the sciences and the humanities.  The argument comes down, quite literally, to re-examining the meaning-generating process of “re-” when we ask the question, “How should we re-cognize and re-present cultural knowledge?”  The emphasis is on the “re” of how we should recognize and represent our cultural experience.  The struggle is by no means new, but it has been continually redefined for us ever since the 1950s, since the discovery of computation and the advent of digital computers.  This chapter is intended to engage the reader with some new ideas as we begin to rethink and reinvent the social sciences.

Let’s look at where we stand with respect to the other sciences.  If rising trends in new ideas are seen as harbingers of progress, then the non-social sciences have succeeded admirably in applying chaos and complexity theories and computer simulation in their work.  Leslie Henrickson (page 2) charts the occurrence of keywords related to these new sciences of complexity in 5000 journals over the last 30 years.  The climbing curve represents the “hard” sciences, with articles on non-linear science applications (Sci-NL) tracking closely those on computer simulation science (Sci-CS).  For the social sciences (SSci) the picture is not so bright.  We have barely moved ahead.  Why have we lagged behind?  Perhaps we’re not used to thinking computationally?  Perhaps the price of entry and the learning curves have been too high?  Perhaps culture is so much more complex than the phenomena in the other sciences that their techniques should not serve us as a model?  I will argue that the main reason for our lag is the first of these, and that we can correct it with a greater understanding of the benefits of computer modeling.


The computer is not as much an invention as it is a discovery.  Computers neither began, nor will they end, with the technological medium of electronic circuitry etched onto silicon.  It is important to realize that what constitutes a “computer” changes through the years, coevolving with the cultural and technological practices of the period.  Significantly, it also means that there is continuity in the concept of computation that transcends the cultural milieu and the technological medium in which those computations take place.  The computer, as a box of circuit boards and chips, may be more apparent than the concepts underlying its creation, but it is the processes that we have discovered and captured in its operation that are, in the long run, more important than physical form.

TIME cover, January 23, 1950

1950 is a convenient date to fix the growing excitement and dis-ease about computers.  As automation was encroaching on our physical environments, so too were computers encroaching on our mental lives.  TIME’s cover story on January 23 features a painting by Boris Artzybasheff, well known for his Giger-like man-machine cyborgian creations.  In this instance his painting depicts Harvard’s then-new Mark III, built for the Navy’s Bureau of Ordnance.  The cover caption reads, “Can man create a superman?”  The article, which follows in the science section, is headed by the title, “The Thinking Machine.”  After recounting the imposing size and speed of these devices, the writer was unable to resist the quip that the Mark III “at work… roars louder than an admiral” (55).  The article quickly launches into the philosophical implications of “cyber shock,” citing TIME’s review of Norbert Wiener’s Cybernetics one year earlier.  Wiener is foregrounded, pointing out the similarities between our natural brains and these artificial ones, explaining that the creative potential of our inventions is handicapped simply by the fact that these devices have no sensors (eyes and ears) or effectors (arms and legs).  Why shouldn’t they have them, he asks, as they usher in the second industrial revolution?
Howard Aiken is cited, predicting that these mechanisms “should be able to forecast economic rain and sunshine,” (59) but expressing doubts about their potential for creativity, claiming that no machine will ever likely have imagination; they “are mere tools that do only what they are told.”  (56)  Warren McCulloch is rallied to the side of the “machine-radicals,” with the retort that the human “brain is actually a computer, and very like computers built by men.”  He explains that the brain is built from “living electrical relays, comparable to the relays and vacuum tubes in the machines.”  It “is a lacelike network of relays and conductors.”  (56)  Claude Shannon, speaking of chess-playing machines, “thinks that one could play well enough to beat all except the greatest chess masters.”  (59)

The editors of TIME were clearly into provocation.  The fusion of men and machines, so aptly visualized by their cover and tag lines, is repeated:  “Computermen point out that the human brains and machines speak basically the same language.”  (59)  But they caution, “Some philosophical worriers suggest that the computers, growing superhumanly intelligent in more & more ways, will develop wills, desires and unpleasant foibles of their own, as did the famous robots in Capek’s R.U.R. (Rossum’s Universal Robots).”  (59-60)  These philosophical questions are revived today in the fields of artificial life, artificial societies, artificial culture, and evolutionary computation.

The machines of the time were largely dedicated to military work.  When any time became available to outsiders, it was priced at $300 per hour (1950 dollars - roughly $2000 per hour adjusted to the consumer price index).  It is hard to imagine that the authors had in mind the proliferation of computing power today when they wrote,  “Computing machines are very expensive at present; Mark III cost $500,000.  But they are becoming simpler, as well as more intelligent, and their cost can be cut enormously by commercial production methods.  It is almost certain that they will come into wide use eventually.”  (59)  It is precisely this process of commodification that has given us the opportunities for simulation and modeling that we have today.

  (Intel Corporation)

Current microprocessor technology began with plans to build a business calculator.  Busicom, a Japanese firm, approached Intel for the design.  Intel suggested integrating most of the processing on a single chip and set out to build a product that would outperform the IBM 1620, which would have filled a living room.  The chip, designed by Federico Faggin in 1971, was named the 4004 and is now a collector’s item.  The chip and calculator were a technical success but the calculator business was a failure.  Intel repurchased the rights to its own invention, and turned the disaster into a product line that catapulted them to the position of the most successful manufacturer of microprocessors.  Integrated circuit technology had taken off years earlier.  Microprocessors now began to ride a curve of exponential growth.  Earlier, in 1965, Gordon Moore predicted that the number of transistors printable on an integrated circuit would double every 18 months.  It has done just that through to the present.  (Intel 4004)
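The compounding behind that doubling curve is easy to make concrete.  The sketch below is a toy projection only, not an Intel model: it assumes the 4004’s roughly 2,300 transistors as a 1971 baseline and takes the 18-month period cited above at face value.

```python
# Toy projection of Moore's 18-month doubling, seeded with the
# Intel 4004's roughly 2,300 transistors (1971).  The doubling
# period and baseline are the figures cited in the text, nothing more.

def transistors(year, base_year=1971, base_count=2300, doubling_months=18):
    """Projected transistor count under a fixed doubling period."""
    months_elapsed = (year - base_year) * 12
    return base_count * 2 ** (months_elapsed / doubling_months)

# Under this curve the count grows roughly a hundredfold per decade:
growth_per_decade = transistors(1981) / transistors(1971)
```

An 18-month doubling compounds to about 2^6.7, or a factor of roughly 100, every ten years, which is why the curve so quickly outran the economics of 1950s big-science machines.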

Where this curve will take us in the future is uncertain, but clearly it will move beyond the desktop computer and graphical user interfaces (GUIs).  Our desktop computers are severely challenged in their abilities to develop lifelike human properties, and we have only ourselves to blame.  What knowledge do they have of the world when their sensors are limited to keys to press and balls to roll?  What influence do they have on it when they can only excite phosphors on a screen, impress dots of ink on paper and shake a speaker cone?  Their operating systems keep them in an idle state waiting for our tactile stimulation.  We have them perform tasks that are mind numbing, repetitious.  They do not learn, they do not evolve, because we have prevented it.  They have no self-awareness because we gave them none.  But the situation is changing.  Desktop computers can be readily connected to the analog outside world with sensors.  DNA computing is already a reality.  Research is being done on the prospects of quantum computing.  Hybrid circuits integrate living wet and squishy neurons with silicon circuits, creating cyborg brains.  We don’t know where this all will take us.

Computation and Evolution

Jay Forrester expressed the relationship between evolution and computation well in 1971.  Not only did he imply that the human mind was a naturally evolved computer, he insisted that software and hardware inventions should take over much of the thinking of our minds.  In an article entitled “Counterintuitive Behavior of Social Systems” he explained:

It is my basic theme that the human mind is not adapted to interpreting how social systems behave…  In the long history of evolution it has not been necessary… to understand these systems until very recent… times…  Evolutionary processes have not given us the mental skill needed to properly interpret the dynamic behavior of the systems of which we have now become a part…  Until recently there has been no way to estimate the behavior of social systems except by contemplation, discussion, argument, and guesswork.

One often hears the sentiment from computermen that any philosophers worth their salt have long since transferred to computer science departments.  Marvin Minsky, himself a noted philosopher of artificial intelligence, in a 1996 public lecture kicking off the Artificial Life V conference in Nara, Japan, succinctly articulated the unprecedented importance of computation, while candidly observing that philosophers with no knowledge of the field have little to offer in the way of guidance for the future:

I think that Computer Science is the most important thing that’s happened since the invention of writing.  Fifty years ago, in the 1950s, human thinkers learned for the first time how to describe complicated machines.  We invented something called computer programming language, and for the first time people had a way to describe complicated processes and systems, systems made of thousands of little parts all connected together:  networks.  Before 1950 there was no language to discuss this, no way for two people to exchange ideas about complicated machines.  Why is that important to understand?  Because that’s what we are.  Computer Science is important, but that importance has nothing to do with computers.  Computer Science is a new philosophy about complicated processes, about life, about artificial life and natural life, about artificial intelligence and natural intelligence.  It can help us understand our brain.  It can help us understand how we learn and what knowledge is. 

Aristotle, Kant, Descartes, and other philosophers didn’t know that you need an operating system, the part of the brain that does all of the housework for the other parts, to use knowledge.  So all philosophy, I think, is stupid.  It was very good to try to make philosophy.  Those people tried to make theories of thinking, theories of knowledge, theories of ethics, and theories of art, but they were like babies because they had no words to describe the processes or the data.  How does one part of the brain read the processes in another part of the brain and use them to solve a problem?  No one knows, and before 1960 no one asked.  In a computer the data is alive.  If you read philosophy you will find that they were very smart people.  But they had no idea of the possibilities of how thinking might work.  So I advise all students to read some philosophy and with great sympathy, not to understand what the philosopher said, but to feel compassionate and say, “Think of those poor people years ago who tried so hard to cook without ingredients, who tried to build a house without wood and nails, who tried to build a car without steel, rubber or gasoline.”  So look at philosophy with sympathy, but don’t look for knowledge.  There is none. 

The idea to carry away is that Computer Science is not about computers.  It’s the first time that we’ve begun to have ways to describe the kinds of machinery that we are.  We may not be exactly right because we started just 50 years ago, but I think ours is the right direction.  It will help us understand what we are. 

“Were they reborn into a modern university, Plato and Aristotle and Leibnitz would most suitably take up appointments in the department of computer science.”  So reads the dust jacket of ANDROID EPISTEMOLOGY, a compendium of articles “exploring the space of possible machines and their capacities for knowledge, beliefs, attitudes, desires and action.”  (p xi.)  The last article poses the question, “Should thinking machines be granted rights?”  Minsky answers with a dialog between two alien intelligences who debate whether humans should be granted rights.

We have long believed that all intelligent machines evolved from biologicals, but we have never before observed the actual transition.  (p. 307)

Those interested in a flourishing philosophy of computation should read Von Neumann’s THE COMPUTER AND THE BRAIN, Paul Churchland’s THE ENGINE OF REASON, THE SEAT OF THE SOUL and Patricia Churchland’s THE COMPUTATIONAL BRAIN.  Donald Knuth expressed the idea irreverently in WIRED:  “Science is what we understand well enough to teach to a computer.  Art is everything else.”  Computational artists might disagree.

Science is the practice of building increasingly reliable, comprehensive and leveraged representations of the world.  We are all in the habit of building representations in our minds and in our works - it is simply in our nature.  In a Zen-like meditative state we can attempt to let some go, but never all of them and rarely any of them for very long.  They are embedded in different media, some internally in the sense that their medium is the mind itself, some externally, both in material manifestations and transitory ephemera.  Representations are literally “re-presentations,” transforming one presentation of the world into another in a different medium.  Each representation has its own subject content, its own unique qualities, potentialities and limitations, and each has its own audience in the sense of where it will be represented next.  We often sort ourselves by the medium in which our representations will take place.  Fine artists specialize in 2D and 3D visual, tactile and multimedia models, performance artists in scripting and enacting experiences on stage, on film, and on the screen.  Literary types traditionally pen one-dimensional linear sequences of discursive narratives in ink on paper.  Engineers build 2D drawings and 3D physical models, testing them while scaling up to industrial dimensions.

In the social sciences, we think, observe, and listen, but most of all we talk and write.  We publish or we perish, composing articles, reports and books, again with ink on paper.  As social scientists, we fall prey to the same limitations we observe in the objects of our studies.  When we do turn to our computers chances are we use them to capture text for publication.  We are slow to see the opportunities to avail ourselves of new technologies in innovative ways.  Many times we are content to simply replace that ink on paper with colored phosphors on a screen.  We largely publish text, mediated by computers, but don’t make full use of their capabilities. 

Men make their own history, but they do not make it just as they please; they do not make it under circumstances chosen by themselves, but under circumstances directly encountered, given and transmitted from the past. The tradition of all the dead generations weighs like a nightmare on the brain of the living. And just when they seem engaged in revolutionising (sic) themselves and things, in creating something that has never yet existed, precisely in such periods of revolutionary crisis they anxiously conjure up the spirits of the past to their service and borrow from them names, battle cries and costumes in order to present the new scene of world history in this time-honoured disguise and this borrowed language.  (Marx p. 398)

Marx was speaking specifically of the French Revolution, when the leaders of the new regime donned the garments of the classical period.  Such holdovers in stylistic form, which are no longer attached to their original context or function, are called “skeuomorphs” by archaeologists.  From the perspective gained by two centuries of distance, the theatrics of the French Revolution seem tragic but almost comical.  Yet for security we still fall back on what we know.  If the social sciences cannot free themselves from the yoke of natural language they can be excused, in part, by the knowledge that we all fall into this same trap.  In the cutting-edge fields of evolutionary computation, people argue about the relative efficiencies of cultural versus genetic algorithms in solving problems.  It was recently claimed that since cultural evolution finds solutions to new problems much faster than genetic evolution in the natural world, it therefore follows that a cultural algorithm should do its work more quickly than a genetic algorithm in a machine.  The logic, although clear, is faulty.  The reason culture works more quickly than biology in the natural world is that the medium in which the two pass on their solutions is qualitatively different.  Consequently, the rate at which generations of new ideas and organisms are produced is vastly different.  Let’s say, for the sake of argument, that ideas regenerate, generously, at one per second and that children reproduce every 15 years.  That would mean that culture inherently creates new solutions 473,040,000 times more quickly than biology.  This reasoning is appropriate for the natural world, but in the artificial world inside a computer, operating in the medium of silicon, there is no need to handicap biology by instantiating the slowness of its genetic operation in the natural world.  (Gessler 1998.)  It’s a new world inside silicon.  Nothing is the same.
We cannot afford to ignore new things in the social sciences, especially when that newness began fifty years ago.
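The arithmetic behind that large figure is worth making explicit.  A few lines reproduce it, assuming (as the figure implies) 365-day years:

```python
# Reproducing the paragraph's arithmetic: if ideas regenerate once per
# second and human generations turn over every 15 years, how much faster
# does culture cycle than biology?  (365-day years, as the figure implies.)

SECONDS_PER_YEAR = 365 * 24 * 60 * 60   # 31,536,000
GENERATION_YEARS = 15

# One idea per second versus one biological generation per 15 years:
speedup = GENERATION_YEARS * SECONDS_PER_YEAR   # 473,040,000
```

The point of the paragraph stands independently of the exact constants: the ratio measures the media in which solutions are transmitted, not any intrinsic superiority of one algorithm over the other once both run in silicon.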

Computer modeling is yet another representational skill whose exercise would benefit our teaching and research.  We are half a century behind in exploring the cutting-edge technologies developed in the 1950s.  We can no longer say, “We can’t afford it.”  Certainly, computists in those days had the benefits of big-science funding, but we have many times the power on our desktops now.  We can no longer say, “Programming is too difficult.”  Visual and rapid application development environments allow us to get directly to the task of translating the logic of our ideas into working applications.  We no longer need to hone the skills of “seeing” patterns in large datasets of raw numbers.  We can visualize them as colors and textures in multidimensional spaces, sonify them as rhythms and melodies, or translate them to a variety of uncustomary sensory modes.  These aids to understanding applications are important in building them, too.  We can use them to see and hear the behavior of the code we write, enabling us to detect peculiarities in its operation in real time, opening new avenues for correcting our software.

Imaginative Challenges

Ed Fredkin, a physicist from MIT and researcher of cellular automata, argues that the world is a universal cellular automaton, a computer, and that we, and the objects and processes that surround us, are all programs running on that underlying machine.  (Wright.)  This is not meant to diminish our self-image by suggesting that we are “merely” programs like the simple agents in today’s simulations.  Just as those simple agents have no understanding of the machines on which they run, we, as humans infinitely more complex than they, stand in the same relationship to a higher reality.  The external world, the ultra-supercomputer, stands beyond our imagination.  If this is so, then how are we to acquire any knowledge of its operation?  The question parallels one traditionally asked by philosophers of science, but reframed in computational terms.  What we perceive is a reality comprised of representations compounded one inside the other like a set of nesting Chinese dolls.  We live inside a cluttered hallway full of mirrors, some obscured, some warped and rippled, some reflecting others, and by means of these mirrored images we try to see outside.  Granted, mapping social interaction is like trying to map the architectural plan of a hall of mirrors, not objectively floating up above, but subjectively, feet on the ground, face to face with the mirrors themselves.  Each of us tries to discern what is in the other’s mind, and each of us tries to manipulate how we are perceived.  “I thought that she thought that he thought that you…”; we quickly exhaust our ability to compute the growing tree of possibilities.  We learn from our mistakes, from the flaws and illusions we discover in the system.
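Fredkin’s conjecture cannot, of course, be tested here, but its flavor can be shown in miniature.  The sketch below is a one-dimensional cellular automaton; the particular rule (Wolfram’s Rule 110) and lattice size are illustrative choices of mine, not anything Fredkin specified, chosen because rules this simple are known to generate remarkably complex behavior.

```python
# A minimal one-dimensional cellular automaton: each cell looks only at
# itself and its two neighbors, yet simple rules like this one (Rule 110)
# can support arbitrarily rich, even universal, computation.

RULE = 110  # the 8 bits of this number encode the update table

def step(cells, rule=RULE):
    """One synchronous update of the whole lattice (wrapping at the edges)."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell and let local rules generate structure:
cells = [0] * 15 + [1] + [0] * 15
for _ in range(10):
    cells = step(cells)
```

Printing each generation as a row of blanks and blocks makes the point visually: global pattern emerges from purely local bookkeeping, which is the intuition Fredkin scales up to the universe itself.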

It is unfortunate that we, as scientists, don’t write more speculative fiction, for it would open a window to our imaginations through which our readers might catch glimpses of our visions and our inspirations as well as our doubts and joys.  There are two notable examples that echo Fredkin’s story, exploring the entailments of creating nested simulations.

Stanislaw Lem, a Polish writer, appears to be the first to describe and critique a research program in computational cultural evolution.  In 1971, in a volume called A PERFECT VACUUM, he published a short story entitled “Non Serviam.”  The story passes as a review of a nonexistent book by a nonexistent author named Professor Dobb.  The Latin title, translated into English as “I will not serve,” refers to the conclusion of a theological discussion on the part of artificial agents.  It was the answer to their question, “if some intelligent being created us, do we owe him any service?”  The professor, after having created numerous artificial cultures in an academic institution, reflects on the course of his research and laments the loss of funding requiring him to pull the plug.

Professor Dobb’s book is devoted to personetics.  The name combines Latin and Greek derivatives: “persona” and “genetic” --- “genetic” in the sense of formation, or creation.  The field is a recent offshoot of the cybernetics and psychonics of the eighties, crossbred with applied intellectronics.  To date we have nearly a hundred personetic programs.  At present a “world” for personoid “inhabitants” can be prepared in a matter of a couple of hours.  This is the time it takes to feed into the machine one of the full-fledged programs (such as BALL 66, CREAN IV, or JAHVE 09).  First, one supplies the machine’s memory with a minimal set of givens.  This substance is the protoplasm of a universum to be “habitated” by personoids.  A specific type of personoid activity serves as a triggering mechanism, setting in motion a production process that will gradually augment and define itself; in other words, the world surrounding these beings takes on an unequivocalness only in accordance with their own behavior.  Each of them is an individual entity; their differentiation is not the mere consequence of the decisions of the creator-programmer but results from the extraordinary complexity of their internal structure.  As hundreds of experiments have shown, groups numbering from four to seven personoids are optimal, at least for the development of speech and typical exploratory activity, and also for “culturization.”  At present it is possible to “accommodate” up to one thousand personoids, roughly speaking, in a computer universum of fair capacity; but studies of this type, belonging to a separate and independent discipline --- sociodynamics --- lie outside the area of Dobb’s primary concerns. 

The behaviorists desire to observe synthetic beings of intelligence, to listen in on their speech and thoughts, to record their actions and their pursuits, but never to interfere with these.  Now in the planning stage at MIT are programs (APHRON II and EROT) that will enable the personoids --- who are currently without gender --- to have “erotic contacts,” make possible what corresponds to fertilization, and give them the opportunity to multiply “sexually.”  There has come into being, within the personoid population, a whole series of varying explanations of their lot, as well as the formulation by them of varying, and contending, and mutually excluding models of “all that exists.”  That is, there have arisen many different philosophies (ontologies and epistemologies), and also “metaphysical experiments” of a type all their own.

In Dobb’s Afterword we find this statement:  “It was I, after all, who created them.  I am the Creator.  I can enlarge their world or reduce it, speed up its time or slow it down, alter the mode and means of their perception; I can liquidate them, divide them, multiply them, transform the very ontological foundation of their existence.  I am thus omnipotent with respect to them, but, indeed, from this it does not follow that they owe me anything.  The bills for the electricity consumed have to be paid quarterly, and the moment is going to come when my university superiors demand the ‘wrapping up’ of the experiment --- that is, the disconnecting of the machine, or, in other words, the end of the world.  That moment I intend to put off as long as humanly possible.”   (Lem 1979, edited condensation.)

Lem has written more extensively about the evolution of such simulations in his large book SUMMA TECHNOLOGIAE (1982), a play on the Summa Theologica by Thomas Aquinas.  It has never been translated into English, but there are editions in both Polish and German.  Consider the artistic decisions one would have to make to translate this story into film.

A comparable project was undertaken by Alex Proyas, Lem Dobbs and David Goyer when they revised an earlier screenplay and released it as the film DARK CITY in 1998.  Roger Ebert referred to it as the best film of the year.  In this dark vision alien scientists experiment on human subjects, altering their memories and the architecture of their city.  John Murdoch, one of the subjects of their experiments, wakes up to realize he’s a puppet on a string.  He is wanted for a murder which he cannot remember.  The police, also subjects of the experiments, are investigating the case.  Inspector Frank Bumstead has been given the case to replace Eddie Walenski, who has apparently gone mad.  Eddie says the city is a spiral maze, a motif earlier reflected in a rat maze supervised by Dr. Daniel Schreber, a human who has collaborated with the aliens in devising the experiments.

Dark City

Bumstead asks what happened to Walenski to make him quit the case:

Walenski:  Well, nothing happened Frank.  I’ve just been spending time in the subway, riding in circles, thinking in circles.  There’s no way out.  I’ve been over every inch of this city.
Bumstead:  You’re scaring your wife to death, Eddie.
Walenski:  She’s not my wife.  I don’t know who she is.  I don’t know who any of us are.
Bumstead:  What makes you say that?
Walenski:  Do you think about the past much, Frank?
Bumstead:  As much as the next guy.
Walenski:  See?  I’ve been trying to remember things, clearly remember things from my past.  But the more I try to think back, the more it all starts to unravel.  None of it seems real.  It’s like I’ve just been dreaming this life, and when I finally wake up I’ll be somebody else.  Somebody totally different.
Bumstead:  You saw something, didn’t you Eddie?  Something to do with the case?
Walenski:  There is no case, there never was!  It’s all just a big joke!  It’s a joke!

Dr. Daniel Schreber, who narrates the story, explains the city’s mysteries:

I call them the strangers; they abducted us and brought us here.  This city, everyone in it, is their experiment.  They mix and match our memories as they see fit, trying to divine what makes us unique.  One day a man might be an inspector, the next, someone entirely different.  When they want to study a murder, for instance, they simply imprint one of their citizens with a new personality.  Arrange a family for him, friends, an entire history, even a lost wallet.  Then they observe the results.  Will a man, given the history of a killer, continue in that vein, or are we, in fact, more than the mere sum of our memories?  This business of you being a killer was an unhappy coincidence.  You have had dozens of lives before now. You just happened to wake up while I was imprinting you with this one.

When they first brought us here they extracted what was in us, so they could store the information, remix it like so much paint, and give us back new memories of their choosing.  But they still needed an artist to help them.  I understood the intricacies of the human mind better than they ever could.  So they allowed me to keep my skills as a scientist because they needed them. They made me delete everything else. 

Another film focusing on simulation and the problem of knowing the nature of the real world is THE THIRTEENTH FLOOR, where there is a murder to be solved, a suspect and a detective.  Josef Rusnak and Ravel Centeno-Rodriguez created a subtle story of mystery and intrigue wherein a simulation is created, ostensibly as a commercial venture, but is misused by its creators for sex and entertainment.  Detective Larry McBain questions Whitney, a senior researcher, about the murder of his boss, prying into the secret work they do in their lab:

McBain:  So the whole thing’s what, a giant computer game? 
Whitney:  No, not at all.  It doesn’t need a user to interact with it to function.  Its units are fully formed, self-learning cyberbeings.
McBain:  Units?
Whitney:  Yeah, electronic simulated characters.  They populate the system.  They think, they work, they eat…
McBain:  (Interrupts.)  They fuck?
Whitney:  Well let’s just say that they’re modeled after us.  Right now we have a working prototype: Los Angeles circa 1937. 
McBain:  Why ’37?
Whitney:  Fuller wanted to start by recreating the era of his youth.  You see, while my mind is jacked in, I’m walking around experiencing 1937, my body stays here and kinda holds the consciousness of the program link unit.
McBain:  You think one of them units crawled up an extension cord and killed its maker?

Ashton, living in the 1937 simulation, has opened a letter intended for the president of the company in the real world, and he confronts another senior researcher, Douglas Hall, who is visiting Ashton’s world, with the alarming news:

I did exactly what the letter said.  I chose a place that I’d never go to.  I tried to drive to Tucson.  I figured, what the Hell, I’ve never been to the countryside.  When I took that car out on the highway I was going over 50 through that desert.  After a while I was the only car on the road.  There was just me and the heat and the dust.  And I did exactly what that letter said, “don’t follow any road signs and don’t stop for anything, not even barricades.”  But just when I should have been getting closer to the city, something wasn’t right.  There was no movement, no life.  Everything was still and quiet.  And then I got out of the car, and what I saw scared me to the depths of my miserable soul.  It was true.  It was all a sham.  It ain’t real.

The Thirteenth Floor

Later we find that McBain was almost right.  The consciousness of Ashton has managed to escape the simulated world of 1937 and now finds himself in Whitney’s body.  Ashton has moved up one level to an unfathomable world, which seems to be run by gods.  Again he confronts Hall, but this time on Hall’s turf.

Ashton:  I want to see it.
Douglas Hall:  See what?
Ashton:  My arcade game.  (A metaphor for the computer in which he was born.  Douglas escorts Ashton to the elevator.  They take it to the 13th floor where the mainframe computers are.)
Ashton:  (Breathing in the warmth of the machine.)  It’s breathing…  So this is where I was born? 

Whereas DARK CITY overtly explains what is really going on, THE THIRTEENTH FLOOR leaves much of the subtlety of the story unexplained.  The nuances of the acting in THE THIRTEENTH FLOOR can only be appreciated once you know who is really whom, and this won’t really hit you ‘til the end.  Rusnak’s film is quite rich in ideas.  Fiction such as this suggests challenges and solutions to those engaged in simulating artificial culture.  We turn next to simulations themselves.

The best known of all these imaginative challenges is the Holodeck of Star Trek fame.  The Holodeck is a totally immersive virtual environment, complete with sentient characters who could be based on real-life personalities and with physical places rendered in every detail.  In fact, the virtual nature of this artificial world was so real that all the characters and objects had physical presence.  The Holodeck could be a dangerous place: an altercation with a computer-generated character could be harmful, and a virtual bullet shot from a virtual handgun could kill.  The Holodeck has become the crucial metaphor for a $45,000,000 project by the Department of the Army and Paramount Studios at the new Institute for Creative Technologies in Marina del Rey.

In all these fictions, as in the real-life practices on which they are based, simulations are used for problem solving, enlightenment, recreation, love and sex.  The most compelling simulations are those that mimic evolution, learning and change, those models in which order emerges on its own from an unordered set of primitives described at a lower level of complexity.  These narratives, while fiction, provide us with strong computational challenges.  Can we write the software code to do the things that they suggest?  That is the open challenge.

Artificial Culture

Artificial culture (Gessler 1994, 1995) is a place in space situated amidst three axes.  Each axis expresses a different mode of complexity: intellectual (individual and cognitive), social (populational and heterogeneous) and environmental (natural and artificial).  Since there are limitations on what we can capture in a model or a simulation, represented by the volume circumscribed by these three axes, we must decide how much of our limited computational resources to devote to each of these three domains.  Simulations stressing individual intellectual complexity naturally lead to the fields of artificial intelligence and expert systems.  Those investigating large populations of diverse but cognitively impoverished individuals with a high degree of social complexity lead to artificial life.  Those with rich environmental complexity and few thinking inhabitants lead to virtual environments and augmented realities.  Artificial societies are situated somewhere in between, borrowing theories and methods from all three.  Artificial culture, which has yet to be fully realized, extends the program of artificial societies by adding richer modes of thought, richer social interactions and more richly constructed human spaces.  The philosophy of artificial culture has yet to be addressed, but it is not difficult to see beginnings in Margaret Boden’s THE PHILOSOPHY OF ARTIFICIAL LIFE.

Our models need to be concerned with both computational effort and cognitive load.  These translate, roughly, into the relative difficulty of computing or reckoning any particular idea.  In AN INTRODUCTION TO NATURAL COMPUTATION, Ballard refers to this as:

The minimum-description-length principle (which) measures both execution speed and repertoire.  It balances the helpfulness of the programs, in terms of the degree of succinctness of their operations, against their size or internal coding cost.  (Ballard, p. 27)

We cannot model the entire world from quark to quasar, either in our heads or in our simulations.  What we can do, however, is model it in parts.  By optimizing these parts we can produce a compressed (lossy or lossless) model that may suffice.  Emotion, once considered alien to machines, can now be envisioned and encoded in them, as Picard demonstrates in AFFECTIVE COMPUTING.  We might consider emotion to be highly compressed: countless positive experiences, the details of which escape our memory, are subsumed by an immediate and overwhelming sense of friendship, trust or love.  Just as there are limits to how much we can put inside a computer and how quickly a computer can compute it, called computational effort, there are limits to how much we can remember in our heads and how quickly we can think a problem through, called cognitive load.  These limits are what motivate us to turn to computation as a new way of knowing, but in the process we come to realize that these same limitations exist in the minds of the people that we study.  Consequently, we have to think about cognitive load in the natural world of culture and how people came to deal with it, and we have to think about how to model strategies for dealing with cognitive load as computational effort inside our simulations.

The realization that “the world is the best representation of itself” is a step toward crumbling the constraints imposed by cognitive load.  We can move some of these cognitive routines outside our heads and instantiate them in the world outside.  Cognition can then be delegated to either artifacts or other people.  As an artifact, a road sign may be placed to mark key turning points in a route to work each day.  In a wilderness with no signs posted it is only necessary to remember key landmarks and to rely on the world, as it passes by, to fill in the missing gaps.  Can you remember all the details of a daily drive: the landmarks and the signs?  At what level of detail does your memory fail?  When you drive the route again, how much more than you recall do you recognize along the way?  How often, when your mind is elsewhere, do you drive on autopilot and miss a turn because a landmark that you passed led you down an old familiar route rather than the one you should have taken?  Even in smaller quarters, the arrangement of living and working spaces provides a shortcut for knowing what is where and what is to be done next.  In addition to cognitive artifacts, cognition is clearly distributed among other individuals.  Each performs a specialized cognitive task in a specialized, cognitively enriched space.  Edwin Hutchins’s book, COGNITION IN THE WILD, is a study of spatially distributed thinking, something that needs more investigation in the social sciences.

The notion of distributed cognition may be applied not only to mind distributed across other individuals and other objects, but also to the nature of the individual mind itself.  Marvin Minsky discusses a number of possible types of cognitive agents, all resident at one time in the individual, in his SOCIETY OF MIND.  The mind, he suggests, is made up of a variety of modular agents, each one thinking about the task it has before it, each one negotiating with its neighbors in a manner analogous to the way members of a society interact with one another.  The agents vie for influence by forming alliances and voting.  Perhaps “society of mind” is not simply a metaphor.  Perhaps it is more than an analogy, a similarity due to factors other than a common origin.  Perhaps it has a homologous relationship, being similar in origin to the social life that structured it.  Insofar as each of us has theories of mind, models of how other people think, it would not be surprising to suppose that those models might be related to the agents that Minsky suggests.  After all, mind did coevolve with culture; certain patterns of organization raised the fitness levels of certain patterns of thought, and vice versa.  Evolutionary psychology carries the concept further, claiming that the modules of mind are themselves highly specialized computers running programs that coevolved with one another and with the conditions of our adaptation as hunter-gatherers during the Pleistocene epoch, handling tasks such as cooperation and cheater detection, mate choice and spatial orientation.

This rich array of cognitive specializations can be likened to a computer program with millions of lines of code and hundreds or thousands of functionally specialized subroutines.  (Barkow 39)
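Minsky's picture of modular agents vying for influence by voting can be made concrete in a short sketch. Everything here, the agent names, the weighted-vote scheme, the winner-take-all tally, is our own toy illustration rather than Minsky's formalism:

```python
from collections import Counter

class MindAgent:
    """One modular agent in a toy 'society of mind' (our illustration)."""
    def __init__(self, name, preferences):
        self.name = name
        self.preferences = preferences  # maps candidate action -> vote weight

    def vote(self):
        return self.preferences

def decide(agents):
    """Tally the weighted votes of every agent; the action with the
    most combined support wins the competition for influence."""
    tally = Counter()
    for agent in agents:
        for action, weight in agent.vote().items():
            tally[action] += weight
    return tally.most_common(1)[0][0]

society = [
    MindAgent("hunger",    {"eat": 3}),
    MindAgent("fatigue",   {"sleep": 2}),
    MindAgent("curiosity", {"explore": 1, "eat": 1}),
]
# "hunger" and "curiosity" form a de facto alliance behind "eat":
# eat gets 3 + 1 = 4 votes, beating sleep (2) and explore (1).
```

Here `decide(society)` settles on "eat", the coalition's choice, which is the whole point of the voting metaphor: no single agent dictates behavior, but alliances of agents can.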

My hunger for advances in computational modeling stems from an annoying absence of intradisciplinary unifying theory in anthropology, archaeology and geography, theory that might make each a cohesive discipline, and also from a deficiency of interdisciplinary theory, theory that might unite the three.  I hunger for an overarching transdisciplinary theory that might bind the “soft” social sciences with the “hard” physical sciences in a common framework.  Systems theory, inspired by the earlier days of computation, aspired to do just that.  James Miller compiled a massive volume on LIVING SYSTEMS in 1978, detailing hypothetical commonalities in structure and process across seven levels of emergent organization, from the cell, organ, organism, group, organization and society through to the supranational system.  It was an ambitious undertaking then, one in need of revitalization today.

Building on precedents set in artificial life and artificial societies, artificial culture should add intellectual, social and environmental complexities to the world that it creates.  It should enrich the societies of minds of its agents with other than Western ethnocentric ways of thought, drawing its inspiration from minds of different cultures across the world and back into time.  It should add a larger repertoire of social interactions based on the exchange of goods and information, information about goods and information, and information about the quality of the foregoing.  It should add agents who manufacture artifacts and architecture to create a culturally transformed natural environment.  Artificial culture should make it possible to navigate nested representations and conflicting logics in differing contexts, realizing that “an enemy of my enemy is not necessarily my friend” and that “loving, making love and being in love” are not the same when experienced first hand, read about in literature or viewed on the screen.  Artificial culture should be ubiquitously multi-agent, scaled from the bottom up in an emergent, evolutionary and causal sense.  It should be the synthetic complement to the analytic method.  Once a system is analyzed and decomposed into its constituents and those parts understood in isolation, they should be reconstructed as a larger whole, a synthesis.  Top-down causation is not denied; rather, it needs to be explained by the bottom-up climb that took it to its top-down heights.  Finally, artificial culture should be minimal.  It should leverage the most explanatory power from the least complex formulation.  What is, and what is not, artificial culture is based on an intuitive judgment of what constitutes a satisfying and holistic mix of primitives, the proper repertoire of raw materials from which to assemble novel interactions.

John Holland’s ADAPTATION IN NATURAL AND ARTIFICIAL SYSTEMS is a classic in evolutionary modeling with broad-spectrum applications in the social as well as the biological sciences.  In it, Holland introduces Echo as a gedanken, or thought, experiment designed to give qualitative insight into the evolution of a multiagent spatial system rather than to yield precise predictions for any specific situation.  Agents in Echo wander across a geographic world, encountering other agents in search of the resources necessary to survive.  A pair of agents is selected from a local group and offered the opportunity to engage in combat, trade and mating, in that order.  Each acquires its fill of resources from its locale.  Each agent is then charged a subsistence fee and deleted if it cannot pay.  The agent’s reservoir of collected resources is then tested; if it has enough, it replicates.  An agent missing elements necessary for reproduction migrates to a site richer in what it needs.  The building blocks of Echo are quite minimal indeed, yet they provide a conceptual foundation for more lifelike models of cultural interaction.  (Holland 1995, pp. 194-198.)  Holland revisits his Echo model in the context of EMERGENCE – FROM CHAOS TO ORDER (1998), along with his personal thoughts on how an interdisciplinary science should be done.
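The Echo cycle just described (gather resources, pay a subsistence fee or be deleted, replicate when the reservoir suffices) can be sketched in a few lines of Python. This is only a toy rendering under our own assumptions: the resource letters, fee and replication thresholds are illustrative values, and combat, trade, mating and migration are omitted.

```python
RESOURCES = "abcd"                            # Echo's resources are letters
UPKEEP = {r: 1 for r in RESOURCES}            # subsistence fee per step (toy value)
REPLICATION_NEED = {r: 5 for r in RESOURCES}  # reservoir required to replicate

class Agent:
    def __init__(self):
        self.reservoir = {r: 0 for r in RESOURCES}

    def gather(self, site):
        """Take what the local site offers this step."""
        for r, amount in site.items():
            self.reservoir[r] += amount

    def pay_upkeep(self):
        """Charge the subsistence fee; False means the agent is deleted."""
        for r, fee in UPKEEP.items():
            self.reservoir[r] -= fee
        return all(v >= 0 for v in self.reservoir.values())

    def can_replicate(self):
        return all(self.reservoir[r] >= REPLICATION_NEED[r] for r in RESOURCES)

def step(population, site):
    """One generation: gather, pay upkeep (die if broke), replicate if able.
    Offspring start with empty reservoirs, a further simplification."""
    survivors = []
    for agent in population:
        agent.gather(site)
        if agent.pay_upkeep():
            survivors.append(agent)
            if agent.can_replicate():
                for r in RESOURCES:
                    agent.reservoir[r] -= REPLICATION_NEED[r]
                survivors.append(Agent())
    return survivors
```

Run `step` repeatedly against a resource-rich site and the population grows; starve the site and it collapses, which is the kind of qualitative behavior Holland's thought experiment is after.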

ARTIFICIAL SOCIETIES – GROWING SOCIAL SCIENCE FROM THE BOTTOM UP, by Josh Epstein and Rob Axtell, is a milestone publication, probably the closest to that ideal generative mix in scope and scale that is likely to inspire models of evolving culture.  (Gessler 1997.)

We view artificial societies as laboratories where we attempt to "grow" certain social structures in the computer --- or in silico – the aim being to discover fundamental local or micro mechanisms that are sufficient to generate the macroscopic social structures and collective behaviors of interest… Artificial society-type models may change the way we think about explanation in the social sciences… Perhaps one day people will interpret the question, "Can you explain it?" as asking, "Can you grow it?" (Epstein and Axtell, pp. 4, 19, 20.)

The authors target economic theory, but their work has broader implications for the other social sciences and for anthropology, which rely less on formal language models of interaction.  The world they offer us is Sugarscape, their CompuTerrarium, a caricature of proto-history inhabited by agents who can extract resources, trade by negotiating the prices of exchange, cooperate and quarrel, engage in sex, do combat, kill, rob and catch diseases.  Each may have a different vision, metabolism, cultural membership and character.  Between each pair of agents are measures of affinity and friendship and memories of trust.  Although it is easy to dismiss this all as a cartoon, it defies the discursive social scientist to create something better.  Their take-home message is that the canonical ideas of economics do not hold up under the more realistic assumptions of artificial societies: heterogeneity in agents’ thoughts and actions (not normative aggregate populations), spatial geography (not simply point kinetics) and the bottom-up emergence of social order (not the top-down imposition of ideals).
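Sugarscape's basic movement behavior, survey the landscape out to your vision, move to the richest visible site, harvest, metabolize, can be sketched briefly. The code below is a simplification under our own assumptions (a small torus, a dictionary landscape, ties broken by staying put), not Epstein and Axtell's full CompuTerrarium:

```python
W = 20  # width of a toy toroidal landscape (our choice)

def visible_sites(x, y, vision):
    """Sites within `vision` steps along the four lattice directions,
    wrapping around the torus edges."""
    sites = []
    for d in range(1, vision + 1):
        sites += [((x + d) % W, y), ((x - d) % W, y),
                  (x, (y + d) % W), (x, (y - d) % W)]
    return sites

def move_rule(x, y, vision, sugarscape):
    """Movement rule: survey the visible sites and move to the one with
    the most sugar.  Listing home first means ties keep the agent put."""
    candidates = [(x, y)] + visible_sites(x, y, vision)
    return max(candidates, key=lambda site: sugarscape[site])

def tick(agent, pos, sugarscape):
    """One update: move, harvest all sugar at the new site, then pay the
    metabolic cost.  Returns the new position and whether the agent lives."""
    pos = move_rule(pos[0], pos[1], agent["vision"], sugarscape)
    agent["sugar"] += sugarscape[pos]
    sugarscape[pos] = 0
    agent["sugar"] -= agent["metabolism"]
    return pos, agent["sugar"] > 0
```

Even this stripped-down rule already produces the book's signature effect when run over a heterogeneous landscape: agents with better vision and lower metabolism accumulate wealth, and wealth inequality emerges from the bottom up.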

Robert Axelrod, in his book THE COMPLEXITY OF COOPERATION, presents “a model” of culture based upon three principles: agent-based interaction, no central authority, and adaptive rather than rational agents, to show how global polarization can be generated from the convergence of local social influences such as beliefs, attitudes and behaviors.  These three constituents of culture are represented in the simulation by five abstract features, each of which takes on one of ten traits.  These bare-bones cultures inhabit the cells of a small grid.  One is picked at random along with a neighbor, with a chance of interacting in proportion to their similarity.  If there is interaction, the picked cell takes on one attribute of the neighbor and thus increases its chances of sharing traits in any future interchange.  The similarities and differences among the cells are dynamically mapped on the spatial grid as each cultural cell is repeatedly interrogated.  Stable regions increase with the number of traits per feature and decrease with the number of features added, the range of interaction and the size of the overall territory.  Although Axelrod ignores the epigenetic interactions among cultural traits and features, he introduces these and other interactions with the environment as extensions, challenges for future work.
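Axelrod's culture model as summarized above is compact enough to state directly in code. A minimal Python rendering might look like this; the grid size and number of events are arbitrary toy choices, and the neighborhood and tie-breaking details are our own assumptions:

```python
import random

FEATURES, TRAITS, SIZE = 5, 10, 10   # five features, ten traits each; toy grid

def make_grid():
    """Each cell holds a 'culture': FEATURES traits, each drawn from 0..TRAITS-1."""
    return [[[random.randrange(TRAITS) for _ in range(FEATURES)]
             for _ in range(SIZE)] for _ in range(SIZE)]

def neighbors(x, y):
    """Von Neumann neighborhood, clipped at the grid edges (our choice)."""
    return [(nx, ny) for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1))
            if 0 <= nx < SIZE and 0 <= ny < SIZE]

def step(grid):
    """One event: a random cell and neighbor interact with probability equal
    to their cultural similarity; if they interact, the cell copies one
    feature on which the two still differ."""
    x, y = random.randrange(SIZE), random.randrange(SIZE)
    nx, ny = random.choice(neighbors(x, y))
    a, b = grid[x][y], grid[nx][ny]
    similarity = sum(ai == bi for ai, bi in zip(a, b)) / FEATURES
    if random.random() < similarity:
        differing = [i for i, (ai, bi) in enumerate(zip(a, b)) if ai != bi]
        if differing:
            i = random.choice(differing)
            a[i] = b[i]

grid = make_grid()
for _ in range(50_000):   # let regions of shared culture form
    step(grid)
```

Note how the feedback works: every successful copy raises the pair's similarity, so interaction breeds more interaction, which is exactly the convergence mechanism behind Axelrod's stable cultural regions.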


Steven Grand was among the first to turn an evolving community of artificial life creatures into a viable and entertaining product.  Amidst all the presentations at the first conference on Autonomous Agents, which included applications in engineering, commerce and industry and a demonstration of prototypes for the Sony Aibo® robotic dog, Grand’s CREATURES won the acclaim of the post-conference evaluation panel as the best and most memorable work.  As the creator of this world, Grand felt obliged to create an entire ecosystem for his bright-eyed Norns and other critters to flourish in.  There is a minimum specification for nurturing the whole of intelligence or life, which he jokingly refers to as “the whole iguana.”  A creature must have a brain of neurons whose connections can be reinforced for remembering important things and reused for forgetting unimportant things to make room for the new.  Each brain must learn on its own and focus on the most situationally relevant objects in its environment.  Each must be tightly linked with its body, to goals and to emotional qualities like pain, hunger, sleepiness, exhaustion, boredom and sexual attraction.  Survival in its environment is paramount, so it must learn to eat healthful foods and avoid toxic substances.  It must learn to avoid creatures carrying disease and must have the benefit of an immune system to combat infection.  It must have the ability to learn to speak, and if not taught English must invent a language of its own.  On reaching sexual maturity it must be able to court, conceive and have kids.  Grand achieved all that through what he calls “God’s Lego® set” of feedback loops.

Lego® Mindstorms™

Lego®, as a result of collaborations with the Massachusetts Institute of Technology, has made even more components available to anyone who would like to create artificial life, societies or culture.  In addition to the usual complement of plastic parts, motors, lights and switches, they now include a yellow fist-size brick.  The brick is a completely programmable microcomputer, the RCX microcontroller, which interfaces directly with the outside world through three sensors, three actuators and an infrared communications link.  The device is sold as the Mindstorms™ Robotic Invention System.  The bricks are programmable from a PC with visual puzzle pieces standing in for program statements, which can be dragged and dropped to write an application.  Once programmed, the bricks can reprogram one another or collect and exchange information.  This makes experiments in robot cooperation and competition possible.  It is also feasible to use them to log scientific data or, as wearable computers, to mediate and track exchanges among real human actors taking part in a live simulation.  Resources for researchers are growing, including new operating systems and programming languages like Not-Quite-C (NQC).  There are already several books available, including definitive and advanced guides (Baum).

The truism “the world is the best representation of itself” inspired Rodney Brooks to create a subsumption architecture for his robots, a shallow hierarchy of agents with externalized cognition, and to write two provocative articles intriguingly entitled “Intelligence Without Reason” and “Intelligence Without Representation,” reprinted in CAMBRIAN INTELLIGENCE – AN EARLY HISTORY OF THE NEW AI.  Both were challenges to the received wisdom that intellect can only be instilled from the top down.  Brooks also coined the slogan “fast, cheap and out of control” as a hook for his proposed robotic invasion of the planets.  He envisioned a hierarchical community of robots, hundreds of cheap explorers at the bottom of the heap, specialists and managers of different ranks in the middle, and a few expensive overseers at the top, as an exploration team for the planet Mars.  Darin Morgan’s WAR OF THE COPROPHAGES, an episode of THE X-FILES, was a light-hearted satire of Brooks’ scenario.  The phrase FAST, CHEAP AND OUT OF CONTROL was also used by Errol Morris as the title for his bizarre feature documentary of Brooks and three other men, all specialists in creating uncustomary kinds of life.

Social robotics is a specialty that shares elements with artificial culture.  Echoing the limits of cognitive load and computational effort, the caveat “keep it simple, stupid” (KISS) is an essential caution in designing both multiagent simulations and autonomous robot communities.  Robots surviving in the wild, where computing power is at a premium, must confront new situations and cooperate with others of their kind in a hostile world of rain, dirt, cold, dwindling energy and constrained time.  Roboticists are forced to deal with the same scale and class of material, organizational and behavioral constraints as social scientists building models of communities.  Moreover, the physicality of robots forces their makers to deal with material embodiment, situated action, and materially and socially distributed cognition, all aspects of critical interest to social science.

(Thakoor 1998.)

The idea of evolving hardware as well as software to enhance the adaptability of a robotic community has been promoted by Sarita Thakoor in her several conferences on biomorphic explorers held at NASA’s Jet Propulsion Laboratory in Pasadena.  Here researchers came to share their insights into building small reconfigurable robots.  Small flying, walking, creeping, web-building and burrowing robots were planned on the scale of insects.  Again, a major theme was the cooperative behavior and independence required for the community to survive on other planets, in ocean depths, in Antarctic cold and near Chernobyl radiation.  Reconfigurable robotics is not the only instance of evolving hardware.  Field programmable gate arrays (FPGAs) were aboard Pathfinder and Sojourner on their mission to Mars.  The hardware logic circuit elements on these chips can be evolved as easily as software.  We shouldn’t be surprised to see computers on the market which will first look at a problem, decide what community of microprocessor types they should become in order to compute its solution, and then transform themselves accordingly.  EVOLUTIONARY ROBOTICS – THE BIOLOGY, INTELLIGENCE AND TECHNOLOGY OF SELF-ORGANIZING MACHINES takes us on a tour of this field, barely a decade old (Nolfi).

Robot sociology, or more appropriately robot culture, since robotic minds are far more alien than the Western minds that are the object of sociology, shares many of the same aims as artificial culture.  This convergence is evidenced by the call for papers of a recent conference on EPIGENETIC ROBOTICS:

During the last few years we have witnessed the mutual rapprochement of two traditionally very different fields of study: developmental psychology and robotics. This has come with the realization in large parts of the cognitive science community that true intelligence in natural and (possibly) artificial systems presupposes three crucial properties:

(a)  the embodiment of the system,
(b)  its situatedness in a physical and social environment, and
(c)  a prolonged epigenetic developmental process through which increasingly more complex cognitive structures emerge in the system as a result of interactions with the physical and social environment.

Creative evolution relies on the novelty of epigenetic interactions, interactions which themselves rely on a varied mix of primitive building blocks.  This variety may be richly enhanced by combining elements of the real world with software simulations.  In “The Evolved Radio and its Implications for Modeling the Evolution of Novel Sensors,” Bird champions epistemically autonomous devices, citing examples from his work evolving hardware FPGA sensors.  He argues that a software simulation can never fully capture the infinite real world of possibilities.  What he claims is true for evolving sensors and devices is equally true for evolving artificial cultures.

The experimenter sets a bound on the possible interactions between the agent and the environment… otherwise the simulation would become computationally intractable… Novel sensors are constructed when a device, rather than an experimenter, determines which of the infinite number of environmental perturbations act as useful stimuli. (Bird, p. 3)

Just as epistemically autonomous devices stand midway between software and natural evolution, autonomous robotic cultures stand between artificial and natural cultures.  There are other middle grounds, including experimental evolutionary simulations with real, living human players, which optimize the richness of both real and synthetic worlds.  Non-human neurons have been recruited as the brains of several robots (Ayers), and there have been significant advances in growing human neurons on microchips, the neuron-silicon interface.  Cyborg-mediated culture still seems far off, and the “soul-catcher chip” remains apocryphal lore.

Warfare is a cultural activity.  It is arguably unnecessary as we enter the age of “Neocortical Warfare – The Acme of Skill” (Szafranski), but it is still a unique facet of human interaction where “getting it right, and quickly” is at a premium:  “seconds to decide and no second chance,” to quote the tag-line on an inside front cover advertisement in MILITARY SIMULATION AND TRAINING (MS&T), a leading journal dedicated to immersive decision-rendering-under-duress computer simulation applications for the defense community.

STRICOM 1995 page 93

STRICOM, the US Army’s Simulation, Training and Instrumentation Command, under the accompanying logo of a soldier at the center of three concentric circles and the quizzical motto “All but war is simulation,” considers three components to be central to creating a synthetic training environment:

·         Live – Operations with real people and equipment in the field.
·         Virtual – Troops in simulators fighting in synthetic battlefields.
·         Constructive – Wargames, models and analytical tools.

This integration has yet to be achieved in social science for even peaceful facets of cultural activity, yet its relevance to our work is inescapable.  The true test of any model is its ability to unite theory (purely computational experiments), with human players engaged in the entailments of that theory (automated human-artificial interactions), with human actors grounded firmly in reality. 

In a VIP brief on STOW, the Army’s Synthetic Theater of War, the program manager, Rae Dehncke, characterized the shift in the simulation, theory and practice of combat over the last half century as follows (STOW 1998, slide 3):

  • 1960s-70s:  Attrition Warfare – a function of force – cold war scenario – large scale coalition forces - aggregate level – attrition based simulations.
  • 1980s-90s:  Maneuver Warfare – a function of force and space – emphasis on joint operations – moving toward non-linear warfare – aggregate level – joint interoperable simulations.
  • 21st Century:  Revolution in Military Affairs – a function of force, space, information – asymmetric warfare – precision weapons, smaller forces, maneuver – entity level – synthetic battlespace.

This shift foregrounds entity over aggregate levels of simulation, analysis and command.  That is to say that it focuses on tactical and strategic planning from the level of the individual agent on up, rather than on the aggregate level of the squad, platoon or company on up.  Again, this shift tracks an analogous shift in the new sciences of multiagent spatial simulation.

The Institute for Creative Technologies (ICT) inaugurates a new collaboration between the US Department of Defense and the entertainment industry.  In a new twist on the concept of the military-industrial complex, a military-entertainment complex has formed between the Army and Paramount Studios, producers of the Star Trek series.  The Holodeck, in which anything imagined could be brought to life in simulation for entertainment, science and sex, is much more than a metaphorical goal of this joint project.  Chuck Chrisafulli summed up the new collaboration:

The gold standard of virtual reality (VR) applications remains the fully immersive "holodeck" recreational facility seen on "Star Trek: The Next Generation." While such smooth, instantaneous, thoroughly convincing environments certainly will not be available in the foreseeable future, VR technology is catching up with the human imagination. "Constructing something as satisfying to all the senses as the holodeck (on 'Star Trek') is probably a project we could spend most of this century working on," admits Paul Debevec, executive producer of graphics research at the USC Institute for Creative Technologies (see Q & A on page S-10). "But our five-year mission is to get a lot of the early technology in place. We should be able to get a usable system, where the level of reality and interaction is good enough that people can reasonably suspend disbelief."

With that goal in mind, the Creative Technologies facility was launched last August with a $45 million grant from the U.S. Army. There, specialists from academic and entertainment backgrounds pursue open-ended research that will eventually develop military, industrial and entertainment applications.  (Chrisafulli)

ICT Mission Rehearsal Exercise

One of the showpieces at the ICT is the Mission Rehearsal Exercise, or MRE (ICT www).  A lieutenant trainee enters an approximation of the Holodeck, in this instance a small theater.  His mission is to lend support to a team securing an armory with an unruly crowd gathering outside.  He is interrupted in this mission when he encounters an accident between a Humvee under his command and a passenger vehicle.  A young boy, a passenger in the car, lies unconscious in the street; his mother kneels beside him on the verge of panic.  The lieutenant needs to interview the sergeant on the scene to decide what action he should take.  Through a mixture of technologies, including projected virtual reality, speech synthesis and recognition, scripted plots and dialogs, emotional algorithms, plan generation, artificial intelligence, and branching story lines, the narrative continues through to a range of outcomes for the overall mission and the injured child.  The automated actors perceive and interact with one another: squad leaders carry out their orders while crowds gather and the media arrive.  Peacetime scenarios are also in development, including one on the strategy and tactics of natural disaster recovery.  Eventually, the intention is to merge the theater with a suite of similar technologies, such as actor position and gesture recognition, physical vehicle simulators, and flat-screen projections in a theatrical set.  The bottom line is to approximate the Holodeck with commercial-off-the-shelf (COTS) products available from the mass market, and to return some of the products of this research to the market as general-purpose human negotiation games.

In return for the inspiration the ICT gained from the world of scripted and interactive fiction, it plans to return some of the results of its research, initially in console computer games. It has contracted to develop two consumer market games:  C-Force (Future Combat Systems) and CS-XII (Quicksilver Software).  The games are expected to teach any player, military or civilian, how to leverage and negotiate human plans, resources and information, skills that should be helpful in any walk of life.

Karl Sims' Evolved Virtual Creatures

The coevolution of mind, body and behavior has been convincingly demonstrated by Karl Sims with his virtual creatures, evolved on a massively parallel Connection Machine from Thinking Machines Corporation.  The worlds he created began with an artificial physics, a primordial soup of elemental shapes, morphologies, articulations, sensors, neurons and effectors, and the mechanisms of evolution through natural selection.  Creatures were evolved, selected for different behaviors by assigning higher fitness values to those who could swim, walk, jump and follow fastest.  Other species of creatures were evolved to take control of a cube during a hockey-like face-off.  Different lineages developed different strategies, and the best from each lineage was pitted against the best from another species in a final competition, which was rendered as a silent moving visualization.  The resultant interactions are disarming and surprising.  The creatures each have personalities so endearing that audiences routinely regard them as they would cute and mischievous pets.  As compelling as this work is in exhibiting the power of evolution, it is equally troubling when one tries to understand the cognitive structures that result.  Speaking of the brain of a creature that evolved to swim with reciprocally counter-rotating paddles, he writes:

Note that it can be difficult to analyze exactly how a control system such as this works, and some components may not actually be used at all.  Fortunately, a primary benefit of using artificial evolution is that understanding these representations is not necessary…  A control system that actually generates ‘intelligent’ behaviour might tend to be a complex mess beyond our understanding.  (Sims, pp. 302-319)
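The selection loop Sims describes can be reduced to a generic sketch: score a population of genotypes with a fitness function, keep the best, and fill the next generation with mutated copies of the survivors.  The toy below stands in for his system only in outline; the flat parameter vector and the contrived fitness function (rewarding genomes near an arbitrary target, a crude proxy for "distance swum") are illustrative assumptions, not Sims' actual morphology-and-neuron encoding.

```python
import random

def fitness(genome):
    # Toy stand-in for a behavioral score such as "distance swum":
    # reward genomes whose parameters approach an arbitrary target profile.
    return -sum((g - 0.5) ** 2 for g in genome)

def mutate(genome, rate=0.1, scale=0.2):
    # Gaussian perturbation of a random fraction of the genes.
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]

def evolve(pop_size=40, genes=8, generations=50, survivors=10):
    pop = [[random.uniform(-1, 1) for _ in range(genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:survivors]                    # truncation selection
        pop = parents + [mutate(random.choice(parents))
                         for _ in range(pop_size - survivors)]
    return max(pop, key=fitness)

random.seed(1)
best = evolve()
```

Selecting for a different behavior means swapping only the fitness function; the loop itself is unchanged, which is precisely what lets Sims evolve swimmers, walkers, jumpers and followers from one framework.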

David Fogel presents an absorbing and enchanting tale of a personal quest for the deeper meaning of AI: the discovery of how intelligence itself arises.  Fogel seizes the challenge by capturing the evolutionary process and shaping it to breed a checkers expert from an artificial neural net despite many obstacles intentionally thrown its way.  His book BLONDIE24 is an inspiring, clear and witty narrative of the growth of a synthetic sentience inside a desktop PC.  Fogel comes to the same conclusion as Sims.  He cannot tell you exactly how it works.  It learned its expertise entirely by playing checkers.
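Fogel's strategy, evolving the weights of a neural network by mutation and selection rather than training them by backpropagation, can be illustrated on a problem far humbler than checkers.  The sketch below evolves a tiny 2-2-1 network toward the XOR function; the architecture, population size and mutation scale are all illustrative choices, not Blondie24's actual design (which scored checkers positions and ranked networks by self-play).

```python
import math
import random

def net(w, x1, x2):
    # A 2-2-1 feedforward network with tanh units; w is a flat list of 9 weights.
    h1 = math.tanh(w[0] * x1 + w[1] * x2 + w[2])
    h2 = math.tanh(w[3] * x1 + w[4] * x2 + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

XOR = [(0, 0, -1), (0, 1, 1), (1, 0, 1), (1, 1, -1)]

def fitness(w):
    # No gradients anywhere: fitness is simply negated squared error.
    return -sum((net(w, a, b) - y) ** 2 for a, b, y in XOR)

def evolve(pop_size=30, generations=300):
    population = [[random.uniform(-1, 1) for _ in range(9)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]
        children = [[w + random.gauss(0, 0.3) for w in random.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

random.seed(2)
best = evolve()
```

As with Blondie24, nothing in the loop knows how the network works internally; inspecting the evolved weights afterward is exactly the puzzle Sims and Fogel describe.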

John Koza has shown the same enigmatic phenomenon in programs evolved to manage human tasks through genetic programming (GP).  Although the evolved solutions are often elegant, the programmatic steps through which they were produced often defy deconstruction and human understanding, refusing to be reverse engineered.  Koza also took on the age-old argument that computers can only do what they are told to do, and can thus never do anything imaginative or original.  If computers are told to evolve solutions to problems, they can be innovative and competitive with human-produced results.  As evidence for this claim, he cites ten successes judged by only one of his eight criteria.  That single measure is, perhaps, the most interesting of all, recruiting the expertise of the U.S. Patent Office as the final arbiter:

The result was patented as an invention in the past, is an improvement over a patented invention, or would qualify today as a patentable new invention.  (Koza 1999, page 5.)
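Genetic programming evolves program structure itself, not just numerical parameters.  A minimal illustration, far simpler than Koza's system, is symbolic regression: evolving arithmetic expression trees until one reproduces a hidden target function.  The representation (nested tuples), the mutation-only loop and the target function x² + x are all assumptions made here for brevity.

```python
import random

OPS = {'+': lambda a, b: a + b,
       '-': lambda a, b: a - b,
       '*': lambda a, b: a * b}

def random_tree(depth=3):
    # A tree is 'x', an integer constant, or (operator, left, right).
    if depth == 0 or random.random() < 0.3:
        return 'x' if random.random() < 0.5 else random.randint(-2, 2)
    return (random.choice(list(OPS)),
            random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def target(x):
    return x * x + x          # the function the trees must rediscover

def fitness(tree):
    return -sum((evaluate(tree, x) - target(x)) ** 2 for x in range(-5, 6))

def mutate(tree):
    # Replace a randomly chosen subtree with a freshly grown one.
    if not isinstance(tree, tuple) or random.random() < 0.2:
        return random_tree(2)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

def evolve(pop_size=60, generations=80, survivors=10):
    pop = [random_tree() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        pop = pop[:survivors] + [mutate(random.choice(pop[:survivors]))
                                 for _ in range(pop_size - survivors)]
    return max(pop, key=fitness)

random.seed(3)
best = evolve()
```

Evolved trees frequently solve the problem while carrying redundant subexpressions along for the ride, a small-scale version of the inscrutability Koza reports in his patent-quality results.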

What are these mind-building, mind-boggling algorithms that defy decomposition?  Space does not allow a thorough explanation here, other than to say that they all invoke evolution, an extremely powerful creative and imaginative process.  In an edited volume dedicated to the pioneers of evolutionary computation, Fogel presents a balanced historical perspective on its origins in the 1950s through a series of landmark papers in the field (Fogel 1998).  A definitive, ten-pound book on the subject, the HANDBOOK OF EVOLUTIONARY COMPUTATION (Bäck), provides extensive background on the methods, theories, applications, prospects and personalities of the major schools of practice in this field, although a smaller introduction may be more accessible (Fogel 1995).  This is a quickly changing field, and the reader is encouraged to search the Web on the following keyword phrases to turn up a host of books, conferences, proceedings and papers on these subjects.

Evolutionary Computation
  • Artificial Life, Society and Culture
  • Genetic Algorithms
  • Neural Networks
  • Evolutionary Programming
  • Cultural Algorithms
  • Genetic Programming

A Lego® Cultural Invention System Challenge

The components of artificial culture, with a nod to Mindstorms™, are easily within reach of anybody with a desktop box and a research inclination.  They may be taken from discursive theories already in the literature, from advanced applications like the ones I have reviewed, from a software platform for developing ideas, and from access to some of the more common programming functions.  I have mentioned several models embracing different facets of cultural evolution from the academic, industrial, military and entertainment sectors as proof that all of these pieces exist, though in separation.  The challenge for understanding HUMAN COMPLEX SYSTEMS is to unite aspects of these with discursive social theories in an appropriate mix.  A new peer-reviewed Web publication that encourages collaborative transdisciplinary work is the JOURNAL OF ARTIFICIAL SOCIETIES AND SOCIAL SIMULATION.  It is an excellent resource for new ideas and models that cross distinct domains in social science.

Among anthropologists, cultural evolution finds theorists among authors interested in systems integrating material embodiment in technology, situatedness in social and physical environments, and epigenetic processes of interaction among subsystems.  Owen Lovejoy, in “The Origin of Man,” argues that the prime requisite for culture arose from man’s unique sexual and reproductive behavior.  The other interlocking threads in that scenario are an expanding neocortex, bipedality, a particular dentition and material culture.  Marvin Harris has repeatedly championed the cause of building a unified scientific theory.  His influential RISE OF ANTHROPOLOGICAL THEORY is an account that optimistically chronicles an anticipated renaissance in scientific ways of knowing culture.  This was followed by his detailed strategy of CULTURAL MATERIALISM, subtitled THE STRUGGLE FOR A SCIENCE OF CULTURE.  Science has not enjoyed the place in anthropology that was expected.  Postmodernism’s influence was to privilege subjective interpretation over the quest for any form of objectivity.  Thirteen years after the publication of his book on the “rise” of theory, and “after three decades of intellectual warfare down among the anthropologi” (Harris 1999, p. 13), he published a new supplemental volume, tempted to call it “The Fall of Anthropological Theory” but tempered to THEORIES OF CULTURE IN POSTMODERN TIMES.  Harris, though a cultural materialist among cultural anthropologists, is not sufficiently materialist to offer detailed insights on material culture, the artifacts and architectures that provide the only evidence of early cultural evolution through archaeology.  In archaeology, Lewis Binford has been equally influential in promoting the “processual explanatory imperative,” which in the study of culture was the core of “processual archaeology”:

It will soon be forty years since I observed that our task was to “explicate and explain the total range of physical and cultural similarities and differences characteristic of the entire spatial-temporal span of man’s existence.”  I went on to say, so long ago, that by explanation I meant “the demonstration of a constant articulation of variables within a system and the measurement of the concomitant variability among the variables within the system.”  (Binford, page 400.)

CULTURE AND THE EVOLUTIONARY PROCESS (Boyd) and THE EVOLUTION OF HUMAN SOCIETIES (Johnson) present much more detailed evolutionary scenarios.  In much of science-oriented anthropology there are allusions to the processes of systems theory, emergence and multiagent spatial modeling.  Embodiment, situatedness and epigenetic processes are implied, but not in those terms.  What we need now are re-presentations of these theories as computational objects and their evaluation against empirical data.

A software platform is as essential to building models as a word processor is to writing papers.  I have avoided many preprogrammed simulation packages because, despite their initial ease of use, they lock their user into circumscribed sets of simulations.  Dissatisfied, she will eventually want to break free of these limitations and revert to a mainstream programming language in which to exercise the full creativity of her thoughts.  Why invest that time in partial modeling solutions when, with little added effort, she can tap a world-class suite of functionality to include in her simulation?  I have settled on Borland C++ Builder as that platform, a visual integrated development environment for a world-class language, though I keep an eye on Java and C#.  Visual programming takes the drudgery out of writing code.  Buttons, bars and boxes are easily dragged and dropped onto the WYSIWYG editing window, allowing the programmer to ignore the arcane routines underlying the Windows operating system and to concentrate directly on the logic of the simulation.  With hands-on instruction, only a small subset of C++ and a few graphical commands, the uninitiated can build an application in a week.  The graphics help to visualize the behavior of a simulation as it is developed.  A little C++ prepares modelers to go on to marshal routines from other sources: algorithms, code snippets and examples published everywhere on paper and the Web.

To sum up, complexity is not easily captured by discursive, written or mathematical formalisms.  Computer models of artificial culture have the intrinsic property of being truer representations of elemental cultural processes and objects than discursive representations (in spoken and written natural language).  Moreover, unlike natural language, computer simulations generate unambiguous entailments.  Consequently, one solution is to build culture from the bottom up in models and to explore their hyperspace of possibilities through simulations.  It is my contention that artificial culture should be conceived synoptically as a complex of materially embodied and spatio-temporally situated agents in epigenetic adaptation to the constraints of their social and physical environments.  Artificial culture so conceived should enable us to explain not just:

  • shared beliefs, but also disparate beliefs.
  • thought as discourse, but also as society of mind.
  • rationality, but also evolutionary psychology.
  • global knowledge, but also local knowledge.
  • equilibria, but also dynamic disequilibria.
  • broadcasting, but also networking.
  • information, but also disinformation.
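A first step toward such a bottom-up model fits in a page.  The sketch below, loosely in the spirit of Axelrod's model of the dissemination of culture, places agents with five-feature "cultural vectors" on a grid; neighbors interact with probability equal to their similarity, and a successful interaction copies one differing trait.  The grid size, feature counts and interaction rule are illustrative parameters, not a claim about any published model's exact settings.  Run long enough, it exhibits the first pair of behaviors above: regions of shared belief emerge, yet dissimilar neighbors stop interacting, so disparate beliefs can persist.

```python
import random

SIZE, FEATURES, TRAITS = 10, 5, 8   # grid side, culture-vector length, trait values

def make_grid():
    return [[[random.randrange(TRAITS) for _ in range(FEATURES)]
             for _ in range(SIZE)] for _ in range(SIZE)]

def neighbors(i, j):
    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < SIZE and 0 <= nj < SIZE:
            yield ni, nj

def step(grid):
    # One interaction event: a random agent may copy one differing trait
    # from a random neighbor, with probability equal to their similarity.
    i, j = random.randrange(SIZE), random.randrange(SIZE)
    ni, nj = random.choice(list(neighbors(i, j)))
    a, b = grid[i][j], grid[ni][nj]
    shared = sum(x == y for x, y in zip(a, b))
    if 0 < shared < FEATURES and random.random() < shared / FEATURES:
        k = random.choice([f for f in range(FEATURES) if a[f] != b[f]])
        a[k] = b[k]

def distinct_cultures(grid):
    return len({tuple(cell) for row in grid for cell in row})

random.seed(4)
grid = make_grid()
before = distinct_cultures(grid)
for _ in range(100000):
    step(grid)
after = distinct_cultures(grid)
```

Watching the count of distinct cultures fall far below its starting value while remaining above one is the phenomenon in miniature: convergence without homogeneity.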


The social sciences remain accused of physics envy, the longing for an orderly cycle of theory building, hypothesis revision and repeated laboratory experiment.  Experiments in academic social science are rare, ranging from role-playing live human simulations to the design of new policies of social change.  All are fraught with great financial costs, procedural problems and ethical concerns, not to mention the consequences stemming from the fact that the human subjects are aware of the experiment, have their own agendas and have something to say about the outcome.  Computer models allow us to avoid some of these risks: to experiment with policy in simulation until we get it right, to look inside the minds of all the actors in the game to see what would be hidden in real life, and to retrodict, to predict into the past, in order to study counterfactuals without relying on an historian to have collected data for an historical experiment he could not even have imagined.  Until the time comes when we require human subjects review committees to sit in judgment over our computer worlds, we are freed from many ethical constraints.

We have no comprehensive single theory of computation:

“I reject…  that computation can be defined.  An adequate theory (of computation) must make a substantive empirical claim about what I call computation in the wild [with a nod to Hutchins’ COGNITION IN THE WILD (1995)]: that eruptive body of practices, techniques, networks, machines, and behavior that has so palpably revolutionized late-twentieth-century life.”  (Smith 6)

Discursive theory in the social sciences may well be subsumed by models and simulations:

“Some distinctions are being opened up, such as between determinism and predictability, in part because of intrinsic computational limits.  Other distinctions are collapsing, such as those between and among theories, models, simulations, implementations, and the like.”  (Smith 360.)

Danny Hillis, inventor of the massively parallel Connection Machine 5 prominently featured in Jurassic Park, tells the enchanting tale of the coming of Thinking Machines in THE PATTERN IN THE STONE – THE SIMPLE IDEAS THAT MAKE COMPUTERS WORK (1998).  We don’t know, he says, where all this will lead. 

I think the question is, "What are the humans going to do?"  And I don't think we can answer that exactly, except to say that we're the first generation that can't quite imagine what our children's jobs will be, and also that our parents can't quite imagine what our jobs are.  That's the important point.  Imagine if I went back in time a few hundred years and tried to explain what I do for a living:

"I take a piece of stone and I etch these symbols into it, in a very elaborate and complicated pattern that not anybody can quite understand.  Then I take the stone and I put invisible forces into it and I tell the stone what to do, using a language, which is not English, which only a few people understand.  But if I tell it exactly right, then I can ask it questions and it will tell me the answers…" 

In my home state of Massachusetts, I think I would be burned at the stake.  (1995)

To my mind the lesson to be taken home is clear:  We should rethink theory, description and explanation in the light of simulation.  We should rethink what it means to do research.  We must become more fluent in the languages of representation, fluent in thinking, reasoning and communicating through programming languages.

Intellectually and physically, we stand on the horizon of the age of the posthuman (Hayles), a new human subjectivity engendered by two new converging ways of knowing: the evolutionary and the computational.  We know that the thinking parts of us, and also other thinking things, have evolved through natural selection as adaptations to the world they live in.  We know how to make things think.  We know the creative forces that created us, and how to harness them in our machines.  The future possibilities are limitless, wide open.

Works Cited

Axelrod, Robert.

Ayers, Joseph et al.

Bäck, Thomas, David B. Fogel, Zbigniew Michalewicz, editors. 
1997        HANDBOOK OF EVOLUTIONARY COMPUTATION.  Institute of Physics Publishing, Bristol and Oxford University Press, Oxford.

Ballard, Dana H. 
1997        AN INTRODUCTION TO NATURAL COMPUTATION.  A Bradford Book, MIT Press, Cambridge.

Barkow, Jerome, Leda Cosmides and John Tooby. 

Baum, Dave, Michael Gasperi, Ralph Hempel and Luis Villa.
2000        EXTREME MINDSTORMS™ – AN ADVANCED GUIDE TO LEGO® MINDSTORMS™.  Springer-Verlag, New York and Apress, Berkeley.
2000        DEFINITIVE GUIDE TO LEGO® MINDSTORMS™.  Springer-Verlag, New York and Apress, Berkeley.

Bentley, Peter (editor).
1999        EVOLUTIONARY DESIGN BY COMPUTERS.  Morgan Kaufmann, San Francisco.

Binford, Lewis R. 

Bird, Jon and Paul Layzell.
2001        “The Evolved Radio and its Implications for Modelling the Evolution of Novel Sensors.”  Manuscript, Centre for Computational Neuroscience and Robotics, University of Sussex, Brighton.

Boden, Margaret A.
1996        (editor)  THE PHILOSOPHY OF ARTIFICIAL LIFE.  Oxford University Press, Oxford.

Borland C++ Builder.

Boyd, Robert and Peter J. Richerson. 
1985        CULTURE AND THE EVOLUTIONARY PROCESS.  University of Chicago Press, Chicago.

Brooks, Rodney A.
1999        CAMBRIAN INTELLIGENCE – AN EARLY HISTORY OF THE NEW AI.  A Bradford Book, MIT Press, Cambridge

Casti. John L.

Chrisafulli, Chuck.
2000        “Virtual Reality Under Construction.”  THE HOLLYWOOD REPORTER – FX SERIES SIGGRAPH SPECIAL ISSUE, pages 9s-11.

Churchland, Patricia S.
1994        THE COMPUTATIONAL BRAIN.  MIT Press, Cambridge.

Churchland, Paul M.
1996        THE ENGINE OF REASON – THE SEAT OF THE SOUL.  A Bradford Book, MIT Press, Cambridge.

Epigenetic Robotics.

Fogel, David B. 
2002        BLONDIE24 – PLAYING AT THE EDGE OF AI.  Morgan Kaufmann, San Francisco.
1998        (editor).  EVOLUTIONARY COMPUTATION – THE FOSSIL RECORD.  IEEE Press, New York.

Ford, Kenneth M, Clark Glymour and Patrick. J. Hayes (editors).
1995        ANDROID EPISTEMOLOGY.  AAAI Press, Menlo Park, MIT Press, Cambridge.

Forrester, Jay
1971        “Counterintuitive Behavior of Social Systems.”  SIMULATION, February, pp 61 ff.

Future Combat Systems
2003        C-FORCE.  X-Box, Game Cube, Playstation 2 platforms.

Gessler, Nicholas.
1997        "Book Review: Growing Artificial Societies by Joshua Epstein and Robert Axtell, MIT Press, Cambridge 1996." ARTIFICIAL LIFE, 3:3: 237-242.
1995        "Ethnography of Artificial Culture: Specifications, Prospects, and Constraints." EVOLUTIONARY PROGRAMMING IV, PROCEEDINGS OF THE FOURTH ANNUAL CONFERENCE ON EVOLUTIONARY PROGRAMMING. John R. McDonnell, Robert Reynolds and David Fogel, eds. A Bradford Book, MIT Press, Cambridge. Pages 319-331.
1994        "Artificial Culture." ARTIFICIAL LIFE IV - PROCEEDINGS OF THE FOURTH INTERNATIONAL WORKSHOP ON THE SYNTHESIS AND SIMULATION OF LIVING SYSTEMS. Rodney Brooks and Pattie Maes, eds. MIT Press, Cambridge.  Pages 430-435.

Grand, Stephen.
2001        CREATION – LIFE AND HOW TO MAKE IT.  Harvard University Press, Cambridge.

Harris, Marvin.
1999        THEORIES OF CULTURE IN POSTMODERN TIMES.  AltaMira Press, Walnut Creek.

Hayles, N. Katherine.

Henrickson, Leslie.
2001        “Trends in Chaos and Complexity Theories and Computer Simulation in the Social Sciences.”  Manuscript, Social Sciences and Comparative Education, Graduate School of Education and Information Studies, University of California – Los Angeles.

Hillis, W. Daniel. 

Holland, John. 
1998        EMERGENCE – FROM CHAOS TO ORDER.  Addison-Wesley, Reading.


Hutchins, Edwin.
1995        COGNITION IN THE WILD.  MIT Press, Cambridge.

ICT - Institute for Creative Technologies.
www      “Mission Rehearsal Exercise.”

INTEL 4004

Intel Corporation.

Johnson, Allen W. and Timothy Earle. 


Koza, John R., Forrest H. Bennett III, David Andre and Martin A. Keane (editors).

Lem, Stanislaw. 
1982        SUMMA TECHNOLOGIAE (translation in German from Polish).  Suhrkamp Taschenbuch 678, Germany.
1979        “Non Serviam,” in A PERFECT VACUUM.  A Harvest Book, Harcourt Brace Jovanovich, San Diego.  Pages 167-196.  Excerpted and edited by Nicholas Gessler.

Lovejoy, Owen.
1981  “The Origin of Man.”  In SCIENCE, January 23, Volume 211, Number 4480, Pages 341-350.

Marx, Karl.
1852 “The Eighteenth Brumaire of Louis Bonaparte.”  In: KARL MARX AND FREDERICK ENGELS: SELECTED WORKS, VOL. 1. Progress Publishers, Moscow, 1973 edition.

Miller, James Grier. 
1978        LIVING SYSTEMS.  McGraw-Hill, New York.


Minsky, Marvin.
1996        Public Lecture delivered at the Artificial Life V conference in Nara, Japan, May 16.  Transcribed, excerpted and edited by N. Gessler from the videotape by Katsunori Shimohara.
1986        THE SOCIETY OF MIND.  Simon and Schuster, New York.

Morgan, Darin.
1996        WAR OF THE COPROPHAGES.  Kim Manners (director).  X-FILES episode.  Twentieth Century Fox.

Morris, Errol.
1997        FAST, CHEAP AND OUT OF CONTROL.   Errol Morris, Director/Producer.  Sony Pictures Classics.

Nolfi, Stefano and Dario Floreano.

Picard, Rosalind W.
2000        AFFECTIVE COMPUTING.  MIT Press, Cambridge.

Proyas, Alex and Lem Dobbs and David S. Goyer (screenwriters).
1998        DARK CITY.  Alex Proyas (Director).  New Line Cinema. 

Quicksilver Software

Rusnak, Josef and Ravel Centeno-Rodriguez (screenwriters). 
1999        THE THIRTEENTH FLOOR.  Based on the novel SIMULACRON 3 by Daniel F. Galouye.  Columbia Pictures (1999). 

Sims, Karl.
1999        “Evolving Three-Dimensional Morphology and Behaviour.”  In Bentley (1999) EVOLUTIONARY DESIGN BY COMPUTERS, pages 297-321.

Smith, Brian Cantwell. 
1996        ON THE ORIGIN OF OBJECTS.  A Bradford Book, MIT Press, Cambridge.

www      ACTION PLAN
1998 slide 3

1995        COMMAND FORECAST – FY 1996-2000, U.S. Army, November.

Szafranski, Richard.
1994        “Neocortical Warfare? – The Acme of Skill.”  In MILITARY REVIEW, Volume LXXIV, Number 11, November, pages 41-55.

Taylor, Charles and David Jefferson.
1994        "Artificial Life as a Tool for Biological Inquiry." In ARTIFICIAL LIFE. Volume 1, Number 1/2, 1-13. MIT Press.

Thakoor, Sarita and Adrian Stoica.
1998        “Biomorphic Explorers - Exploratory Robots Would Feature Animallike Adaptability and Mobility.”  NASA’s Jet Propulsion Laboratory, Pasadena.

1950        “The Thinking Machine.”  Volume LV, number 4, January 23.  Pages cover, 54-60.

Von Neumann, John, with Paul M. Churchland, Patricia Smith Churchland and Klara Von Neumann.
2000        THE COMPUTER AND THE BRAIN.  Yale University Press, New Haven.

Wright, Robert.