The future of control

Artificial life in film – the permanence of simulations

In his book Out of Control – now available in German as "The End of Control" from Bollmann Verlag – Kevin Kelly, editor-in-chief of WIRED, tries to work out the foundations of a bio-technological civilization. Our technologies, he argues, are becoming more and more like biological organisms, and more self-reliant; therefore control as we know it will come to an end. In his chapter on artificial life in film, Kelly illustrates how the future between humans and their creatures could play out, beginning with animated cartoon characters and ending with synthetic actors.

True-to-life simulations – the physics of cartoon characters

The best thing about the dinosaurs in the movie Jurassic Park is that they have enough artificial life to be reused as cartoon dinos in a Flintstones movie.

They will not, of course, be exactly the same. They will be tamer, longer, rounder and more obedient. But in the cartoon dino will beat the digital heart of the Tyrannosaurus and the Velociraptor – different bodies, but the same dinosaurism. Mark Dippe, the Industrial Light and Magic wizard who invented the virtual dinosaurs, only needs to tweak the creatures' digital genes to turn them into cuddly pets without losing any of their compelling screen presence.

Nevertheless, the dinosaurs from Jurassic Park are zombies. They have wonderfully simulated bodies, but they lack independent behavior, a will of their own, a drive of their own to survive. They are ghostly stick puppets guided by computer animators. One day, however, the dinosaurs may become Pinocchios with lives of their own.

Before the dinosaurs of Jurassic Park introduced people to the photorealistic world of a feature film, virtual creatures lived in an empty world consisting of only three dimensions. In this dreamland – let's imagine it as the place where the flying logos of the television stations live – there is sound, light and space, but not much else. Wind, gravity, traction, friction, stiffness, and all the little idiosyncrasies of the material world are missing and have to be artfully created by imaginative animators.

"In the traditional animation technique, all knowledge about motion sequences must come from the mind of the artist," says Michael Kass, computer graphics developer at Apple. For example, when Walt Disney drew Mickey Mouse bouncing down the stairs on his butt, Disney was depicting on drawing paper the laws of gravity as he perceived them to operate. Mickey obeyed Disney's ideas of the motion sequences, whether they were realistic or not. Most of the time they were not, but that has always been their charm. Many cartoonists exaggerated, changed or ignored the physical laws of the real world to get the laughs on their side. But the current style of cinema is rigorously based on realism. The modern audience wants to see E.T.'s flying bike react like a "real" flying bike, not like something out of an animated movie.

Kass is trying to incorporate physics into simulated worlds. "Remembering the tradition that the draftsman had the laws of physics in his head, we decided that the computer should instead have some knowledge of physics."

Let's start, for instance, with the dreamland of flying logos. One of the problems with this simple world, according to Kass, is that "the things look like they have no weight at all". To make this world more realistic, we can add mass and weight to the objects and gravity to their environment, so that a flying logo that crashes to the ground falls with the same acceleration as a real logo that plops to the ground. The gravitational equation is very simple, and planting it in a small world is not difficult. We can also give the animated logo a bouncing formula, so that it naturally rebounds from the ground "under its own power". It follows gravity and the rules of kinetic energy and friction that slow down its movement. It can also be given a certain rigidity – like plastic or metal – so that it reacts realistically in a collision. The end result gives the impression of a reality in which a chrome logo hits the ground and recoils in ever-smaller bounces until it comes to a clattering halt.
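Kass's bouncing logo reduces to a few lines of physics. The sketch below is a minimal illustration of the idea, not Kass's actual code; the restitution value of 0.6 is an invented constant. Each impact scales the rebound height by the square of the restitution, which is why the bounces shrink geometrically until the logo comes to a clattering halt.

```python
def simulate_bounces(height, restitution=0.6, n_bounces=5):
    """Return the peak height reached after each bounce of a dropped object.

    Kinetic energy at impact is scaled by restitution**2, so each
    rebound height shrinks geometrically toward zero.
    """
    peaks = []
    for _ in range(n_bounces):
        height *= restitution ** 2
        peaks.append(height)
    return peaks

# A chrome logo dropped from 10 units rebounds in ever-smaller bounces.
peaks = simulate_bounces(10.0)
```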

We could continue applying additional formulas of physical laws, such as elasticity, surface tension or rotation effects, and inscribe them into the environment. By increasing the complexity of artificial environments, they become fertile ground for synthetic life.

That is why the dinosaurs of Jurassic Park looked so lifelike. When they lifted their legs, they had to overcome their virtual body weight. Their muscles tightened and relaxed. When their feet touched the ground again, it was gravity that brought them down, and the impression of weight carried through into the movement of their legs.

The talking cat from the Disney movie Hocus Pocus, released in the summer of 1993, was a virtual figure akin to the dinosaurs – but in close-up. The animators built a digital cat shape and "pasted on" fur textures taken from a photographed cat, which it resembled confusingly well, except for its remarkable speeches: its mouth movements were copied from a human being. The thing was a virtual cat-human hybrid.

A cinema audience sees autumn leaves blowing down the street. The audience does not notice that the scene is a computer animation. The action looks real because the video consists of something real: individual virtual leaves blown down a virtual street by a virtual wind. As with Reynolds' virtual flocks of bats, there are a myriad of real things that are actually propelled forward by a force in a place where physical laws apply. The virtual leaves have properties such as weight, shape and surface area. When exposed to a virtual wind, they follow a set of laws analogous to the real laws that real leaves follow. The relationship of all the parts to each other is as real as a day in New England, although the lack of detail does not allow for believable close-ups. The blown leaves are not so much drawn as simply let loose.
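The blown-leaves scene can be caricatured the same way. In the sketch below – a toy model, not the studio's code – each leaf is a particle with weight and air drag, and an invented steady wind pushes it down the street:

```python
def step_leaf(pos, vel, wind, dt=0.1, gravity=-9.8, drag=0.5):
    """Advance one leaf one time step under wind, gravity and air drag."""
    ax = (wind - vel[0]) * drag   # drag pulls the leaf toward the wind's speed
    ay = gravity - vel[1] * drag  # gravity, damped by air resistance
    vel = (vel[0] + ax * dt, vel[1] + ay * dt)
    pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
    return pos, vel

# One leaf released three meters up, in a steady crosswind of 3 units/s.
pos, vel = (0.0, 3.0), (0.0, 0.0)
for _ in range(50):
    pos, vel = step_leaf(pos, vel, wind=3.0)
# The leaf has drifted downwind and fallen, simply "let loose".
```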

Letting animations follow their own physics is the new recipe for realism. When the computer-generated Terminator in Terminator II rises from a pool of molten chrome, it is surprisingly convincing, because the chrome obeys the physical constraints of fluids (such as surface tension) in a similarly functioning universe. It is fluidity as a simulation.

Kass and Gavin Miller, his colleague at Apple, designed computer programs that could reproduce in detail the way water runs down a shallow stream or falls as rain into a puddle. They equipped a simulated universe with the laws of hydrology by interfacing the formulas with an animation engine. Their video clips show a shallow wave rolling over dry sand in soft light, breaking in the irregular manner of real waves, then receding and leaving the sand wet. In reality, these are just equations.
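Kass and Miller's shallow-water solver is far beyond a few lines, but the principle – water as nothing but equations – can be shown with a toy one-dimensional height-field wave, in which each water column accelerates toward the average of its neighbors. All constants here are illustrative, not taken from their papers:

```python
def step_wave(h, v, c=0.5):
    """One explicit time step of a 1-D wave over a row of water columns.

    h is the height of each column, v its vertical velocity; each column
    accelerates toward the mean of its neighbors (a discrete wave equation).
    """
    n = len(h)
    for i in range(n):
        left = h[i - 1] if i > 0 else h[i]      # reflecting boundaries
        right = h[i + 1] if i < n - 1 else h[i]
        v[i] += c * (left + right - 2 * h[i])
    for i in range(n):
        h[i] += v[i]
    return h, v

h = [0.0] * 20
h[10] = 1.0               # a single splash in the middle of still water
v = [0.0] * 20
for _ in range(10):
    h, v = step_wave(h, v)
# The splash has dispersed outward as a spreading ripple.
```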

Design of the living

To make these digital worlds actually work in the future, everything created for them must be reduced to equations. Not only dinosaurs and water, but also the trees on which the dinosaurs chew, the jeeps (in some scenes of Jurassic Park even the jeeps were digital), buildings, clothes, tables and the weather. If this were done only for the sake of the films, it would hardly be worth mentioning. But in the near future every manufactured piece will be designed and produced with the help of computers. Already today, parts of automobiles are first simulated on computer screens; later, the corresponding equations are transmitted directly to the lathes and welders in the factories, which give the figures their final shape. A new mode of industrial production, called fully automated production, receives the data from the CAD application and immediately creates a three-dimensional prototype in metal or liquid plastic. At first an object consists only of lines on the screen; then it is a solid object that you can hold in your hand and walk around. Instead of printing the drawing of a circuit, automated production "prints" the actual circuit itself. In urgent cases, spare parts for manufacturing machines are now copied in extra-hard plastic directly in the factory; they last until the original spare part arrives.

In the not too distant future the copied item will be the original spare part. John Walker, the founder of Autodesk, maker of the CAD program AutoCAD, told a reporter: "CAD is about building models of real-world objects in the computer. I believe that in due time every object in the world, whether it is man-made or not, will be modeled in the computer. This is a very, very big market. It includes everything."

Including biology. Flowers can already be modeled in computers. Przemyslaw Prusinkiewicz, a computer scientist at the University of Calgary in Canada, uses a mathematical model of plant growth to create three-dimensional virtual flowers. Evidently, the greater part of plant growth follows a few simple laws. The signals for flowering may be complicated, and the flowering process may be governed by several interacting messages. But these interacting signals are relatively easy to encode in a program.

The mathematics of plant growth was elaborated in 1968 by the biotheorist Aristid Lindenmayer. His equations formulated the difference between a carnation and a rose. A plant can be reduced to a set of variables in a numerical seed. A full plant may occupy only a few kilobytes on a hard drive – as small as a seed. When the seed is unpacked by the computer program, a flower graphic grows on the screen. First a green cotyledon sprouts, leaves unfurl, a bud takes shape, and then, at the right moment, the blossom unfolds. Prusinkiewicz and his students have plowed through the botanical literature to find out how flowers with multiple inflorescences bloom, how a daisy is formed, or how an elm or oak tree develops its typical branch forks. They have also compiled algorithmic growth laws for hundreds of sea shells and butterflies. The graphical results are completely convincing. The still image of one of Prusinkiewicz's computer-grown Spanish lilac branches, with its myriad blossoms, could be used as a photo in a seed catalog.
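Lindenmayer's "numerical seed" is, in his formalism, an axiom plus a handful of rewrite rules, today called an L-system. The sketch below unpacks such a seed by parallel string rewriting; the particular rule is a standard textbook branching-plant example, not one of Prusinkiewicz's actual species models:

```python
def grow(axiom, rules, generations):
    """Expand an L-system string by applying all rewrite rules in parallel."""
    for _ in range(generations):
        axiom = "".join(rules.get(ch, ch) for ch in axiom)
    return axiom

# Interpretation: 'F' = grow a segment, '[' / ']' = push/pop a branch,
# '+' / '-' = turn left/right when the string is drawn as a picture.
rules = {"F": "F[+F]F[-F]F"}
plant = grow("F", rules, 3)   # a few bytes of "seed" unpack into a bush
```

Fed to a turtle-graphics renderer, the resulting string draws a ramified little shrub; the seed itself stays only a few bytes, just as the text describes.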

At first it was a fun academic pursuit, but now Prusinkiewicz is besieged by horticulturalists who all want his software. They would spend a lot of money on a program that shows their customers what their garden designs will look like in ten years, or as early as next spring.

The best way to copy a living creature, Prusinkiewicz reasoned, is to grow it. The growth laws he extrapolated from biology and then put into a virtual world are used to grow trees and flowers for movie sets. As environments, they make wonderful settings for dinosaurs and other digital characters.

Brøderbund Software, a venerable publisher of educational software for personal computers, distributes a program that models physical forces – a new method for teaching physics. When you run the program on your Macintosh, it launches a miniature planet that orbits the sun on your computer screen. The virtual planet obeys the forces of gravity, motion and friction inscribed in the miniature universe. By playing with forces such as momentum and gravity, students can develop a sense of how the physics of the solar system works.
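The heart of such a miniature solar system is a single equation, the inverse-square law. The sketch below is an illustrative reconstruction, not Brøderbund's program; in these toy units (GM = 1) a planet started at radius 1 with the right speed settles into a stable circular orbit:

```python
import math

def step_orbit(pos, vel, gm=1.0, dt=0.01):
    """Advance the planet one time step under the sun's gravity at (0, 0)."""
    r = math.hypot(pos[0], pos[1])
    ax = -gm * pos[0] / r ** 3   # inverse-square attraction toward the sun
    ay = -gm * pos[1] / r ** 3
    vel = (vel[0] + ax * dt, vel[1] + ay * dt)  # semi-implicit Euler:
    pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)  # stable orbits
    return pos, vel

# At radius 1 with GM = 1, circular orbital speed is exactly 1.
pos, vel = (1.0, 0.0), (0.0, 1.0)
for _ in range(1000):
    pos, vel = step_orbit(pos, vel)
radius = math.hypot(pos[0], pos[1])   # stays close to 1: a closed orbit
```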

How far can we go with this? If we added more and more forces that the miniature planet had to obey – such as electrostatic attraction, magnetism, friction, thermodynamics and volume – if we integrated every single phenomenon that we see in the real world into this program, what kind of solar system would we end up with in the computer? If we used a computer to design the model of a bridge – with all the forces acting on it through the steel structure, through wind and gravity – could we ever get to the point where we could say we had a bridge in the computer? And could the same be done with life?

As fast as physics advances into the digital worlds, life encroaches on them even faster. To see how far generated life has seeped into computerized cinema and with what consequences, I took a trip through state-of-the-art animation studios …

The birth of a synthespian

Mickey Mouse is one of the ancestors of artificial life. Mickey, who is 66 years old today, will soon have to face the digital age. In one of the permanently "provisional" buildings in a remote part of Disney's Glendale studios, the board is carefully planning the automation of animated characters and backgrounds. I spoke with Bob Lambert, the head of technology research in Disney's animation department.

The first thing Bob Lambert made clear to me was that Disney was in no hurry to fully automate animation. Animation was a craft, an art. Disney's greatest asset lay in these skills, and its crown jewels – Mickey Mouse and his friends – were regarded by the public as exemplary works of art. If computer animation had anything to do with the wooden robots children see in Saturday-morning cartoons, Disney would have nothing to do with it. Lambert: "We can do without people saying, 'There goes another craft swallowed by the black hole of computers.'"

The second thing Lambert wanted to make clear was that the making of the legendary Disney films had already been partially automated since 1990. Step by step they digitized their worlds. Their animators understood that those who did not transfer their artistic intelligence from their heads into an almost living simulation would soon become dinosaurs in another sense. "Honestly," Lambert continued, "since 1992 our animators have been clamoring to use computers."

The giant clockwork in the Disney movie The Great Mouse Detective was built around a computer-generated model of a clock over which hand-drawn figures ran. In The Rescuers Down Under the albatross Wilbur swooped through a virtual New York, a fully computer-generated environment created from a large database of New York buildings compiled by a construction company out of commercial interest. And in The Little Mermaid Ariel encountered simulated shoals of fish, seaweed that swayed back and forth on its own, and air bubbles that rose according to physical laws.

The first completely paperless Disney character was the flying (walking, gesturing, jumping) carpet in Aladdin. To create it, the shape of a Persian carpet was reproduced on a computer screen. The animator set key positions by moving the cursor; the computer then built up the "in-between" frames. The digitized carpet movement was then merged into the digitized version of the hand-drawn film. In The Lion King there are several animals that, like the dinosaurs from Jurassic Park, are computer-generated, including some with halfway independent herd or swarm behavior. At the time I wrote this book, Disney was working on the first fully digital animated film. It is primarily the work of former Disney cartoonist John Lasseter. Almost all of the computer animation was done by Pixar, a small, innovative studio located in a converted industrial park in Point Richmond, California.

I paid a visit to Pixar to see what kind of artificial life was hatched there. Pixar has won four major awards for short computer animations Lasseter has made. Lasseter loves to animate objects that are normally inanimate – a bicycle, a toy, a lamp, knickknacks sitting on a shelf. While Pixar's films are considered by computer graphics artists to be the most technically advanced computer animations, the drawn portion, at least, is handmade here as well. Instead of a pencil, Lasseter uses a cursor to alter his objects, which are rendered three-dimensionally by a computer. If he wants a toy soldier to look sad, he moves the cursor to the happy face of the figure on the computer screen and pulls down the corners of the drawing's mouth. After an initial assessment of the facial expression, he may decide that the toy soldier's eyebrows should not drop quite so quickly or that his eyes blink too slowly. So he changes the digitized form with the help of the cursor. "I don't know any way of telling him what to do with his mouth – for example, like this," says Lasseter, twisting his mouth into an O as an expression of mock astonishment, "that would be even a little faster or better than doing it myself."

I asked the artists at Pixar whether they could at least imagine an autonomous computer character – toss in a script draft and out comes a digital, mischief-making Daffy Duck. Earnest denial and head-shaking were the answer. "If typing a script into a computer were enough to bring a believable character to life, there would be no bad actors in the world," said Ralph Guggenheim. "But we know that not all actors are great. You can see Elvis and Marilyn Monroe impersonators anywhere, anytime. Why do we not fall for them? Because actors have a complex job to do. They need to know when to twitch the right corner of their mouth or how to hold a microphone. If such a thing is already difficult for a human actor, how should a computer script solve this task?"

The question at issue is one of control. It turns out that the business of special effects and animation is an industry of control freaks. They believe that the intricacies of acting are so minute that only a human supervisor can guide the decision-making of a digital or drawn character. They are right.

But tomorrow they will no longer be right. If computer performance continues to improve as it has, within the next five years we will see a character, a movie star, created by grafting synthetic behavior onto a synthetic body.

The dinos in Jurassic Park made it very clear that the artificial representation of bodies is almost perfect nowadays. The physical appearance of the dinos was visually indistinguishable from our expectation of a filmed dinosaur. A number of digital effects labs are currently assembling the ingredients for a believable digital human actor. One lab specializes in creating perfect digital human hair, another in getting the hands right, still another in facial features. Digital figures are already being inserted into Hollywood movies (without anyone noticing) when a synthetic scene requires people moving in the distance.

Realistic clothing, which falls and wrinkles naturally, is still a challenge; if it is not perfect, it gives the virtual person a phony look. So initially, digital characters will be used in dangerous stunts or incorporated into mixed scenes – but only in long shots or crowd scenes, not in close-ups, which receive closer scrutiny. A fully believable virtual human form is a tricky business, but its realization is imminent.

However, we are still far from a convincing simulation of human movements, and even further from believable facial expressions. The final limit, according to the graphic experts, is human expressiveness. Efforts to control a human face have today taken on the proportions of a minor campaign.

Robots without "hard" bodies

At Colossal Picture Studios in an industrial area on the outskirts of San Francisco, Brad de Graf works on imitating human behavior. Colossal is a little-known special effects studio that has worked on some of the best-known animated advertisements on television.

De Graf works in a cluttered office located in a converted warehouse. In several large rooms, two dozen large computer monitors glow in dim light. This is what an animation studio of the nineties looks like. The computers – high-performance graphics workstations from Silicon Graphics – display projects in various stages of work, including a fully computerized bust of rock star Peter Gabriel. Gabriel's head shape and face have been scanned, digitized and assembled into a virtual Peter Gabriel, which can replace the body of the living man in music videos. Why waste time dancing around in front of cameras when you could just as easily be in the recording studio or lying by the pool? I watched an animator manipulating the virtual star. She tried to move Gabriel's mouth by pulling the cursor to lift his jaw. "Whoops," she said as she pulled too far and Gabriel's lower lip sailed up and vanished into his nose. A ghastly grimace.

I went to de Graf's studio to see Moxy, the first fully computer-animated character. On screen Moxy looks like a cartoon dog. He has a big nose, a chewed-off ear, two white gloves for hands and "rubber tube" arms. He also has an extremely funny voice. His movements are not drawn. They are stolen from a human actor. In one corner of the room stands a homemade virtual-reality "Waldo". A Waldo (named after a character in an old science fiction story) is a device that allows a person to control a puppet from a distance. The first Waldo-controlled computer animation was an experimental Kermit the Frog moved by a hand-sized Waldo puppet. Moxy is a virtual character with a full body, a virtual puppet.

When an animator wants to make Moxy dance, he puts on a yellow helmet with a stick glued to its top. At the end of the stick is a location sensor. The animator slips on shoulder and hoof sensors and then picks up two styrofoam plates that look like two very large glove caricatures. He waves them around – they, too, carry tracking sensors – while he dances. On the screen, Moxy the cartoon dog dances in step with the animator in his crazy cartoon room.

Moxy's best trick is that he can automatically move his lips in sync. A recorded human voice streams into an algorithm that figures out how Moxy's lips should move, and then moves them. The studio programmers enjoy making Moxy say all sorts of outrageous things in other people's voices. In fact, Moxy can be moved in many ways: by turning knobs, typing commands, moving the cursor, or through autonomous behavior generated by algorithms.
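The lip-sync trick can be caricatured as a lookup: the voice is analyzed into phonemes, and each phoneme maps to a mouth shape (a "viseme") that drives the character's lips. The table and phoneme names below are invented for illustration; the actual Moxy algorithm is not described here in any detail:

```python
# An invented phoneme-to-mouth-shape table (a real system has many more
# entries and blends smoothly between shapes over time).
VISEMES = {
    "AA": "open",  "IY": "wide",  "UW": "round",
    "M": "closed", "B": "closed", "F": "teeth-on-lip",
}

def lip_sync(phonemes):
    """Map a phoneme sequence to the mouth shapes that drive the lips."""
    return [VISEMES.get(p, "neutral") for p in phonemes]

shapes = lip_sync(["M", "AA", "UW", "IY"])   # an invented phoneme string
```

Swap in a different recorded voice and the same pipeline moves the same lips, which is exactly why the programmers can make Moxy say outrageous things in anyone's voice.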

This is the next level for de Graf and other animators: to give characters like Moxy individual movements – standing up, bending over, lifting something heavy – that can be put back together into a smooth, believable movement and then applied to a complex human figure.

Calculating the movement of a human figure is already possible for today's computers, given enough time. But to do it on the fly, as the body does in real life – in a world that moves while you think about where to put your foot – such a calculation is almost impossible if the simulation is to be good. The human body has about 200 movable joints. The total number of possible positions a figure with 200 moving parts can assume is astronomical. To scratch one's nose in real life requires more computing power than even our largest computers provide.
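The arithmetic behind "astronomical" is easy to check. Even under the crude, invented assumption that each of the 200 joints can take only ten distinguishable positions, the number of whole-body poses already overwhelms any search:

```python
def pose_count(joints=200, positions_per_joint=10):
    """Number of distinct whole-body configurations under the crude
    assumption of a fixed number of discrete positions per joint."""
    return positions_per_joint ** joints

n = pose_count()   # 10**200, a 201-digit number of possible poses
```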

But this is not the limit of the complexity, because each body position can be reached in several ways. When I lift my foot to slip into a shoe, I direct my leg into the exact position through hundreds of combinations of thigh, lower leg, foot, and toe movements. In fact, the sequences of my limb movements while walking are so complex that there is room for millions of different realizations. Others can recognize me – often from three meters away, without seeing my face – solely by my unconscious choice of which muscles I use when walking. It is difficult to imitate the combinatorics of another person.

Researchers who try to simulate human movement in artificial figures quickly find out what the creators of Bugs Bunny and Porky Pig have always known: that some combinatorial sequences appear more "natural" than others. When Bugs reaches for a carrot, some of the arm's paths toward the vegetable seem more human than others. (The behavior of Bugs, of course, simulates not a rabbit but a human being.) And much depends on timing. An animated character that perfectly masters the sequence of human movements can still appear robotic if, for example, the relative speeds of swinging arm and striding leg are not matched. The human brain detects such falsehoods with ease. Timing, therefore, is another aspect of the complexity of movement.

Persistent attempts to create artificial movement forced the technicians to take a close look at animal behavioral science. To construct legged vehicles that could drive around on Mars, researchers studied insects – not to learn how to build legs, but to figure out how insects coordinate the simultaneous movement of six legs.

Self-contained motion sequences

In David Zeltzer's lab at the MIT Media Lab, graduate students developed simple stick figures that could walk "by themselves" over an uneven landscape. The animals consisted of no more than four legs and a stick for a backbone. Each leg had a joint in the middle. The students gave the "animat" a direction; it would then move its legs on its own, tracking the rises and dips of the terrain and compensating for them with an adapted gait. The result was a remarkably convincing portrait of a creature walking across rugged terrain. But unlike in the traditional Road Runner cartoons, no human being decided where each leg was placed at any given moment in the frame. In a way, the figure decided that itself. Zeltzer's group eventually populated their world with autonomous six-legged animats and even got a two-legged something to walk down into a valley and back up.
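The division of labor in Zeltzer's animats – the user supplies a direction, the figure supplies the footwork – can be sketched with something as simple as an alternating gait schedule. The pairing below, in which diagonal legs swing together, is a standard quadruped trot; it is an illustrative stand-in, not what Zeltzer's students actually implemented:

```python
def gait_cycle(n_steps):
    """Schedule which diagonal leg pair swings at each step of a trot.

    The animator never places a leg: given only the number of steps
    (i.e. a direction and distance), the controller alternates pairs.
    """
    pair_a = ["front-left", "back-right"]
    pair_b = ["front-right", "back-left"]
    return [pair_a if step % 2 == 0 else pair_b for step in range(n_steps)]

plan = gait_cycle(4)   # four steps of the alternating gait
```

A terrain-aware version would additionally adjust each swing's height and length from the local slope, which is the part that made the stick figures look convincing.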

Zeltzer's students then assembled Lemonhead, a cartoon character that could walk on its own. His way of walking was more realistic and complicated than that of the stick figures, because he consisted of a larger number of body parts and joints. He could walk around obstacles, such as fallen tree trunks, in a realistic motion sequence. Lemonhead inspired Steve Strassman, another student in Zeltzer's lab, to find out how far he could get in building a library of behaviors. The idea was to build a generic character like Lemonhead and give it access to a "catalog" of movement and behavior clips. Do you need a sneeze? There's a drive full of them!

Strassman wanted to give instructions to a character in plain English. You simply tell it what to do, and the character fishes the appropriate behaviors out of the "four food groups of behavior" and combines them into the right sequence for a reasonable movement. If you command it to stand up, it knows that it must first pull its feet out from under the chair. "See," Strassman warns me before he begins his presentation, "this guy won't write sonatas, but he will sit on a chair."
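Strassman's catalog amounts, in outline, to a dictionary from plain-English commands to stored motion clips that get spliced into one sequence. The clip names below, including the stand-up precondition, are invented for illustration and are not Strassman's actual library:

```python
# An invented behavior catalog: each command expands to a clip sequence.
CATALOG = {
    "sit": ["bend knees", "lower body"],
    "stand up": ["pull feet from under chair", "straighten knees", "raise body"],
    "sneeze": ["inhale", "close eyes", "jerk head"],
}

def perform(commands):
    """Expand plain-English commands into one spliced motion sequence."""
    sequence = []
    for cmd in commands:
        sequence.extend(CATALOG.get(cmd, ["idle"]))
    return sequence

moves = perform(["stand up", "sneeze"])
```

The hard problem Strassman points to below is not this lookup but its management: who decides which of a hundred stored hand movements fits the current moment.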

Strassman launched two characters, John and Mary. Everything happened in a plain room, viewed from a slanting angle above, like the eye of God. "Desktop theater," Strassman called it. The initial situation was a couple having a quarrel. Strassman worked on a farewell scene. He typed: "In this scene, John gets angry. Gruffly he holds out the book to Mary, but she rejects it. He slams it on the table. Mary rises, while John stares angrily ahead of him." Then he pressed the start button.

The computer thinks about it for a second, then the characters on the screen act out the piece. John frowns; his movement with the book is tight; he clenches his fist. Suddenly Mary stands up. End. There is no grace in their movements, nothing particularly human, and the fluent movements are hard to follow because they do not attract attention. The feeling of being drawn in is missing, but there, in the tiny artificial space, characters interact with each other according to a godlike script.

"I'm a self-centered director," says Strassman. "If I don't like the way the scene went, I have it played again." So he enters a new variation: "In this scene John becomes sad. He holds the book in his left hand. In a friendly way he stretches it out to Mary, but she politely refuses." And again the characters replay the scene.

The subtleties are the hardest part. "We do not reach for the telephone receiver the way we reach for a dead rat," says Strassman. "I could increase my stock of hand movements, but the problem is: who or what manages it? Where is the bureaucracy that controls the selection procedures supposed to come from?"

Using what they had learned from the stick figures and Lemonhead, Zeltzer and his colleague Michael McKenna helped the skeleton of a six-legged animat become the body of a gleaming chrome cockroach and made the insect the star of one of the strangest computer-animated films ever made. For form's sake, the five-minute video with the jocular title "Grinning Evil Death" also had a storyline: a giant metal bug from space lands on Earth and destroys a city. The story was boring to watch, but the star, the six-legged terror, was the first real animat – an artificial animal that moved independently.

When the giant chrome cockroach crawled down the street, its behavior was "free". The programmers told it: "Crawl over these buildings!", and the virtual cockroach in the computer figured out how to move its feet and at what angle to hold its body, and then drew a believable video portrait of itself walking up and over five-story stone houses. The programmers directed rather than dictated its movements. When it came down off the buildings, the giant robot cockroach was pulled to the ground by gravity. When it fell, the simulated gravity and friction made its legs buckle and slide away realistically. The cockroach played the scene without driving the directors to despair over the details of its movements.

The agents of ethological architecture

The next step towards the birth of a self-reliant virtual character is currently in the experimental stage. Take the bottom-up behavioral machinery of the giant cockroach and cover it with the adorable carcass of a dinosaur from Jurassic Park, and you get a digital movie actor. Raise the actor, feed him well with computer cycles, and then direct him like a real actor. Give him general instructions – "Go foraging" – and he will independently figure out how to coordinate the movements of his limbs in order to execute the command.

To realize this dream is, of course, not so easy. Locomotion is only part of the movement process. Simulated creatures must not only move, they must also be able to orient themselves, express feelings and react. To invent a creature that can do more than just walk, animators (and robot designers) need a way to capture innate behavior in all its forms.

In 1940, a trio of legendary animal observers – Konrad Lorenz, Karl von Frisch and Niko Tinbergen – began to describe the logical underpinnings of animal behavior. Lorenz shared his house with geese, von Frisch lived among swarms of bees, and Tinbergen spent his days with sticklebacks and seagulls. With rigorous and intelligent experiments, the three turned ancient animal myths into a respectable science, ethology. In 1973 they were jointly awarded the Nobel Prize for their pioneering work. Later, when cartoonists, engineers and computer scientists delved into the ethological literature, they found, to their great surprise, a considerable framework already worked out by the three behaviorists, one that could be transferred into the computer as it stood.

The crucial point in the ethological architecture is the idea of decentralization. According to Tinbergen's work The Study of Instinct (1951), the behavior of an animal is a decentralized coordination of mutually independent drive and action centers that combine with each other like behavioral building blocks. Some behavioral modules consist of a reflex; they invoke a simple function, such as flinching at light or blinking when touched. The reflex has no idea where it is or what else is going on, nor does it know the present goals of the body in which it is located. It can be triggered at any time, whenever the appropriate stimulus occurs.

A male trout reacts instinctively to the following stimuli: a female trout ready to mate, a worm close by, a predator approaching from behind. But if all three stimuli occur at the same time, the predator module dominates and suppresses the feeding and mating instincts. Sometimes, when there is a conflict between action modules or several simultaneous stimuli, management modules are activated to make the decision. You are in the kitchen with smeared hands, and the telephone rings while at the same time there is a knock on the apartment door. The opposing drives – to jump to the phone, to wipe your hands, to rush to the door – can lead to paralysis if they are not decided by a third module, a learned behavior, perhaps one that triggers the cry: "Please wait!"
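The trout's conflict is, computationally, an arbitration problem: independent drive centers each respond to their own stimulus, and a priority scheme lets the predator module suppress the rest. The sketch below uses a fixed priority table with invented values; real ethological models weigh drives dynamically:

```python
# Illustrative priorities: the predator (flee) module outranks the rest.
PRIORITY = {"flee": 3, "feed": 2, "mate": 1}

def arbitrate(stimuli):
    """Pick the behavior whose drive center wins when stimuli conflict."""
    active = [b for b, present in stimuli.items() if present]
    if not active:
        return "idle"
    return max(active, key=PRIORITY.get)

# All three stimuli occur at once: the predator module dominates.
choice = arbitrate({"mate": True, "feed": True, "flee": True})
```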

Put less passively, a Tinbergenian drive center can be understood as a "unit of action" – an agent. An action unit (whatever bodily form it takes) notices a stimulus and reacts to it. Its reaction, or in computer language its "output," can serve as input for other modules, drive centers or action units. The output of one action unit may put other modules in readiness (cock the gun) or activate modules that are already in readiness (pull the trigger). Or the signal can disable a neighboring module (secure the weapon). It is tricky to rub your stomach and pat yourself on the head at the same time, because for some unknown reason one action suppresses the other. In general, an output can both put some centers on standby and switch others off. This is, of course, the making of a network – one that cannot escape circular causality and is bound to issue in self-creation.

Thus, even from the thicket of these blind reflexes, extraordinary behavior emerges. Because of the distributed origin of behavior, very simple lower-level actors can generate unexpectedly complex higher-level behavior. Within the cat, no central module decides whether it should scratch its ear or lick its paw. Instead, the cat's behavior is determined by a tangled web of independent action units – cat reflexes – that activate one another, so that a rough pattern (call it licking or scratching) emerges from the decentralized web.

This sounds a lot like Brooks's subsumption architecture, because at its core it is nothing else. Animals are robots that work. The decentralized, distributed control by which animals are governed is also the way robots and digital creatures function.

The network diagrams of interconnected behavioral modules in behavioral-science textbooks strike computer scientists as flow charts of computer logic. The message is: behavior is computable. By wiring up a circuit of behavioral subunits, any kind of personality can be programmed. It is theoretically feasible to create in a computer any mood, any sophisticated response, that an animal is capable of. Movie characters will be driven by the same bottom-up control of behavior as Robbie the Robot – and by exactly the same schema read from living songbirds and sticklebacks. But instead of making pressurized air tubes push or fish tails twitch, the decentralized system shoves around bits of data that move a leg on the screen. In this way, autonomous animated characters in movies behave according to the same general rules of organization as real animals. Although synthetic, their behavior is real behavior (or at least hyperreal behavior). Consequently, animated characters are simply robots without real bodies.
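The cock-the-gun, pull-the-trigger, secure-the-weapon wiring described above can be made concrete in code. The sketch below is a hedged toy model (the class and its rules are my own illustration, not an implementation from the book): one unit's output arms, fires, or disables another.

```python
# A toy action unit whose readiness is gated by the outputs of
# neighboring units, as in Tinbergen's behavioral networks.
class ActionUnit:
    def __init__(self, name):
        self.name = name
        self.armed = False      # on standby ("gun cocked")
        self.disabled = False   # suppressed ("weapon secured")

    def fire(self):
        # A unit produces output only if armed and not disabled.
        return self.armed and not self.disabled

trigger = ActionUnit("trigger")
assert trigger.fire() is False   # nothing happens while unarmed

# Output of one module puts another on standby ("cock the gun").
trigger.armed = True
assert trigger.fire() is True    # "pull the trigger" now succeeds

# Output of a neighboring module disables it ("secure the weapon").
trigger.disabled = True
assert trigger.fire() is False
```

Chain enough such units together, feeding outputs back in as inputs, and you get exactly the circular, self-organizing network the text describes.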

Fate and free will

But more than movement can be programmed. Character – in the old-fashioned sense of the word – can be packed into computer bits. Depression, euphoria and outbursts of rage will be add-on modules for a creature's operating system. Some software companies will sell better versions of fear than others. Perhaps they will offer "relational anxiety" – a fear that not only inscribes itself in the creature's body but seeps into successive modules of emotion and expresses itself only gradually, over time.

Behavior wants to be unbound, but in order to be useful to humans, artificially generated behavior must be monitored and controlled. We want Robbie the Robot and Bugs Bunny to do things on their own, without our supervision. At the same time, not everything Robbie or Bugs could do is productive. How can we allow a robot – with or without a body, or any form of artificial life – to determine its own behavior while we still control it enough to be useful to us?

Surprisingly, some answers to this question were found in a research project on interactive literature at Carnegie Mellon University. The director of the project, Joseph Bates, built a world he called "Oz," which somewhat resembled the small room of John and Mary that Steve Strassman had created. In Oz there are characters, a physical environment, and a plot – the same trio of components as in classical drama. In classical drama, the plot determines both the characters and the environment. In Oz, the control function is somewhat reversed: characters and environment influence the plot.

Oz was created to bring joy to people’s lives. It is an imaginative virtual world populated with both automatons and human-controlled characters. The goal is to create an environment, a narrative structure and automatons in such a way that a human can participate in the story without destroying the flow of the story, but also without feeling disregarded as a mere observer within the audience. David Zeltzer, who supported the project with some ideas, has a wonderful example at hand: "If we were to provide you with a digital version of Moby Dick, there’s no reason why you shouldn’t have your own cabin on the Pequod. You could talk to Starbuck while he chases the white whale. There is enough space in the story to include you without changing the plot."

Oz involves three new frontiers of control research:

  1. How do you organize a plot that allows detours without losing sight of its intended goal?
  2. How do you build an environment that is good for surprises?
  3. How do you create creatures that have sufficient, but not excessive, autonomy?

From Strassman's "desktop theater" we come to Joseph Bates's "computer drama." Bates imagines a drama with distributed control. A narrative becomes a kind of co-evolution in which perhaps only the outer contours are still given. One could step into the middle of a Star Trek episode and try to weave in other plot lines, or travel with a synthetic Don Quixote and encounter new fantasies. Bates, who is primarily interested in the human user's experience of Oz, formulates his purpose in the following words: "The question I'm working on is: how do you impose a fate on users without taking away their freedom?"

For my inquiry into the future of control, conducted from the perspective of the creature rather than the creator, the question can be reformulated: how do you give a fate to an artificial-life character without depriving it of its freedom?

Brad de Graf believes that this shift in control also shifts the author's intention. "We create a new medium. Instead of inventing a story, I invent a world. Instead of thinking up actions and dialogue for a character, I create a personality."

When I had the opportunity to play around with some of the artificial characters Bates had developed, I got a sense of how much fun one could have with these pet-like creatures and their individual personalities. Bates calls his favorites "Woggles." Woggles come in three versions: a blue woggle, a red woggle and a yellow woggle. The globular creatures are stretchable balls with two eyes. They bounce around in a simple world of stepping stones and a few hollows. Each woggle color stands for a particular repertoire of behaviors: one is shy, one aggressive, one obedient. When one woggle threatens another, the aggressive woggle stretches to its full height to scare off the attacker. The shy one shrinks and flees.

Normally the woggles hop about and do what woggles do among themselves. But when a human, in the form of a cursor, enters their world, they react to the visitor. Sometimes they chase him, get out of his way, or wait until he is just out of sight before they charge another woggle. You are on the stage, but you do not run the show.

I got a better glimpse of the future of controlling our darlings from a prototype world that is, in some ways, a continuation of Bates's woggle world. A group of people working on virtual reality (VR) at Fujitsu's labs in Japan took woggle-like characters and gave their bodies virtual three-dimensionality. I watched a guy doing a demonstration with a monstrous VR helmet on his head and data gloves on his hands.

He was in an underwater fantasy world. In the background shimmered the faint glow of a sunken castle. The immediate play area was decked out with some Greek columns and chest-high seaweed. Three "jellyfish" romped about and a small shark circled. The jellyfish, shaped like mushrooms and about the size of dogs, changed color depending on mood or behavior. When they played with each other, they were all blue. They romped about irrepressibly on their plump bodies. When the VR guy waved them over, they would jump at him excitedly, turning orange and bouncing up and down like young dogs waiting for a stick to be thrown. If he paid attention to them, they closed their eyes contentedly. He could call the less friendly fish to him by touching it from a distance with a blue laser beam from his index finger. This changed the color of the fish and its interest in the human. It then drew ever tighter circles and swam very close – but, like a cat, never came too close – as long as it was brushed now and then by the blue ray.

Even watching the spectacle from the outside, it was clear that artificial characters with even the weakest independent behavior, given a three-dimensional shape in a three-dimensional space shared with others, have a certain independent presence. I could imagine having an adventure with them. I could imagine them as dinosaurs – and getting really scared. Even the Fujitsu guy winced once, when a virtual fish swam too close to his head. "Virtual reality," says de Graf, "is interesting only when it is populated with interesting characters."

Pattie Maes, an artificial-life researcher at the MIT Media Lab, detests the goggles-and-gloves school of virtual reality. She finds such gear "too artificial" and constricting. She and her colleague Sandy Pentland devised another way to interact with virtual creatures. Their system, called ALIVE, allows a human to play with animated creatures via computer screen and video camera. The camera is pointed at the human player, inserting the observer into the virtual world he or she is watching on the screen.

This fine trick gives a real feeling of intimacy. By moving my arms, I can play with small "hamsters" on the screen. The hamsters look like little toasters on wheels, but they are animats with independent goals and a rich repertoire of drives, perceptions and reactions. When they haven't eaten for a while, the hamsters roam their enclosed pen in search of "food." They seek the closeness of their fellows; sometimes they chase each other. They flee from my hand if I move it too fast. If I move it slowly, they curiously try to follow it. A hamster can stand up and beg for food. When they get tired, they tip over and fall asleep. They are equidistant from robots and animated animals, and from them it is only a few steps to authentic virtual characters.

Pattie Maes is trying to teach her hamsters "to do the right thing." She wants her creatures to learn from their own experiences of the environment, without intensive human supervision. The dinosaurs of Jurassic Park will not be real characters as long as they are incapable of learning. It will do little good to create a virtual human actor unless he or she can learn. Following the model of subsumption architecture, Maes structures algorithms into hierarchies that allow her creatures not only to adapt, but to work themselves up to ever more complex behaviors and – an essential part of the whole structure – to let their own goals emerge from these behaviors as well.
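The flavor of such unsupervised learning can be conveyed with a toy sketch. This is loosely in the spirit of "do the right thing" and is not Maes' actual algorithm: a creature reinforces whichever behaviors its environment happens to reward, so useful actions gradually dominate without a human scripting them.

```python
import random

random.seed(0)  # deterministic for illustration

# Three hypothetical hamster behaviors, all equally likely at first.
weights = {"beg": 1.0, "wander": 1.0, "sleep": 1.0}

def choose():
    """Pick a behavior with probability proportional to its weight."""
    r = random.uniform(0, sum(weights.values()))
    for action, w in weights.items():
        r -= w
        if r <= 0:
            return action
    return action

for _ in range(200):
    action = choose()
    # The environment rewards begging near a food source.
    if action == "beg":
        weights[action] += 0.5   # reinforce the successful behavior

# Begging now dominates the creature's repertoire.
assert weights["beg"] > weights["wander"]
```

The positive feedback loop – rewarded behaviors become more likely, and thus more often rewarded – is the simplest version of goals emerging from behavior rather than being installed from above.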

The animators at Disney and Pixar may blanch at the thought, but one day Mickey will follow his own agenda.

The interplay of people and increasingly self-reliant artistic actors

THE YEAR IS 2001. It is winter. In a corner of the Disney Studios lot, an annex has been set up as a secret research lab. Hidden inside are film reels of old Disney cartoons, stacks of gigabyte hard disks and three 24-year-old computer graphics artists. Within roughly three months, they deconstruct Mickey Mouse. He is reassembled as a potential 3D being who appears in only two dimensions. He can run, jump, dance, express surprise or wave goodbye by himself. He can move his lips but not speak. The retrofitted Mickey fits on a portable SyQuest 2-gigabyte disk.

The disk is carried to the old animation studios – past rows of abandoned, dusty cartoonists' workstations – and arrives in a separate room where Silicon Graphics workstations glow. Mickey is loaded into a computer. Beforehand, the animators created an artificial world for Mickey, furnished down to the last detail. He is placed on the stage, and the tape is switched on. Camera rolling! When Mickey stumbles on the stairs of his house, gravity pulls him downward. The simulated physics of his rubbery rear end bouncing against the wooden steps produces realistic sounds. His cap is blown off by a virtual wind gusting through the open front door, and when the carpet slips out from under him as he tries to run after it, the carpet undulates according to the physical laws of fabric, just as Mickey collapses under his own simulated weight. The only direction Mickey received was to enter the room and not lose his cap. The rest happened by itself.

Since 1997, Mickey is no longer drawn. It is no longer necessary. Well, sometimes the animators assist the production and retouch a difficult facial expression here and there – in this role they are mere make-up artists – but basically Mickey is handed a script, which he obeys. And he – or one of his clones – works all year round, and on more than one film at a time. And of course he never complains.

The graphics programmers are not satisfied. They attach one of Maes' learning modules to Mickey's program. When it is switched on, Mickey matures as an actor. He reacts to the feelings and actions of the other big actors in his scenes – Donald Duck and Goofy. Every time a scene is repeated, he remembers how he played it in the last successful take and emphasizes that gesture the next time. He also evolves as an actor. The programmers soup up his code, give his movements an improved smoothness, expand the range of his expressive possibilities and plumb the depths of his inner life. He can now play "sentimental" when asked.

But after more than five years of learning, Mickey starts to develop ideas of his own. He becomes somewhat hostile toward Donald and gets angry when he is hit over the head with the wooden bat. And when he is angry, he gets stubborn. He balks when the director tells him to jump over the edge of a cliff, because over the years he has learned to avoid obstacles and precipices. Mickey's programmers complain that they cannot program around this stubbornness without destroying all the other finely tuned character traits and skills Mickey has acquired. "It is like a unified ecosystem," they say. "You can't remove one thing without affecting all the others." One graphics programmer sums it up: "Actually, it is like a psychology. The mouse has a genuine persona. You can't cut it up; you have to work around it."

And so by 2007 Mickey Mouse is already quite an actor. He is a "hot property," as the agents say. He can talk. He can handle every imaginable slapstick situation. He does all his own stunts. He has a great sense of humor and the fabulous timing of a comedian. The only problem is that he is a pain to work with. Unexpectedly he slips out of control and goes crazy. The directors hate him. But they put up with him – they have seen worse – because, well, because he is Mickey.

And best of all, he will never die, never grow old.

The Disney Studios foreshadowed this emancipation of cartoon characters in their own movie Who Framed Roger Rabbit. The characters in that film lived independent lives and had their own dreams, but they were not allowed to leave Toontown, their own virtual world, except when they were needed for our films. On the set, the characters could be friendly and cooperative, or not. They had the same moods and tantrums as human actors. Roger Rabbit is pure fiction, but one day Disney will have to deal with an autonomous Roger Rabbit who is out of control.

Control – that is what it is all about. In his first film, Steamboat Willie, Mickey was under the total control of Walt Disney. Disney and the mouse were one. The more lifelike behavior Mickey is implanted with, the less at one he is with his creators and the more he slips out of their control. This is old hat for anyone who has children or pets. But it is something completely new for someone in charge of a cartoon character, or of machines that are getting smarter and smarter. Of course, neither children nor pets are completely uncontrollable. There is the direct authority expressed in their obedience, and the broader indirect control we exercise through their education and training.

It is best to think of control as a spectrum. At one end lies total dominance – one-hundred-percent control. At the other end lies total loss of control. In between are many forms of control for which we have no names.

Until recently, all our artifacts, everything we made by hand, were under our dominion. But now that we cultivate synthetic life in our artifacts, we cultivate the loss of our command and control. "Out of control" is, to be honest, a gross exaggeration of the state in which our living machines will find themselves. They will remain under our indirect influence and guidance, only freed from our dominance.

Although I searched everywhere, I could not find the word that describes this kind of influence. We simply do not have a word for such a loose relationship between an influential creator and a creature with a will of its own – something we will see more of. For the parent-child relationship there ought to be such a word, but unfortunately there is none. With sheep we get a little further, thanks to the word "herding." When we herd a flock of sheep, we know we are not in full control, but neither are we without control. Perhaps we will "herd" artificial forms of life.

We also "tend" plants when we support them in achieving their natural goals, or redirect them a little for our own purposes. "Manage" probably comes closest to the general form of control we need for artificial life, such as a virtual Mickey Mouse. One can manage the problems of a difficult child, a barking dog or a sales department of 300 employees.

"Manage" is close, but still not perfect. Even though we manage pristine natural areas like the Everglades, we actually have little say in what goes on among the algae, snakes and marsh grasses. Although we manage the national economy, it does what it wants. And although we manage a telecommunications network, we have no insight into how a particular connection is established. The word "management" implies more oversight than we really have in the examples above, or will have in the very complex systems of the future.

The word I am looking for is something like "co-control." What it denotes is already encountered today in a number of technical arenas. Flying or landing a jumbo jet in bad weather is a very complex task. Hundreds of simultaneously active systems, extremely short reaction times at high airspeed, the disorienting effects of long-distance flights without sleep, and unstable weather all suggest that the computer is the better pilot. The sheer number of lives at stake leaves no room for error or second-best solutions. So why not let the jet be flown by a very intelligent machine?

Accordingly, the engineers tested an autopilot, and it turned out to be very capable. It flies and lands the jumbo jet beautifully. Automatic flight also answers the air traffic controllers' prayer for a better overview – everything under digital control. The original idea was that the flesh-and-blood pilots should monitor the computer in case anything went wrong. The only problem is that humans are very poorly suited to passively monitoring anything. They get bored and start daydreaming. Then they miss important details. And when an emergency comes, it catches them cold.

So the engineers came up with the idea of reversing the relationship and letting the computer monitor the pilots. This approach was implemented in the European Airbus A320, one of the most automated aircraft to date. Since its introduction in 1988, the on-board computer has monitored the pilot. When the pilot pulls the stick to change the plane's direction, the computer calculates the maximum bank to the left or right, but it will never let the plane tilt more than 67 degrees sideways, or pitch its nose more than 30 degrees up or down. In the words of Scientific American, "The software spins an electronic cocoon that prevents the aircraft from exceeding its structural limits." It also means, as pilots complain, that the pilot relinquishes control. In 1989, British Airways pilots experienced six incidents with Boeing 747s in which they had to override a computer-initiated power shutdown. If they had not been able to switch off the faulty autopilot – according to Boeing's management it was a programming error – the mistake could have been fatal. The Airbus A320, however, does not allow pilots to make such a decision.
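The "electronic cocoon" described here amounts to clamping the pilot's commands to fixed limits. The sketch below illustrates only that idea, using the 67-degree bank and 30-degree pitch figures from the text; real A320 fly-by-wire control laws are, of course, vastly more involved than a simple clamp.

```python
# Toy flight-envelope protection: whatever attitude the pilot
# commands, the software never allows the aircraft to exceed
# the limits quoted in the text.
BANK_LIMIT = 67.0    # degrees, left or right
PITCH_LIMIT = 30.0   # degrees, nose up or down

def clamp(value, limit):
    return max(-limit, min(limit, value))

def envelope_protect(commanded_bank, commanded_pitch):
    """Return the attitude the computer actually permits."""
    return (clamp(commanded_bank, BANK_LIMIT),
            clamp(commanded_pitch, PITCH_LIMIT))

# A pilot demanding a 90-degree bank and 45-degree dive gets only
# the cocoon's limits back.
print(envelope_protect(90.0, -45.0))  # → (67.0, -30.0)
```

The pilot's complaint is visible in the code: inside the cocoon his input passes through unchanged, but at the boundary the machine, not the human, has the last word.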

Human pilots felt they had to fight for their control over the plane. Should the computer be a pilot or a navigation aid? Pilots joked that they might as well put a dog in the cockpit instead of a computer: the dog's job is to bite the pilot when he tries to touch the controls, and the pilot's only job is to feed the dog. In fact, in the technical jargon of automated flight, pilots are becoming "systems managers."

I am pretty sure that the computer will end up in the co-pilot's seat. Many things it will do entirely outside the pilot's control. But the pilot will manage, or herd, the behavior of the computer. And the two – machine and human – will be in permanent conflict, as all autonomous things are. Airplanes will be flown by co-control.

Peter Litwinowicz, a graphics researcher at Apple, pulled off a great software coup. He extracted the body and facial movements of a live human actor and transferred them to digital actors. He had a human actor order a dry martini in a somewhat theatrical way, then used those facial expressions and gestures – the raised eyebrow, the grin at the corners of the mouth, the sweep of the head – to control the face of a cat. The cat put the scene across just as well as the actor did. For good measure, Litwinowicz mapped the actor's expressions onto a cartoon character, then onto a rigid classical mask, and finally brought a tree trunk to life with them.

Human actors will not be out of work. Some synthetic characters will act fully autonomously, but most will take the form of cyborgs. An actor will animate a cat, while the artificial cat pushes back, helping the actor become a better cat. An actor can "ride" a cartoon character with the same kind of co-control with which a cowboy rides a horse or a pilot rides a computer-controlled airplane. The green body of a ninja turtle can whiz through the world on its own feet, while the human actor who shares in the control supplies here and there the appropriate subtleties of a laugh, or puts the finishing touches on a growl as it dies away.

James Cameron, the director of Terminator 2, said not long ago to an audience of computer graphics specialists: "Actors love masks. They don't mind sitting in make-up for eight hours. We need to make them partners in the creation of synthetic characters. They will get new bodies and new faces, which will give them the opportunity to expand their skills."

The future of control: partnership, co-control, cyborg control. All of it means that the creator must share control, and his destiny, with his creations.

The chapter "The future of control" was slightly shortened from the book "The end of control" by Kevin Kelly, published by Bollmann Verlag. It appears here with the kind permission of the publisher.
