1 September 2009 Guilty Robots, Happy Dogs: The Question of Alien Minds

Any scientist who wants to investigate minds—our minds, animal minds, alien minds—will soon discover that there is no way to proceed without venturing into the playgrounds and battlefields of the philosophers. You can either stumble into this investigation and thrash about with a big scientific stick, thwacking yourself about as often as your opponents, or you can enter cautiously, methodically, trying to figure out the terrain using what you already know to interpret what you find. Fortunately, David McFarland has chosen the second option in Guilty Robots, Happy Dogs: The Question of Alien Minds, and there is much food for thought here for both scientists and philosophers.

It is written in the spirit of Valentino Braitenberg's brilliant little book Vehicles (1984), a series of thought experiments that led readers from robotic vehicles even simpler than bacteria to ever-more sophisticated and versatile agents capable of tracking food, avoiding harm, comparing situations, and remembering things. McFarland starts his project a little higher on the ladder of sophistication, with a robot designed to serve as a night watchman of sorts, identifying interlopers, calling for help when needed, and, most important, preserving its energy supply for another day, budgeting its activities to stay alive at all costs. This basic robot is then enhanced in various ways, in a design process whose ultimate goal is a robot that can be held accountable and to whom things matter—a robot with subjectivity and values.
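The night watchman's survival-first budgeting can be sketched as a toy decision rule. This is my own illustration under assumed names and thresholds, not code from the book: the point is only that "staying alive at all costs" means recharging outranks even responding to an intruder.

```python
def choose_activity(energy: float, intruder_seen: bool) -> str:
    """Toy activity selector for a night-watchman robot.

    RESERVE is an assumed safety margin, not a figure from the book;
    the ordering of the branches encodes 'stay alive at all costs'.
    """
    RESERVE = 0.25
    if energy < RESERVE:
        return "recharge"          # survival outranks everything else
    if intruder_seen:
        return "call_for_help"     # the watchman's job, energy permitting
    return "patrol"                # default activity

# Even a visible intruder cannot override a critically low battery:
print(choose_activity(0.1, intruder_seen=True))   # recharge
print(choose_activity(0.9, intruder_seen=True))   # call_for_help
```

The design choice worth noticing is that the robot's "values" live entirely in the branch ordering; nothing in the mechanism represents survival as a goal.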

How do nonhuman animals compare with such robots? Animal minds (including our own) are the real quarry here, and McFarland uses the parallels and differences between clearly imagined robots and various well-studied animals to illuminate the issues in a host of research controversies currently raging in psychology and ethology. This has been his larger strategy for many years, and this book gives us a summary of the lessons he has gleaned from this interdisciplinary exploration.

One message driven home most effectively, in my opinion, is that it is entirely appropriate to consider natural selection to be a (mindless, purposeless) designer, and to place the designs churned up by eons of natural selection on a par with designs generated top-down by would-be intelligent designers—engineers and roboticists. Sometimes the perspective is particularly bracing, as when McFarland insists on situating his imagined robots in a market economy so he can note that nobody would buy such a robot—it wouldn't pay for itself. Animals, similarly, are amazingly thrifty because they have to be; they have superb layers of self-protection and repertoires of self-advancing behaviors, but not a smidgen more than can pay for itself in the long run. This often brings out the rationale for animal (or robot) features that would otherwise be lost in the shadows. It also obliges McFarland to commit to a “behaviorist” approach—not the ideological straitjacket of the Skinnerians but the behaviorism expressed by Turing in 1937, when he noted about the human computers of his day: “The behavior of the computer at any moment is determined by the symbols which he is observing, and his ‘state of mind’ at that moment” (p. 241). Handsome is as handsome does, a motto enshrined in the rationale for the Turing test, and the only way a science of mind can proceed.

But how much can one glean from inner behavior (the machinery of the mind, in effect) by observing the competencies of outer behavior? Do animals, for instance, really have the beliefs that their behavior seems to indicate they do? Here McFarland avails himself of a slightly unorthodox but useful interpretation of philosophy's terms of art, realism and functionalism. Realism would not impute a belief to the organism unless it was “in principle identifiable outside the role that it plays in the system” (p. 69), whereas functionalism (such as my intentional-stance view) is more relaxed, willing to impute beliefs that are only implicit in the design and functioning of the larger system. For realists, a belief is an explicit representation, “not simply part of a procedure. If a representation is to be explicit, then there has to be a physically identifiable bearer of the information (the token) and, additionally, something, most likely someone, who can be identified as the user of the information” (p. 77).

Human beings have beliefs aplenty, obviously, because they have lots of explicit knowledge that they can readily express. Do dogs or robots have explicit beliefs? Do they need them? McFarland shows how robots can exhibit behaviors similar to animals' behaviors without explicit representation, and he proposes to define cognitive processes as those that require “a certain kind of mechanism—one that requires manipulation of explicit representations” (p. 87). This sets the bar high and departs from standard usage, but perhaps it is best to follow his lead. Note that with this definition, it isn't clear that our hand-eye coordination or even our ability to find our way home counts as a cognitive process (unless we use a map or an explicit mental map).
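The realist's demand for an explicit token with an identifiable user, as against a belief that is "simply part of a procedure," can be made concrete with two toy agents. This is my own sketch (names and thresholds are invented, not from the book); the outward behavior of the two is identical, which is exactly what makes the question hard to settle from observation.

```python
class ProceduralAgent:
    """Implicit representation: the 'belief' that energy is low exists
    only as a branch inside a procedure. There is no physically
    identifiable token, and no separate user of the information."""
    def act(self, energy: float) -> str:
        return "seek_charger" if energy < 0.2 else "patrol"

class ExplicitAgent:
    """Explicit representation: a dict entry serves as the token
    bearing the information, and a distinct process (decide) can be
    identified as the user of that information."""
    def __init__(self):
        self.beliefs = {}                       # token bearers

    def perceive(self, energy: float) -> None:
        self.beliefs["energy_low"] = energy < 0.2   # write the token

    def act(self) -> str:                       # the user of the token
        return "seek_charger" if self.beliefs.get("energy_low") else "patrol"

# Outwardly indistinguishable behavior:
p, e = ProceduralAgent(), ExplicitAgent()
e.perceive(0.1)
print(p.act(0.1), e.act())   # seek_charger seek_charger
```

On McFarland's demanding definition, only the second agent's processing would count as cognitive, since only it manipulates an explicit representation.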


McFarland also proposes a demanding definition of subjective experience: “The agent is the recipient of experience, and knows it” (p. 94). Using this definition, the behavior of turning to a more painless posture while asleep would not count as demonstrating subjective experience of pain, and it follows that much animal behavior (think of fish, for instance) is not clear evidence that animals have subjective experience, no matter how frantically they squirm. McFarland does not infer that animals don't have subjective experience or explicit beliefs. He just points out that given these well-motivated definitions, we cannot yet tell.

Indeed, that is the larger conclusion that McFarland draws again and again—the evidence is not yet in, not even about Border, his dog. He looks sympathetically at important experiments and observations, of dogs “teaching” their pups, of animals making sophisticated choices (are they explicitly maximizing their expected pleasure?). In each case he finds that a functionalist interpretation of the behavior seems to suffice: “Certainly we can say that the teacher behaves as if it wants, hopes, or desires the pupil to behave in a certain way,” he says, but he also goes on to note that the teacher could have a “strong theory of mind” about the pupil and be wrong (p. 105). The comparison with robots is always astringent here, and McFarland puts our built-in skepticism about robot minds to good use in reining in our romanticism about our furry friends.

McFarland proposes a contrast between two views of what is going on inside: the hedonic model and the automaton model. According to the hedonic, “the feelings of pleasure and displeasure that arise from various parts of the body in situations of motivational compromise are combined in some way, and behavioral adjustments are made so as to maximize pleasure and minimize displeasure.” By contrast, in the automaton, “the behavioral and physiological adjustments are automatic, and...the system is attuned to produce the best compromise among the competing demands” (p. 123). He says, “The fundamental difference between the two views is that in the automaton view the quantity maximized is implicit, while in the hedonic view it is explicit” (p. 123). But are these views as distinct as they first appear? When he turns to Michel Cabanac's experiments with people being paid to endure discomfort, and paying for sandwiches of different tastiness (by their own taste), he can rely, for once, on what subjects say about their decisions. As he goes on to note, however, a subliminal process can take the place of a “conscious motive,” apparently, and thus “it is not clear that Michel Cabanac is correct in assuming that trade-offs involving money necessarily involve a conscious mental component” (p. 128).
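The "fundamental difference" McFarland identifies—explicit versus implicit maximization—can be shown as the same trade-off computed two ways. This is my own hypothetical illustration, not code from the book: in the hedonic version a quantity is explicitly represented and maximized; in the automaton version the compromise is wired into the branching structure and no quantity is represented at all.

```python
def hedonic_choice(options):
    """Hedonic model: options is a list of (action, pleasure,
    displeasure) tuples. The maximized quantity--net pleasure--is
    explicitly computed and compared."""
    return max(options, key=lambda o: o[1] - o[2])[0]

def automaton_choice(hunger: float, fatigue: float) -> str:
    """Automaton model: no quantity is maximized anywhere; the 'best
    compromise' is implicit in the fixed structure of the rule."""
    if hunger > fatigue:
        return "eat"
    return "rest"

# Both mechanisms can yield the same 'best compromise' behavior,
# which is why outer behavior alone cannot decide between the views:
print(hedonic_choice([("eat", 0.8, 0.1), ("rest", 0.5, 0.2)]))  # eat
print(automaton_choice(hunger=0.8, fatigue=0.5))                # eat
```

This is also why Cabanac's money experiments matter to the argument: payment gives the observer independent access to a quantity the subject can report on, which squirming never does.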

As we near the summit, we consider robot designers who want their robot to be “accountable for its behavior.” For this, it needs its own values, not just its designers' values. It can learn to adjust its values, but this learning must depend on some prior “immutable” values it was born with, you might say. Here I think McFarland misses a possibility: It might be unwise to design a robot that could eventually undo even its initial “immutable” values and take on a new summum bonum, but this is not an engineering impossibility (Suber 2001). Perhaps the only way to make an accountable robot that could deserve punishment for its misdeeds and rewards for its heroics would be to give it the dangerous capacity to renounce the values we installed in it at birth.

McFarland has done his homework well; he offers a patient, sympathetic, and largely accurate discussion of philosophers' best relevant work, plunging into the darkest thickets of controversy over supervenience, eliminativism, symbol grounding, higher-order thought theories, and the like. Some of his readings will jar the authors he discusses, who will think that they have been misunderstood to hold positions that had never occurred to them, but they will never find him sniping in standard philosophical fashion; if he misreads them, it is because his effort to find a constructive reading was too charitable by half. Philosophers are not always trying to do as much as scientists imagine.

References cited

  1. Braitenberg V. 1984. Vehicles: Experiments in Synthetic Psychology. MIT Press.

  2. Suber P. 2001. Saving Machines from Themselves: The Ethics of Deep Self-Modification. (accessed 22 July 2009).

  3. Turing AM. 1937. On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society 42: 230–265. Erratum: Proceedings of the London Mathematical Society 43: 544–546 (1938). doi:10.1112/plms/s2-43.6.544

Daniel C. Dennett "Guilty Robots, Happy Dogs: The Question of Alien Minds," BioScience 59(8), 707-709, (1 September 2009).
