Asking Computers What Our Ethics Are

In an essay on drone policy at Cyborgology, Robin James is skeptical of our intuitive approach to ethics and empathy, for many of the same reasons as psychologist Paul Bloom.  In the piece, James takes a critical look at why we prize ‘the human factor’ and feel unnerved by autonomous drones:

In this view, drones are problematic because they don’t possess the “human factor”; they make mistakes because they lack the crucial information provided by “empathy” or “gut feelings” or “common sense”–faculties that give them access to kinds of information that even the best AI (supposedly) can’t process, because it’s irreducible to codable propositions. This information is contained in affective, emotional, aesthetic, and other types of social norms. It’s not communicated in words or logical propositions (which is what computer code is, a type of logical proposition), but in extra-propositional terms. Philosophers call this sort of knowledge and information “implicit understanding.” It’s a type of understanding you can’t put into words or logically-systematized symbols (like math). Implicit knowledge includes all the things you learn by growing up in a specific culture, as a specific type of person (gendered, raced, dis/abled, etc.)…

Our “empathy” and “common sense” aren’t going to save us from making bad judgment calls–they in fact enable and facilitate erroneous judgments that reinforce hegemonic social norms and institutions, like white supremacy. Just think about stop-and-frisk, a policy that is widely known to be an excuse for racial profiling. Stop-and-frisk is a policy that allowed New York City police officers to search anyone who aroused, to use the NYPD’s own term, “reasonable suspicion.” As the term “reasonable” indicates, the policy requires police officers to exercise their judgment–to rely on both explicitly and implicitly known information to decide if there are good reasons to think a person is “suspicious.”

…We make such bad calls when we rely on mainstream “common sense” because it is, to use philosopher Charles Mills’s term, an “epistemology of ignorance” (RC 18). Errors have been naturalized so that they seem correct, when, in fact, they aren’t. These “cognitive dysfunctions” seem correct because all the social cues we receive reinforce their validity; they are, as Mills puts it, “psychologically and socially functional.”

Of course, when we code the blunt if-thens that make up a drone’s or a police officer’s heuristics, our decisions may still be informed by an “epistemology of ignorance.”  But there’s something about writing down:

switch (race) {
    case WHITE: prob_search *= 0.25; break;
    case BLACK: prob_search *= 1.2;  break;
    default:                         break;
}

That makes us flinch.  It’s much easier to do things we don’t quite approve of when we can keep them shielded behind an ugh field, so their details are obscured.  Whether or not we want to have autonomous drones, codifying the rules they’d operate under forces us to acknowledge the norms we currently follow.  And it gives us the opportunity to change, once we notice we feel uncomfortable.

And, as machine learning advances, we may not need to write the rules down ourselves.  It’s possible for computers to approximate “quintessentially human judgement” with high fidelity, as long as we give them enough data.

Imagine there was one notably trustworthy insurance claims adjuster named Alice, and we wanted to write a program that could ape her instinctive judgement.  Given enough sets of inputs along with the real Alice’s decisions, it would be possible.  The computer program might not come to its answers quite the same way (its model would probably be laden with epicycles and other errors of modelling), but it might be much closer to Alice’s judgement than any of the employees she trained.
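
Sketching it with scikit-learn and an entirely invented set of Alice’s past calls (every field name and number below is hypothetical), the training might look something like:

from sklearn.tree import DecisionTreeClassifier

# Each row of made-up data: [claim_amount, years_as_customer, photo_included (0 or 1)]
past_claims = [
    [1200, 1, 0],
    [800, 7, 1],
    [5000, 2, 0],
    [300, 10, 1],
    [4200, 4, 1],
    [950, 3, 0],
]
alices_decisions = ["deny", "approve", "deny", "approve", "approve", "deny"]

# Fit a small, readable model that imitates Alice's past calls.
alice_program = DecisionTreeClassifier(max_depth=3)
alice_program.fit(past_claims, alices_decisions)

# The imitation can now be asked about a claim the real Alice never saw.
print(alice_program.predict([[2000, 5, 1]]))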

And we could look at the code of the Alice-program to get a sense of what she might be weighting in her decisions.  Maybe we’d see:

if (photo_included == TRUE)
    compensation += 2000;

And we might decide to excise that line and talk to Alice about stripping the photos out of the applications she processed.
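
In practice, the Alice-program’s learned rules wouldn’t read quite that cleanly, but they can still be printed out and audited for the same kind of surprise.  A hypothetical sketch, continuing the scikit-learn example above:

from sklearn.tree import export_text

# Print the rules the fitted alice_program (from the sketch above) actually learned.
feature_names = ["claim_amount", "years_as_customer", "photo_included"]
print(export_text(alice_program, feature_names=feature_names))
# If photo_included shows up high in the printed rules, that's the learned
# equivalent of the compensation += 2000 line, and the same conversation
# with Alice follows.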

Using machine learning to approximate our own decision making is a way of examining revealed preference, ethical or otherwise.  I’m sure if I could look at a Leah-program that was adept at imitating the way I treat people, I’d be appalled by some of the rules of thumb that it was using.  It’s easier to delete the offending code from a program than from my heart and my habits, but the act of formalizing my choices helps bring these errors to my attention, so I can act.  As learning algorithms get better, I hope we make use of this opportunity for introspection into our own behaviors and look for ways to patch our moral bugs.

About Leah Libresco

Leah Anthony Libresco graduated from Yale in 2011. She works as an Editorial Assistant at The American Conservative by day, and by night writes for Patheos about theology, philosophy, and math at www.patheos.com/blogs/unequallyyoked. She was received into the Catholic Church in November 2012.

  • Martha O’Keeffe

    I’m not entirely sure what that essay is addressing. Is it saying “Yeah, blowing up cars full of people is not great, even if it’s done by fancy new military tech” or is it saying “Soon as we get even fancier new tech, we can be assured we’re blowing up the right cars full of people!”

    I would be inclined to think, even if the example did come from a “cheese-fest” television programme, that a human might be better able to make the distinction between “these are people shooting off fireworks in celebration” from “these are armed militants shooting off rockets” than a drone, no matter how autonomous.
    I agree that we humans make terrible decisions and use all kinds of fudges to rationalise them and explain away bad results and soothe unease, but I somehow don’t think that the decisions will be any better when they come about from a process written down in code for a machine. Yes, it would be nice to think we’ll look at how we decide group X is a legitimate target and think “Hang on, this is not right!”, but I am more afraid that we’d conclude “This is not right – we’re not hitting enough of them!” rather than “This is not right – we shouldn’t be doing this at all!”

  • Martha O’Keeffe

    Let me put the question, seeing as how today is the Fourth of July: how many of you think that a drone could differentiate between all the fireworks being shot off in displays and by private groups as harmless versus ‘gunpowder, rocketry, looks like anti-aircraft flak’? How many patriotic celebrations would be called in for an artillery strike if drones were flying over America tonight?

    • Randy Gritter

      Depends on the program. Just like Feb 29th or y2k. If your code is built to handle it then it likely will. Your point is just that the code would be complex. Nobody says it would not be.

      • Martha O’Keeffe

        My point is it would end up “Did you tick the boxes? Did you follow the protocol? Then don’t worry (or don’t rock the boat) about it”.
        Computers are only as smart as we are, and unless and until we get ones that really do think for themselves, they will only follow what we tell them to follow. Anyone who has ever got a utility bill demanding payment of €0.00 or else legal proceedings will be instituted will realise that ‘let the government write this programme by putting it out to tender and we’ll blow people up on the basis of what it tells us to do’ is not something that will fill you with reassurance that Nothing Can Possibly Go Wrong because we had our Top Men write this thing, Top. Men.

  • Randy Gritter

    I do think we turn human soldiers into drones. Every army in the world prepares its men for fighting in almost the same way. Build a strong identity as a soldier above all else. Uniforms and haircuts and such help. Then break down any resistance to following orders. Deliberately give them nonsensical orders to follow. Build a mindset of just do and do not question. So we try very hard to break down the empathy or common sense that humans come with. They are actually able to carry out atrocities without many problems.

    As far as the race thing goes, I think it would be helpful. Race can be used in so far as it is the best predictor we have of some things related to crime and terrorism. The trouble is race as a statistical predictor is hard to separate from racism in real humans. You make a judgement and you don’t know. Is it because experience has caused you to see a real risk in this person that corresponds with the data? Or are you just being a bigot? A program could separate these and only use racial data in an appropriate way.

    • Martha O’Keeffe

      Where human judgement comes in is when you can recognise that the real troublemaker isn’t the angry kid yelling and throwing chairs, it’s the quiet, sneaky one sitting in the corner watching the chaos that deliberately wound the angry kid up until he blew.

      I’d be worried that racial bias would still exist, except that it would be officially endorsed because “Oh no, police/social workers/claims adjusters don’t make decisions anymore, we have a specially designed programme that strips all the subjectivity out so it’s SCIENCE!!!”

      I mean, it’s not like SCIENCE!!!! was ever used before to prop up theories about ‘group X are inferior and we can prove it’, is it?

      I’m going to lob a Chesterton quotation at you all now, from “The Secret of Father Brown”:

      Father Brown snapped his fingers with the same animated annoyance. “That’s it,” he cried; “that’s just where we part company. Science is a grand thing when you can get it; in its real sense one of the grandest words in the world. But what do these men mean, nine times out of ten, when they use it nowadays? When they say detection is a science? When they say criminology is a science? They mean getting outside a man and studying him as if he were a gigantic insect: in what they would call a dry impartial light, in what I should call a dead and dehumanized light. They mean getting a long way off him, as if he were a distant prehistoric monster; staring at the shape of his ‘criminal skull’ as if it were a sort of eerie growth, like the horn on a rhinoceros’s nose. When the scientist talks about a type, he never means himself, but always his neighbour; probably his poorer neighbour. I don’t deny the dry light may sometimes do good; though in one sense it’s the very reverse of science. So far from being knowledge, it’s actually suppression of what we know. It’s treating a friend as a stranger, and pretending that something familiar is really remote and mysterious.

      …After an instant’s silence he resumed: “It’s so real a religious exercise that I’d rather not have said anything about it. But I simply couldn’t have you going off and telling all your countrymen that I had a secret magic connected with Thought-Forms, could I? I’ve put it badly, but it’s true. No man’s really any good till he knows how bad he is, or might be; till he’s realized exactly how much right he has to all this snobbery, and sneering, and talking about ‘criminals,’ as if they were apes in a forest ten thousand miles away; till he’s got rid of all the dirty self-deception of talking about low types and deficient skulls; till he’s squeezed out of his soul the last drop of the oil of the Pharisees; till his only hope is somehow or other to have captured one criminal, and kept him safe and sane under his own hat.”

    • http://thinkinggrounds.blogspot.com/ Christian H

      “Race can be used in so far as it is the best predictor we have of some things related to crime and terrorism. The trouble is race as a statistical predictor is hard to separate from racism in real humans.”
      Not quite. The trouble is that it is racism full stop.

      • Randy Gritter

        If you define racism that way, then it is not always a bad thing. Suppose a bank has been robbed, the thief is fleeing the scene, and the bank employees describe him as a black male. Are the police being racist when looking for a black male? Under some definitions of racism they are. The trouble is there is nothing wrong with the police looking for a black male in that situation. You can call it racist. You can call it sexist. It is simply rational. If they see a Chinese female they can be pretty sure it is not the person they are looking for.

        So there needs to be a distinction between race used as real identifying data and race-based decisions based on negative stereotypes and xenophobia. The trouble is that many decisions are made in the subconscious mind. Asking police to distinguish one from the other is hard. Asking supervisors to judge the subconscious choices of their officers is even harder. In theory a decision-maker based solely on data and not on any cultural or psychological factors might be useful.

        • http://thinkinggrounds.blogspot.com/ Christian H

          Hmmm. I will need to think about that. I would never define racism in such a way that it could ever be anything but bad, and my first response is still that racial profiling is racist but identifying a particular suspect based in part (but only in part) on race is not racist. I think it has something to do with the differences between the two examples, namely that in the example you give 1. you would use identifiers other than race to identify the suspect as well, 2. you are looking for a particular person in the robbery case, 3. you don’t assume before the robbery that the person isn’t white, and 4. you know that a crime has been committed in the first place. But I’ll need to think about it more.

          • Scott Hebert

            What I have just found out, then, is that ‘racial profiling is racist’ only when the profiling is done for bad ends? E.g., racial profiling is done ALL THE TIME in healthcare to discriminate ‘positively’ when doing interventions. So, yeah… I have very little issue with racial profiling when it has a statistical basis in fact, whether or not it’s for a ‘bad’ end.

  • Y. A. Warren

    I LOVE THIS! So much “meat” to chew on. It kind of explains why religions and other social structures have rules, as well as The Sacred Spirit in each of us, informing our free wills.

  • Rai

    Very interesting! Indeed, the development of new technologies forces us to come to terms with the gray zones in our moral codes. However, I don’t think the most interesting issue will be like the one you describe with your Alice-program. That is more of a statistical analysis problem, which we can do, with less efficiency, right now.
    The true problem will arise when somebody tries to create the behavioural program starting “from the ground up”, from a metaphysic and a moral code, instead of simply letting the machine ape a human. It will be an interesting experiment, to see what an embodiment of a philosophy will do. We are, after all, made of weak flesh; we often negotiate with ourselves about morality; a machine will instead apply the moral code perfectly and selflessly.
    We would be able to see the extreme conclusion of any philosophy, and by tweaking the program with built-in flaws also see if it’s applicable in human society. We could see, in short, the exact causes of the failure of a philosophy or moral code.
    I have seen in one of your posts that you have “The Fable of the Bees” in your library, so I feel confident that you might be interested in this prospect.

    • Roki

      The question is: how does one program a moral code, much less a metaphysic?

      Computer code is, as Leah points out, a form of logic. It is a pure logic, in that there is nothing there except the logic itself. It does not actually make decisions; rather it follows the logical instructions it has received. When presented with anything beyond its instructions, it returns an error.

      The best one could do at programming a moral code would be a kind of positivist casuistry. Meanwhile, the GIGO principle has not gone away; so, even if the code were perfectly structured, the best it could do would still depend on the judgments of the programmer. It would not be “an embodiment of a philosophy”; it would only be an expression of the programmer’s (or the programming team’s) choices about the algorithm.

      This even applies to so-called “learning” programs: the base learning algorithm determines what the program looks at and what data it analyzes and how it alters its output algorithm to develop new outputs.
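
      A toy sketch of what I mean, with everything in it invented: even in a “learning” loop, which features exist, how error is measured, and how the weights get adjusted are choices the programmer already made; only the numbers change.

      # Toy delta-rule learner: the update rule below is fixed in advance by the
      # programmer; "learning" only ever changes the numbers in `weights`.
      def train(examples, passes=20, step=0.1):
          weights = [0.0, 0.0]  # one weight per hand-picked feature
          for _ in range(passes):
              for features, target in examples:
                  guess = sum(w * f for w, f in zip(weights, features))
                  error = target - guess
                  weights = [w + step * error * f
                             for w, f in zip(weights, features)]
          return weights

      # Invented data: two features per example and a numeric target.
      print(train([([1.0, 0.0], 1.0), ([0.0, 1.0], -1.0), ([1.0, 1.0], 0.0)]))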

      All that said, Leah is exactly right that an observational or learning computer can be an excellent and revelatory mirror to our own behavior. Just as public speakers or performers record practice sessions to see and hear objectively what they are doing, to discover problems that they aren’t aware of during the performance; so all of us can use such tools to see objectively the patterns in the actions we take without being aware of them.

      What we do about those patterns is where our metaphysics and morality truly are embodied.

      • Rai

        Thank you for the answer. To clarify: I don’t think that this is something around the corner. Now to your objections:
        1) “It would not be ‘an embodiment of a philosophy’; it would only be an expression of the programmer’s (or the programming team’s) choices about the algorithm.”
        So it would be a practical testing of their choices, which are informed by their philosophy and moral code. If the program crashes, it would be possible to see if it ran into a genuine logical contradiction of the philosophy or an unforeseen situation.
        2) When you make decisions, you try to follow the logical consequences of your personal beliefs. I don’t see how this couldn’t be simulated by a machine. (Since this statement can be misunderstood pretty easily, what I mean isn’t that the result will be an AI. I mean that the logical working of a rational mind can be simulated not only when the mind is doing, for example, calculations, but even when it’s discussing the consequences of a very precisely worded philosophy.)
        Hope this tirade makes sense.

        • Roki

          I think I disagree in principle regarding the capacity of machines for simulating thought. You say:

          I mean that the logical working of a rational mind can be simulated not only when the mind is doing, for example, calculations, but even when it’s discussing the consequences of a very precisely worded philosophy.

          I don’t see how it’s possible for philosophy to be discussed without the mental act called “abstraction” (by Aristotelians, at least), that is, the ability to perceive the universal essences of things, and to make connections between them, and to apply these abstract understandings to new empirically perceived encounters.

          But this seems to be an entirely different kind of logic than a machine is capable of. Indeed, a machine is not really “doing” logic, but rather is a tool for making calculations or working through an algorithm.

          So a machine could, in principle, mimic the behavior of a rational person, and show the statistical values of different behaviors that it has been programmed to process. And this could be very useful for a person to see and understand.

          But a machine, so far as I understand its workings, is incapable of mimicking the “logical working of a rational mind,” because the logic of a rational mind involves abstraction, as well as other acts such as imagination and evaluation, which are different in kind from the algorithmic processing of a machine.

          • Rai

            Why shouldn’t it be able, at least in theory, to abstract? Or, failing that, to simulate the mental act of abstraction?
            Anyway, the machine can’t simulate the entirety of the mind, but coming to the logical conclusions of a set of rules is something within the realm of possibility. I used the wrong word there, “discussing”: the machine isn’t discussing anything, it’s simply plotting a course of action starting from the given philosophy. The programmer will see if the course of action is really what he wants. Really stupid example: the moral code lacks a command against murder. After some simulating, we discover that the robot would kill people to get groceries without having to wait in line. So we ask: is it really a good idea to use that kind of moral code?
            The “don’t kill” example is really straightforward, but applied to more complex issues it could give us some truly useful information.
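
            A really minimal sketch of the kind of check I have in mind, with every rule, action, and cost invented for the example:

            # Toy consequence check: the machine picks the cheapest action that no
            # encoded rule forbids. Note the moral code below forgets "no_killing".
            moral_code = {"no_stealing"}

            actions = {
                "wait_in_line":    {"cost": 30, "violates": set()},
                "steal_groceries": {"cost": 2,  "violates": {"no_stealing"}},
                "eliminate_queue": {"cost": 1,  "violates": {"no_killing"}},
            }

            def permitted(name):
                return not (actions[name]["violates"] & moral_code)

            best = min((a for a in actions if permitted(a)),
                       key=lambda a: actions[a]["cost"])
            print(best)  # -> "eliminate_queue": nothing we wrote down forbids it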

          • Roki

            The act of abstraction is one of the main reasons that philosophers in the Aristotelian tradition consider the mind to be immaterial: because we are able to conceive of essences or natures of things apart from their actual material existence.

            So, for example, we are able to form an idea of “tree” which is not any actually existing tree, but which applies to every actually existing tree, and which enables us to recognize a new kind of tree that we’ve never seen before. These ideas are usually called “universals” because they express what is universally present in a kind of thing.

            A machine is, at its most basic level, a set of switches which are arranged in such a way that the switches are changed or not according to a set of instructions. The instructions are usually binary in our current technology, but must always be quantifiable. Therefore, the action of the machine – and the limit of its action – is what can be reduced to a quantifiable instruction. If it can’t be turned into a number, then a computer cannot do it.

            So, how do you turn “do not kill” into a number? Perhaps that’s possible, by quantifying various signs of life: heart rate, neuron discharge, whatever. But how do you turn “do not harm” into a number? How do you quantify “threat”? How do you measure “self-defense”? How do you distinguish, in terms of pure number, two friends rough-housing from two enemies fighting?

            This is why I am highly skeptical of even a simulation of abstraction – to say nothing of a machine actually being capable of abstraction.

            Now, I agree that mimicry and simulation can give us lots of useful information. But it is not the same thing as making a choice, or even simulating the making of a choice.

          • Scott Hebert

            Roki, abstraction is actually a core concept of object-oriented programming. In Java terms, a Class is the abstraction of an Object; it is the archetype the program uses to create objects. (I apologize for the wall of text, but my CR isn’t working.) Every Object is an instantiation of its archetype Class. And programming languages can work not only with the Objects of Classes, but with the Classes themselves.
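
            The same point in a minimal Python sketch (the classes are made up), since there the Class itself is an ordinary value the program can inspect and pass around:

            # Made-up classes: Claim is the abstraction, each claim an instantiation.
            class Claim:
                def __init__(self, amount):
                    self.amount = amount

            class PhotoClaim(Claim):
                pass

            c = PhotoClaim(2000)
            print(type(c))                        # an Object can be asked for its Class
            print(issubclass(PhotoClaim, Claim))  # and the Classes can be reasoned about

            def make(cls, amount):                # a Class passed around like any value
                return cls(amount)

            print(make(Claim, 500).amount)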

    • Martha O’Keeffe

      I have great faith in the human capacity for self-deception, so the idea of noting the flaws in our moral philosophy when we try to code it for drones is not very convincing to me.
      What are the purposes of drones? In this one instance (I’m avoiding the police drones already used by some U.S. police forces) we are talking about “Kill our enemies”. Now, the stated policy may be “Only kill our enemies” or “Improve our surveillance” or “Protect our troops” or “Narrow down the list of targets as much as possible so we only hit the right ones” or a combination of excuses that boil down to “We’re the Good Guys here”, but what the drones really do is “Kill our enemies”.
      How do the drones know who “our enemies” are? We tell them that “These people. And their sons. And their neighbours. And anyone else in the vicinity when you blow up. Basically, kill the ones we tell you to kill and anyone else killed we will retrofit to be a legitimate target”.
      I’d like to think that writing the code would make us go “Hang on, we’re judging people to be enemies on a completely faulty basis!” but I think the code writing will instead concentrate on “Kill more better faster cheaper”. If it turns out we’re killing on the basis of “Shade of brown in skin/language spoken/religious affiliation/living in this district”, we’ll find some fudge to validate that as “No, we have good reasons to think these are indications of guilt; after all, if you don’t want to be killed in a drone strike, don’t live in that area”.

      • Rai

        Well, after all it will always be up to us to correct our flaws; no machine can do that for us. Sometimes the flaws are a kind of elephant in the room that everyone ignores for the sake of quiet living. Sometimes nobody honestly saw the flaws before, and they will be shown during the programming.
        The machine will simply obey its orders. But as it gets more autonomous, the biggest and most ingrained flaws will surface in its behaviour.

  • TheodoreSeeber

    Due to the autism, all of my ethics is if-then-else trees; I also don’t trust probability enough to let it have a play.

    And reading minds is right out.

    That’s why I tried to redefine rape as a decision tree *interior to the mind of the rapist* including noticing when one strays into dangerous territory (with the first thought of lust).

  • http://last-conformer.net/ Gilbert

    Just landed on an error page and noticed:

    This here post is #1000 on the blog. Too late for you to make this one a victory dance, but if you know how many are guest posts you could soon celebrate your personal #1000.

  • http://thinkinggrounds.blogspot.com/ Christian H

    I think the reason people would prefer humans to drones–intuitive reason, and I recognize that I’m speculating about their intuitions–is that we tend to infrahumanize people at a distance. When we program drones, we’re thinking about people at a distance, so the programming that goes into drones is one that treats humans as non-humans. Up close the other’s humanity confronts us; perhaps this makes us squeamish, less likely to do the thing we know we ought to do, but at least intuitively it seems more likely that we’ll try to find another, optimal solution that we just wouldn’t bother to try to find from a distance (perhaps because it might inconvenience us). Maybe tonight I plan to fire one of my employees because he’s incompetent, but when I meet him tomorrow, I will instead give him a second chance in another department.
    The problem with this, as Martha, Randy, and the police example point out, seems to be that we are perfectly capable of infrahumanizing people up close, too. But, still, if someone is willing to infrahumanize when they’re immediately encountering their enemies, I do not think there’s much chance that they’ll (we’ll) do anything else when programming drones. Maybe some of us might flinch to see it explicitly coded, as Leah argues, but not enough of us to make a difference. (But I’m speculating. I’d like to see experiments to that effect, of course, but based on already-existing experiments, I’m not optimistic.)

  • jason taylor

    That disadvantage would be justly applied to any weapon. Arrows aren’t human either.

    The idea of peculiar weapons being immoral, in a manner that is separate from how they are used, is simply the aristocratic idea that war is a sport and must be carried out “fairly”.

