Asking Computers What Our Ethics Are

In an essay on drone policy at Cyborgology, Robin James is skeptical of our intuitive approach to ethics and empathy, for many of the same reasons as psychologist Paul Bloom.  In the piece, James takes a critical look at why we prize ‘the human factor’ and feel unnerved by autonomous drones:

In this view, drones are problematic because they don’t possess the “human factor”; they make mistakes because they lack the crucial information provided by “empathy” or “gut feelings” or “common sense”–faculties that give them access to kinds of information that even the best AI (supposedly) can’t process, because it’s irreducible to codable propositions. This information is contained in affective, emotional, aesthetic, and other types of social norms. It’s not communicated in words or logical propositions (which is what computer code is, a type of logical proposition), but in extra-propositional terms. Philosophers call this sort of knowledge and information “implicit understanding.” It’s a type of understanding you can’t put into words or logically-systematized symbols (like math). Implicit knowledge includes all the things you learn by growing up in a specific culture, as a specific type of person (gendered, raced, dis/abled, etc.)…

Our “empathy” and “common sense” aren’t going to save us from making bad judgment calls–they in fact enable and facilitate erroneous judgments that reinforce hegemonic social norms and institutions, like white supremacy. Just think about stop-and-frisk, a policy that is widely known to be an excuse for racial profiling. Stop-and-frisk is a policy that allowed New York City police officers to search anyone who aroused, to use the NYPD’s own term, “reasonable suspicion.” As the term “reasonable” indicates, the policy requires police officers to exercise their judgment–to rely on both explicitly and implicitly known information to decide if there are good reasons to think a person is “suspicious.”

…We make such bad calls when we rely on mainstream “common sense” because it is, to use philosopher Charles Mills’s term, an “epistemology of ignorance” (RC 18). Errors have been naturalized so that they seem correct, when, in fact, they aren’t. These “cognitive dysfunctions” seem correct because all the social cues we receive reinforce their validity; they are, as Mills puts it, “psychologically and socially functional.”

Of course, when we code the blunt if-thens that make up a drone’s or a police officer’s heuristics, our decisions may still be informed by an “epistemology of ignorance.”  But there’s something about writing down:

switch (race) {
    case WHITE: prob_search *= 0.25; break;
    case BLACK: prob_search *= 1.2;  break;
    default:    break;
}

That makes us flinch.  It’s much easier to do things we don’t quite approve of when we can keep them shielded behind an ugh field, so their details are obscured.  Whether or not we want to have autonomous drones, codifying the rules they’d operate under forces us to acknowledge the norms we currently follow.  And it gives us the opportunity to change, once we notice we feel uncomfortable.

And, as machine learning advances, we may not need to write the rules down ourselves.  It’s possible for computers to approximate “quintessentially human judgement” with high fidelity, as long as we give them enough data.

Imagine there were one notably trustworthy insurance claims adjuster named Alice, and we wanted to write a program that could ape her instinctive judgement.  Given enough sets of inputs along with the real Alice’s decisions on them, it would be possible.  The computer program might not come to its answers quite the same way (its model would probably be laden with epicycles and other errors of modelling), but it might be much closer to Alice’s judgement than any of the employees she trained.
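As a rough illustration, here’s a minimal sketch (in Python, with scikit-learn) of what training an Alice-program might look like.  Everything in it is assumed for the sake of the example: the file name, the column names, and the choice of a shallow decision tree as the stand-in model.

# Hypothetical sketch: fit a model to Alice's historical claim decisions.
# The file name and column names are invented for illustration.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

claims = pd.read_csv("alice_claims_history.csv")          # past claims Alice handled
features = ["claim_amount", "policy_age_years", "photo_included", "prior_claims"]
X, y = claims[features], claims["compensation_awarded"]   # the payouts Alice actually approved

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

alice_program = DecisionTreeRegressor(max_depth=4, random_state=0)
alice_program.fit(X_train, y_train)

# How closely does the imitation track the real Alice on claims it hasn't seen?
print("Score against Alice's own decisions:", alice_program.score(X_test, y_test))

A real claims office would use richer features and a better model, but even this toy version would pick up whatever regularities Alice’s decisions actually contain, intended or not.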

And we could look at the code of the Alice-program to get a sense of what she might be weighting in her decisions.  Maybe we’d see:

if (photo_included == TRUE)
    compensation += 2000;

And we might decide to excise that line and talk to Alice about stripping the photos out of the applications she processed.
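With a learned model like the hypothetical one sketched above, “looking at the code” could be as simple as printing the rules it induced and checking how heavily the photo weighs in them; if we didn’t like what we saw, we could drop that column and refit.  Again, this is only a sketch under the same invented names:

# Continuing the hypothetical Alice sketch: inspect what the model learned,
# then retrain with the photo column excised.
from sklearn.tree import DecisionTreeRegressor, export_text

print(export_text(alice_program, feature_names=features))        # human-readable split rules
print(dict(zip(features, alice_program.feature_importances_)))   # rough weight of each input

reduced_features = [f for f in features if f != "photo_included"]
alice_program_no_photo = DecisionTreeRegressor(max_depth=4, random_state=0)
alice_program_no_photo.fit(X_train[reduced_features], y_train)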

Using machine learning to approximate our own decision-making is a way of examining revealed preference, ethical or otherwise.  I’m sure that if I could look at a Leah-program adept at imitating the way I treat people, I’d be appalled by some of the rules of thumb it was using.  It’s easier to delete the offending code from a program than from my heart and my habits, but the act of formalizing my choices helps bring these errors to my attention, so I can act.  As learning algorithms get better, I hope we make use of this opportunity for introspection into our own behaviors and look for ways to patch our moral bugs.

