Hic Sunt Dracones

On the one hand, the maker of this globe in 1504 was woefully ignorant about the world.

It looks like South America was torn off roughly at the equator, and where the North American continent ought to be there are only a few scattered islands. The coasts and boundaries of Europe, Africa and Asia are, at best, crude approximations.

But on the other hand, this may be the oldest globe to depict the Western Hemisphere. While it’s far from perfect, it represents a huge leap forward in human knowledge and understanding.

Whoever made this globe knew things that people a generation earlier did not know. Given when this globe was made, and given the extremely limited resources available to whoever made it, it’s a remarkable achievement.

This is how science and learning and human progress work.* The leading edge of learning one day is bound to appear vague, partial and inadequate 500 years later.

We can make maps today that are clearer, more accurate and more complete than this old globe, but we do so mindful that 500 years from now, our best efforts may appear as sketchy as this beautifully carved artifact from 1504. We go about our learning mindful that our best knowledge may, someday, be improved upon in ways we cannot imagine by people with resources we cannot imagine that will enable them to see further and clearer and deeper than we are able to see now.

That’s both humbling and inspiring. Let’s do our best, like this anonymous globe-maker did, to impress our heirs 500 years from now. They will surely notice whole continents of knowledge missing from our best efforts, but let’s try to dazzle them with our care and craftsmanship and our painstaking effort to put together the best of what we know, pointing the way to something better.

- – - – - – - – - – - -

* The sentiment in this post is all quite welcome — even if it’s a bit overly sentimental — as long as the subject is cartography, or astronomy, or physics, or medicine or other fields where dramatic progress over the centuries is obvious and celebrated. But it’s mostly unwelcome if the subject is theology.

This little cartographical object lesson contradicts the mythological narrative we tend to construct around the study of theology, which regards the past as a Golden Age of perfect, complete, wholly accurate knowledge that has gradually been lost over the ensuing centuries. It’s a myth of regress just as powerful and potentially misleading as the myth of inevitable progress I flirt with in the first seven paragraphs of this post.

But it has the opposite effects. The ideal of progress is both humbling and inspiring — reminding us that our best knowledge is always incomplete and pushing us toward further inquiry and perpetual curiosity. But the ideal of regress warns against inquiry and curiosity as dangerous activities that threaten the precious remnants of the truths we have inherited. And rather than encouraging humility, it encourages arrogance — telling us that we may be worse than our ancestors, but we’ll always be better than our descendants.

  • BaseDeltaZero

    Everything breaks down with enough hypothetical scenarios, though. I think what we’ve determined here is that pure, ‘naive’ utilitarianism is flawed, but that doesn’t mean partial utilitarianism is worthless as a moral metric.

  • http://blog.carlsensei.com/ Carl

    Agreed. Everyone should learn utilitarianism as their first moral framework, but I don’t think it’s the final, metaphysically correct one.

  • http://anonsam.wordpress.com/ AnonymousSam

    I’m done with this thread because I keep being told that I don’t “get” utilitarianism without actually getting replies to my objections, just explanations for why those situations empirically won’t happen.

    Thank you for letting me know that I wasted my time replying to you. I’ll be sure not to make that mistake in the future.

  • Kenneth Raymond

Sam answered your objection about why compulsory secret organ harvesting is a losing game in the utilitarian framework, and your response there was to toss off a side point about deontology without actually acknowledging his point, and then to complain that nobody else is answering your objections.

Frankly, the idea of insulting someone who argues in such simple bad faith makes me kind of happy. And given that you disregard the production of suffering in your view of utilitarian ethical calculus, I guess my happiness is justified and thus utility is served.

  • Carstonio

    What does “metaphysically correct” mean?

  • Kenneth Raymond

    Okay. Here’s a more serious take on it, then. I doubt it’ll be worth the time I’m putting into it, but frankly I’m probably a little too conscious of people’s opinions of me, and I prefer the community overall to get the impression that I actually think through my positions and am not just objecting to your objections because “simple” or whatever.

    I am a sociologist by inclination and education. Hypotheticals are all well and good but, truly, they aren’t meant to be realistic and I am concerned with actual use and application of ideas. And in actual use, while hypotheticals are useful for finding the breaking point of an idea, they’re also frequently used to dismiss something that could be “better” because it isn’t “perfect.”

    The “welfare queen,” for example. The hypothetical dishonest individual cheating the system – offered as an excuse not to improve the system at all. It is itself a dishonest engagement with the problem (people suffering from poverty), using isolated edge cases (a life devoted to gaming the system) as a rhetorical device to devalue the actual benefits gained.

    In other words, I don’t consider hypotheticals as a substantial argument in and of themselves because they’re used extensively against the pursuit of real solutions. Your hard insistence on hypotheticals carries this flavor to me. Is it a failing of my personal worldview? Maybe, but damn if US political rhetoric doesn’t prop up my confirmation bias pretty strongly.

    So. This brings me to the “utility monster,” the foundation of several of your hypotheticals. Be it Felix in the SMBC comic, or the aliens who get off on human suffering, I disagree that those hypotheticals really do provide a test case for the extreme of utilitarianism because I think they involve: 1) a short-sighted inability to think their own premises through; 2) devotion to false limitations of choice by the actors involved; and 3) a failure to understand that not all happiness is of equal utility.

    Regarding point 1: carry the hypothetical further. Felix dies. Overpopulation sends the alien race into an ecological crash and extinction – or their sun goes nova, or the systems they use to observe human suffering break down. Whatever. In any case, something happens to remove the utility monster from the equation – because it will. Nothing is eternal, especially not your hypothetical utility monster (unless it’s God, but now we’re getting into the L&J version of God and that’s been discussed nigh unto death elsewhere on this site). What then? Felix dies and the entire world devoted to his happiness is completely without any happiness. The aliens are somehow removed from the equation and human suffering is going unenjoyed. The only ways a utility monster retains utility in the absolute extreme are if they render humanity extinct with them and are deliriously happy as they go, or if they were made so happy by others’ suffering that all human suffering in perpetuity is somehow retroactively contributing to their happiness even once they’re removed.

    Have we gone far enough into absurdity yet that the hypothetical breaks down and we can declare it irrelevant to the actual concept of utility? Yes? No?

    Okay. Point 2: false limitation of choices. The only choices aren’t restricted to “suffer for the utility monster” or “cause negative utility by refusing to suffer.” How are the aliens getting news of our suffering? Our transmissions via TV and radio? Ships hiding in orbit watching us and transmitting home (probably not this – shipboard duty is less fun for those handful of monsters than staying home and letting an automatic system handle it for them)? Doesn’t it generate greater utility, then, if we find a way to fake the signal output and let the aliens think we’re undergoing abject suffering while we’re actually making things work better every day here on Earth? They’re happy, we’re happy, so hey! Greater utility. Better choice all around. Turns out it wasn’t a silly binary after all!

    Actually, in the case of a single-person utility monster like Felix, wouldn’t it also be of greater utility to stick a wire in his brain and constantly stimulate his pleasure centers so he thinks he’s caught up in a world of infinite rapture, and let the rest of the world seek happiness without suffering for the monster? It’s a much more efficient way of dealing with an individual utility monster, it seems. And it’s not like his suffering from being strapped down, intravenously fed, and perpetually electro-shocked into joy is so great that it mandates everyone else suffer in his place instead.

    So that concludes Point 2, or, “Why the Utility Monster Doesn’t Deserve Truth.”

    And finally, point 3: not all happiness is of equal utility. There have been some studies in the news in the past few years indicating that happiness has diminishing returns relative to the resources devoted to maintaining and improving it. It may not be perfectly asymptotic, but it’s a useful analogy to understand – when your happiness is low (say at 10 units), it may take only an extra $2 per hour to increase your happiness by 5 units, but when your happiness is high (at 50 units) it’ll take, well, a lot more than $2 per hour extra to nudge it up to 55 units.

    Assuming we can assign “happiness units” a direct dollar value, anyway. Y’know. Hypothetically speaking.

    In other words, there’s far greater utility gained by improving a lot of people’s conditions a little than by improving a few people’s conditions a lot. You get more happiness value per dollar spent on the poor than the rich. Conversely, causing a little suffering to a lot of people who are already suffering decreases utility a lot more than is increased by making a happy person happier.

    Not even your hypothetical vastly-more-populous alien race can game this calculus sufficiently. As the numbers become larger, the differences become more granular on the upper end of the scale. Their happiness – even their collective happiness – increases in smaller and smaller amounts even as human happiness decreases in larger and larger amounts. That suggests there’s still a high minimum threshold of human suffering where the aliens still get their utility’s worth out of our suffering… but I already pointed out that we can just lie to them about it anyway without meaningfully decreasing their happiness, so screw them.
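    The diminishing-returns claim above can be sketched as a toy calculation. This is only an illustration, assuming a logarithmic utility curve (a common textbook stand-in for diminishing returns); the incomes, units, and the log form itself are assumptions, not anything from the studies mentioned.

```python
import math

def happiness(income):
    # Toy model: utility grows logarithmically with income,
    # so each extra dollar adds less than the one before it.
    return math.log(income)

# Marginal "happiness" gained from one extra dollar
# at a low income versus a high income.
poor_gain = happiness(1_001) - happiness(1_000)
rich_gain = happiness(100_001) - happiness(100_000)

# The same dollar buys far more utility at the low end,
# which is the "more happiness value per dollar spent on
# the poor than the rich" point in concrete numbers.
assert poor_gain > 50 * rich_gain
```

    Under this (assumed) curve the ratio between the two gains is roughly the ratio of the incomes, which is why a transfer from the well-off to the badly-off comes out ahead in the aggregate.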

  • malpollyon

    A lot of people have told me that I am making a strawman argument or that the various criticisms of utilitarianism don’t apply, but no one has explained why the criticisms don’t apply.

    This is not because your criticisms are so incisive that we are unable to respond, but because they are so unrelated to what actual utilitarianism as advocated by actual utilitarians proposes as to be not worth responding to. If you can’t be bothered doing even the most basic work of finding out what your opponents actually believe then they have no obligation to spend several hours educating you.

  • Hexep

    According to what I’m researching, different versions have the snake chapter, or the ant chapter, but not both.

  • http://shiftercat.livejournal.com/ ShifterCat

    As well as the problems other people have pointed out here (most obviously, that you completely leave out the factor of “not causing suffering”), you’re also making the mistake of confusing happiness with pleasure.

    Happiness, I think it’s safe to say, involves a feeling of fulfillment and belonging, of being comfortable in your own skin. It’s a lot more complicated than just pleasure.

    Say a writer is working on a novel. She’s likely to experience a certain amount of frustration and tiredness doing this, but when she’s finished, she’ll have something that gives her a sense of satisfaction.

    And don’t get me started on the notion that a couple who argues must not be truly happy together…

  • J_Enigma32

    It is, sorta.

    Poetic kenning is something like swan-rad, or “swan road”, to describe the ocean. It’s a compound phrase that describes something more concretely than a single noun does. “Bee wolf” means bear: bears are associated with honey, and so are bees, and “wolf” stands in for bear for one of two reasons – either wolves resembled bears or were thought to be related to them, or it’s an example of the Scottish Play in action (i.e., “bear” being taboo, because if you say the word, a bear will appear).

    “þu áspricest þín gifsceattes Englisc and Læden, ac áspricest þu sóþ Anglic?” ;-)

    —-
    Pronunciation guide (it’s easier once you pronounce it):

    þ = “th” as in that
    sc = “sh” as in ship
    æ = “a” as in ladder
    c = “k” as in kite

    Edit: Or what Jurgan said. That works too.

  • http://blog.carlsensei.com/ Carl

    Thanks for the thoughtful reply.

    1. Concerning hypotheticals: I’m not arguing against utilitarianism from a practical point of view, but from a theoretical one. Is it the correct theory of morality? Accordingly, theoretical problems count. A good theory of gravity should be able to handle the possibility of black holes, whether or not there are any black holes out there. Similarly, a moral theory, if it is the objectively correct one, should handle all cases.

    2. Your response to the utility monster problem is practical and good enough for use in everyday life, but it doesn’t address the underlying theoretical concern. It’s basically the Captain Kirk response to the Kobayashi Maru training simulation: you deny that there are no-win scenarios and change the rules of the game to reflect that. I don’t think you can do that in this case. The crux of the dilemma isn’t that this situation will ever come to pass (it won’t); it’s that if it did, if there really were a no-win scenario, then from an objective point of view the correct thing to do would be something that appears monstrous. Is that appearance false? It could be that our instincts are misleading us and the moral choice would be to please the monsters, just as our instincts mislead us about gravity or quantum physics, but I find it unlikely.

    3. Even if the monster doesn’t wreck utilitarianism, there are many other theoretical problems and gaps. Go back to my earlier post and all the other points I listed there. Even if utilitarianism can be defended as non-absurd, I don’t see any positive evidence for it apart from its simplicity.

  • http://blog.carlsensei.com/ Carl

    There are many, many flavors of utilitarianism. I’m primarily referring to classical hedonic (that is, pleasure based) act utilitarianism. Well-being utilitarianism has certain advantages but also has its own problems. I’ll agree that it’s better than hedonic utilitarianism though.

    In any case, there is no rule in any version of utilitarianism that you “not cause suffering.” Rather, and this is the heart of the theory, the rule is that the utility you cause must outweigh the disutility you cause to as great a degree as possible. You can do this many different ways, for example by saying that a little bit of pain counts ten times more than a seemingly equivalent amount of happiness, but in the end all utilitarianism must posit some sort of exchange between utility and disutility so that they can be put on a unified scale or else it’s not utilitarianism but some other theory.
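    The exchange Carl describes can be sketched as a toy calculation. The ten-to-one pain weighting is his own illustrative example, and the unit values here are made up; the only point the sketch makes is that utility and disutility end up on one unified scale.

```python
PAIN_WEIGHT = 10  # a little pain counts ten times more than
                  # a seemingly equivalent amount of happiness

def net_utility(pleasure, pain, pain_weight=PAIN_WEIGHT):
    # The heart of the theory as stated above: the utility you
    # cause must outweigh the (weighted) disutility you cause.
    return pleasure - pain_weight * pain

# An act producing 100 units of pleasure at a cost of 5 units
# of pain still comes out positive under this weighting...
assert net_utility(100, 5) == 50
# ...but 15 units of pain outweighs the same pleasure.
assert net_utility(100, 15) < 0
```

    Changing the weight changes which acts come out ahead, but as long as some exchange rate exists, pleasure and pain remain commensurable – which is the feature that makes the theory utilitarian at all.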

  • http://apocalypsereview.wordpress.com/ Invisible Neutrino

    EDIT: Never mind.

  • http://apocalypsereview.wordpress.com/ Invisible Neutrino

    Well, if it helps, in a philosophy class we started with Aristotelian concepts, which basically prescribe what men and women are/should do. We spent probably a month on utilitarianism (act- and rule-) and then went on to social contract theory and finished with “meta-” theories such as Marxian and Nietzschean critiques of then-extant moral structures.

  • http://shiftercat.livejournal.com/ ShifterCat

    In any case, there is no rule in any version of utilitarianism that you “not cause suffering.”

    That’s weird, because every discussion I’ve ever had about applying utilitarian ideas to the real world (previous to this discussion, anyway) focuses not on maximizing pleasure but on minimizing suffering.

    I’ll admit that I have very little interest in “pure” philosophy, only a keen interest in how a philosophy can or can’t be applied in the real world. Thus I tend not to read books whose focus is purely on philosophical theory. So I guess whatever version of utilitarianism I’ve previously discussed isn’t “formal” or “classical” or whatever… I kind of don’t care about that, because I figure no philosophy can survive contact with the real world and remain undiluted.

  • http://blog.carlsensei.com/ Carl

    Agreed.

  • Kenneth Raymond

    Any scientific model needs to take into account what’s actually present. It should be able to make predictions about what should follow from existing explanations for what’s known, and be testable from there… and predicting things that we can’t find evidence for (especially evidence that should be there) is not really a good sign for the theory. If black holes didn’t actually exist (“whether or not there are any out there”), it wouldn’t automatically follow that a good theory of gravity should account for them.

    Basically, that particular analogy isn’t far off from saying, “a good theory of evolution should be able to handle the possibility of fire-breathing dragons.” Er… not really.

    Your whole point is predicated on morality being objective anyway, which is rather highly up for debate. As a premise, it’s kind of a non-starter unless you accept other supporting axioms about the source of an objective standard. I don’t accept any particular axioms that support the existence of an “objectively correct theory” of morality.

    Even utilitarianism is fuzzy and funny. It’s why it’s called an ethical calculus, not ethical arithmetic. Any discrete act’s utility will be variable from person to person, or society to society. It is, in fact, rather irrational – but rationally irrational. If you’ve got a good-enough model of another person in your head, you can predict how they’ll respond to an action even if it’s wildly different from how you would respond to it. You can predict its utility relative to them.

    Each person and society is so idiosyncratic that you can’t make a general universal rule set to maximize utility for them all, but you can study how individuals react and trends within society to predict the results you’ll receive from various actions. This means that to achieve an actual, effective ethical calculus you must study the effects of your actions in society and correct your actions and “equations” when your predictions don’t line up to evidence.

    In other words I like utilitarianism because it can be strongly evidence-based, not because it’s simple. You can even derive the Golden Rule from it if you’d like, as one can see how increasing others’ happiness inclines them to give you the same regard, creating an environment conducive to cooperative increase in utility.

    And to bring it back to hypotheticals again, as BaseDeltaZero pointed out, you can construct a hypothetical to break down any existing philosophy or model. “But what if Smaug existed” breaks a lot of what we know about biology, chemistry, and physics. But it doesn’t mean that biology, chemistry, and physics are in the wrong – it means giant, flying, fire-breathing dragons are a proposition that cannot be meaningfully discussed in terms of these disciplines because dragons are not and cannot be real. The question of Smaug existing is Not Even Wrong, not proof that science is false because it can’t account for him as a hypothetical.

    As I’ve said, I am by inclination and education a sociologist, one who is concerned with the applicability of any hypothesis on how people and our creations work. I find your insistence on a perfect objective morality to be, well, pointless. I don’t think an objective morality can exist, and you haven’t put anything forward about it beyond stating your wish for it to exist… so what’s the point?

    EDIT: to defang some regrettable word choice

  • http://shiftercat.livejournal.com/ ShifterCat

    You used the word “mushy” above, which sounds as though what you’re objecting to is a less-pure kind of philosophy, but now you’re agreeing that philosophical purity isn’t a good standard anyway when we’re talking about real-world application?

    I mean, this whole discussion started out discussing real-world progress in the first place, yes?

  • http://shiftercat.livejournal.com/ ShifterCat

    I got really irritated when teachers asked ethics questions whose entire premise depended on a situation so contrived it was impossible: “You’re in a van racing to the rescue of a village when a child darts into the road and if you take the slightest instant to swerve out of the way, the entire village will die!” It was supposed to Make Us Think About Our Values, but it always made me think, “This is bullshit.”

  • http://shiftercat.livejournal.com/ ShifterCat

    Also: bringing up Smaug seems rather apropos, considering the title of this post. :)

  • Kenneth Raymond

    Needed a rather impossible creature and just kind of ran with it.

  • The_L1985

    Ah. I may be a mathematician, but I prefer the English of Chaucer to that of Alcuin of York.

  • http://www.oliviareviews.com/ PepperjackCandy

    I have a question about Latin and I’m not sure where to put it in the above discussion of Latin, so I’m going to stick it here and hope that someone can help me.

    My mom always said that the first lines of her Latin book were “Clara hidrium portat. Galba stet et Clarum spectat.” It had been years since she had had that book when she told me this, and it has been years since we last had that conversation (she died in 2006). Additionally, my Latin skills are somewhere between “not that good” and “nonexistent.”

    Aside from the obvious sexism in the sentences, which are supposed to say “Clara carries the pitcher. Galba stands and looks at Clara,” are those sentences grammatically correct?

  • http://www.ghiapet.net/ Randy Owens

    “stat”, “Claram”; other than that, it looks good.

  • alfgifu

    I thought the “bee-wolf” thing meant “wolf of the bees,” as in something that destroys bees’ homes, i.e., a bear.

    “Bee wolf” means bear; bears are associated with honey and so are bees, and “wolf” as a stand in for bear for two reasons: they either resembled wolves or were thought to be related, or an example of the Scottish Play in action (i.e., “bear” being taboo, because if you say the word, a bear will appear).

    Given that we don’t have any contemporary commentary on the name or why it was chosen, it’s all guess work. The idea that ‘bee-wolf’ means ‘bear’ makes sense in the context of our knowledge of kennings and their use in Old English and in Old Norse. It’s the most common interpretation. However, given the poetic significance of names in general, the ‘stings like a bee, growls like a wolf’ point might apply as well – it’s not as though they’re mutually exclusive!

    I would say it is extremely unlikely that Beowulf (an Anglo-Saxon hero) and Arthur (an Old Welsh legendary king) share any direct connection via Arcturus.

    - The Old English name for the constellation Arcturus translates as ‘the wagon’, not ‘the bear’.

    - There’s no mention of any Celtic connection in the text of Beowulf, which is in any case set in Scandinavia, not in the UK.

    - Knowledge of Greek was rather patchy around the British Isles in Anglo-Saxon times.

    - The Arthurian legendarium didn’t really take on its complexity and significance until some time after the Beowulf manuscript was compiled (circa 1000 CE), which was in turn a fair while after the poem was originally composed.

    Wæs sie hal.

  • http://blog.carlsensei.com/ Carl

    Real-world progress exists if you only care about utilitarianism, and ignore the World Wars, and assume that abortion is not a big deal (most utilitarians agree, but how to deal with potential humans is an open question in utilitarian ethics). The progress isn’t there, for example, if you think chivalry is important, or etiquette matters, or children should be deferential to parents, or subsistence farming is more ennobling than office work, or sexual promiscuity is harmful, or (to be clear, I personally disagree with this particular idea) if you’re a racial supremacist.

    My only point with this list is that the appearance of progress in the moral realm depends on the assumption that our current standards are better than past standards. We certainly think they are, but past people would surely have their own ideas about it.

    Mushy utilitarianism is a boon to the behavior of the average 21st-century person, but I don’t think it’s the best theory of ethics.

