Irrational with Respect to What?

In Thinking, Fast and Slow, Daniel Kahneman draws on a wide range of empirical studies in which subjects make clearly irrational decisions.  A choice throws an exception in an otherwise functional heuristic, and the subject takes an action that doesn’t promote his or her stated goal.  But one of the studies Kahneman cites doesn’t seem to fit this model.

In the experiment, subjects placed a hand into painfully cold water and had to keep it there for 60 seconds.  After a break, they put their other hand into an identical water bath, but this time, after the 60 seconds were up, they kept their hand in for an additional 30 seconds while the temperature of the bath was raised, so that it was still uncomfortable but not as painful.  (For the methodology nerds: the order of these experiences was randomized, the right/left assignment to the treatment conditions was randomized, and the subjects were simply told to keep their hand in the bath until the experimenter said to take it out.  The subjects weren’t given a description of the differences between the conditions.)

After experiencing both conditions, the subjects were told that they had one more round, but they could pick which condition they would repeat: the one they’d had on their right hand or the one on their left.  Overwhelmingly, the subjects picked the longer trial, even though it included all the discomfort of the shorter one plus an extra 30 seconds.

This kind of preference is explained by what’s known as the peak-end rule.  We don’t tend to remember experiences as the average of pleasure and pain multiplied by the length of the experience.  Instead, we seem to rank them according to two criteria: how bad (or good) the most intense part of the experience was, and how we felt right before the experience ended.  Both water baths had the same unpleasant peak, but the longer condition felt less awful right before the trial ended, so it was preferred.
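
To make the two scoring rules concrete, here is a minimal sketch (my own illustration, with invented discomfort numbers rather than data from the study) comparing a duration-weighted rating with one common formalization of the peak-end rating, the average of the worst moment and the last:

```python
# Hypothetical discomfort per second (0 = none, 10 = unbearable).
# The values are invented; only their relative ordering matters.
short_trial = [8] * 60            # 60 s in painfully cold water
long_trial = [8] * 60 + [5] * 30  # same 60 s, then 30 s of slightly warmer water

def total_rating(trial):
    """Rate an experience by summing discomfort over its whole duration."""
    return sum(trial)

def peak_end_rating(trial):
    """Peak-end rule: average the worst moment with the final moment."""
    return (max(trial) + trial[-1]) / 2

print(total_rating(short_trial), total_rating(long_trial))        # 480 630
print(peak_end_rating(short_trial), peak_end_rating(long_trial))  # 8.0 6.5
```

By the duration-weighted measure the longer trial is strictly worse (630 vs. 480), but by the peak-end measure, which is roughly what memory seems to keep, it scores better (6.5 vs. 8.0), which is why subjects preferred to repeat it.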

Kahneman found this heuristic unnerving:

Decisions that do not produce the best possible experience and erroneous forecasts of future feelings — both are bad news for believers in the rationality of choice. The cold-hand study showed that we cannot fully trust our preferences to reflect our interests… Tastes and decisions are shaped by memories and the memories can be wrong.

This sounds like a weird objection.  We know that humans lay down memories according to the peak-end rule.  Is it really rational to optimize according to how we think we should record and rank experiences rather than how we actually do?  Eliezer Yudkowsky writes about this kind of problem in his analysis of Newcomb’s Problem (it’s a little long and complex to excerpt, so I recommend just reading the whole thing).

The first place I ran across an example of the peak-end properties of memory was in a discussion of colonoscopies.  By leaving the probe in the patient for an extra minute at the end of the exam, doctors ensured that the end of the experience was less awful than the more active parts of the exam.  Patients in this condition remembered their exams as less unpleasant than did the patients whose doctors hadn’t extended the experience.

So what I’m wondering is, if Daniel Kahneman were going in for a colonoscopy, which technique would he prefer the doctor use?  Would he really rather feel reasonable than feel comfortable?

I guess Kahneman could sidestep some of these problems if he’s willing to operate within the constraints of how his memory works now while planning to go full-bore transhumanist and work on rejiggering how human memory encodes sustained sensation.  I really doubt that’s his project, but if it were, I’d be very interested in what criteria he or anyone else would use to compare different memory schemata.

About Leah Libresco

Leah Anthony Libresco graduated from Yale in 2011. She works as an Editorial Assistant at The American Conservative by day, and by night writes for Patheos about theology, philosophy, and math at www.patheos.com/blogs/unequallyyoked. She was received into the Catholic Church in November 2012.

  • Maiki

    I guess there is a problem with treating the “objectively shorter” scenario as the more rational one when the perception of pain is by definition subjective. If the longer trial was experienced as less painful, then it is the rational choice. It is like arguing that it is better to have a short surgery without anesthesia than a longer one with it, as though the amnesiac effect of general anesthesia were inconsequential to the decision-making process. Just because the “amnesia” in this case is a natural response from our bodies doesn’t change the fact that it makes the experience more pleasant. It is not irrational to take our subjective experience into account if our goal is to make that subjective experience more pleasant.

  • Touchstone

    Presumably, your actual subjective experience includes more pain in the longer, gentler-end case. You just remember it as less painful. It seems rational to choose the experience in which you will feel less physical pain but recall more, given that, unlike the subjective experience of physical pain, the recollection of physical pain doesn’t itself necessarily hurt.

    For example, consider a drug that completely inhibits long-term memory formation without inhibiting subjective pain perception (things very close to this do exist!). Suppose you say to me: “I’ll give you $1000 if you take the drug and then let me waterboard you for an hour.” I wouldn’t do it. Sure, 5-hours-from-now me will recall no pain and have $1000, but 30-minutes-from-now me is going to be in excruciating pain, and I care about him, too.

    • leahlibresco

      I didn’t really have a way to talk about the split between the experiencing self and the remembering self in the context of this post, so thanks for the excuse, Touchstone. To start with, your experiencing self does benefit from better memories down the road, since the subjective pain of remembering them is lessened. I have no idea what the conversion rate between the subjective experience of present pain and the subjective experience of remembered pain should be, though, so I wouldn’t argue that that fact makes the trade-off worth it.

      In the case of necessary, repeated medical screenings, I’d privilege the remembering self, since that’s what I’ll draw on in order to muster the willpower to schedule the next screening. For a one-off occurrence, I might be more willing to accept remembered discomfort instead of immediate pain.

      I think the general way I’m thinking about this is that I tend to care a lot more about what I do with sensory experiences than how they make me feel. (My virtue ethics are cropping up). So I discount the experiencing self a lot. I’m not sure that attitude is justified, but I’m also not sure how to calibrate these preferences.

      • Touchstone

        This makes sense, and I generally agree. My point was that with pure physical pain, there’s at least a case to be made for weighing the immediate subjective experience against the future remembered experience. How they balance will vary based upon a number of factors, but given that there is a tradeoff, Kahneman has a case (a stronger one in this particular case, given that the pain in question is purely physical — there’s no real psychological element — and pointless) that our intuitive behavior is irrational.

  • Jonathan

    If the doctors were really clever, they would take out the probe immediately and convince the patient that the minute after the procedure is the tail end of the “experience”.

    I don’t think the analogy to Newcomb’s problem holds. Kahneman is disputing that the test subjects chose the best available outcome; he’s saying it is worth trading off the worse memory for the better experience.

  • keddaw

    Regardless of the interesting metaphysical issues, doctors have an oath to do no harm. Objectively, keeping the probe in is unnecessary harm and so they shouldn’t do it.

    This tracks with my thoughts, which are that if there is an objective measure and our subjective experience of it is wrong, then too bad for our subjectivity. We should work on making our subjective feelings match the real world, not shape the real world to match our incorrect subjective feelings.

    • leahlibresco

      Is it really unnecessary harm if your patient’s memory is more positive than it otherwise would have been?

    • Anonymous

      Many things doctors do involve (hopefully) small short-term harm in hopes of fostering larger long-term benefit. The challenge is to show that this type of short-term harm doesn’t qualify.

  • SAK7

    As respects colonoscopies… having had several, let me tell you that you are knocked out before any probe comes near you, and you awaken in a recovery room. The worst part of that lovely process is undertaken the night before. Trust me.

    I wonder, however, how this may reflect upon childbirth and motherhood. Some women have multiple children, others just one… and point to horrific pregnancies and childbirth as the reason for deciding not to have more. I presume, as a male, that most experiences are generally similar across a wide number of cases… perhaps it’s not the birthing but the rearing of the infant.

    Another thought is what role, if any, post event stimuli may play here… for example, notice how the Clinton presidency seems to be fondly remembered these days?

  • Ray

    Well, it seems like asking after 60 seconds of the long condition whether the subject wants to take his hand out might provide a counterpoint to the study. I’m pretty sure the subject would choose to remove his hand if given the opportunity at the time. So if this is true, we would either have to assume one condition or the other produced an irrational response, or we would have to assume that asking the question 60 seconds into the long condition is a significant enough alteration of the subject’s subjective experience to justify the different response — e.g. asking the question draws the subject’s attention to the pain.

    Either way, it’s definitely a surprising result.

  • keddaw

    Leah, real harm occurs even when our memory is not accurate about the facts (e.g. date rape).

    We MUST act on facts and hope our brains will catch up, otherwise we’d use homeopathy, faith-healing and crystals in our health system as they make people think they’re better when it is, at best, placebo.

    • Anonymous

      Your examples are all things for which we have an objectively better solution… which also have subjectively better outcomes. No one debates that (objectively better + subjectively better) > (subjectively better + objectively worse). The question concerns (objectively better + subjectively worse) ?? (subjectively better + objectively worse), and the varying degrees of better/worse (as well as their variation over time).

      If someone were to have to choose between not remembering an actual date rape or having vivid persistent memory of a date rape that didn’t happen, which would they choose? I can imagine different answers from different people… and a lot of people asking questions like, “How physical was the actual rape… did it leave damage/disease/pregnancy? On the other hand, how brutal is the memory? Is it going to leave me a PTSD-ridden shell of my former self, forever unable to resume my former life… possibly committed to an institution?” These questions sound like attempts to compute a long-term objective measure, similar to Leah’s thought above about how the subjective colonoscopy experience might affect the likelihood of choosing to do it again.

      • keddaw

        My point is that if we are to trust science, including medicine, then we must use objective facts about the world and hope our brains catch up, otherwise we start harming people without their knowledge/consent because we know what’s best for them. That is a slippery slope best avoided.

      • keddaw

        There is also the problem that if/when our brains do catch up, we will have procedures in place that harm people for no benefit, and it is often harder to alter the long-held practices of experts in a field that requires much training to master than it is to educate consumers of the service.

        • Anonymous

          So you would be in favor of abstinence-only education, damn the current consequences… we just have to hope that our brains catch up?

  • keddaw

    Anon, that… doesn’t even make any sense. Our brains can’t catch up with reality by denying them knowledge about reality.

    • Anonymous

      It’s not really denying knowledge about reality. It’s acknowledging, “This is objectively the most effective method to avoid sexually transmitted disease and unwanted pregnancy.” It’s not our problem your brain hasn’t caught up yet. Do you see that the point is that pragmatic solutions may just be that: pragmatic solutions that don’t necessarily fit our ideal for how we think we should be, but account for how we see that we actually are? Maybe you think we should figure out a way around the peak-end rule, but pragmatically, we haven’t.

      Abstinence-only education denies people knowledge of reality in the same way that colonoscopy doctors deny their patients knowledge of the reality of the peak-end rule. In one case, you’re happy saying, “This is the way it actually happens. Let’s accommodate for it.” But only in one case.

      • keddaw

        No! I’m advocating giving people full information and allowing them choices (even the wrong ones) about their own bodies.

        I’m also advocating that doctors maybe practice the Hippocratic Oath and not cause unnecessary harm. Harm in this case being an objective fact and ignoring the pragmatic, subjective opinion of non-experts (i.e. patients).

        In the case of abstinence education, I think that is valuable and accurate information. Abstinence-only education is stupid, inaccurate and unrealistic.

        In the case of the colonoscopy the doctor is electing to inflict medically unnecessary harm on a non-consenting patient. Regardless of the doctor’s thoughts on the long-term good of the patient, that is a rather simple case of assault.

        • Anonymous

          So you’re fine with letting patients choose the extended colonoscopy given the information? And if people overwhelmingly choose it, it may become typical advice and practice. This conflicts with your complaints about long-held practices… especially for a placebo.

          That’s all I was trying to show. I just had to tickle your anti-reflex to get you there.

          • keddaw

            I’m fine with patients choosing it, I’m not fine with doctors offering it or performing it.

            If doctors perform the extended colonoscopy WITHOUT consent it’s simple assault, if they perform it with consent it goes against the oath they took as doctors.

            If patients are aware of their shortcomings about recognizing reality and want to trick their brains then so be it. I’d rather they tried to fix their brains to recognize reality, but sometimes that isn’t actually possible.

  • Anonymous

    I’m still missing any reasoning for why it goes against their oath. The scientific understanding of the body includes the peak-end rule. It is not pseudoscience. It is reality. If we discovered that pain receptors themselves act in a seemingly paradoxical manner, would you ignore that scientific reality, too, because you didn’t like it? If it’s simply that you cannot get over the fact that short-term harm is being done, then how can you justify nearly any painful or invasive procedure? Remember that you’re ignoring any and all future benefits (even ‘real’ ones).

    • Anonymous

      Maybe we should try a thought experiment. Suppose there is an uncomfortable or painful procedure. At some time in the procedure, a doctor could administer a drug which keeps them from having a memory of the pain or discomfort. Suppose further that this drug causes no pain itself. Would you allow it to be used?

      Then, suppose administering the drug requires a needle, so it causes the slight harm of a pinprick. Is it allowed? Then, suppose it causes the slight harm of a pinprick and a mild burning sensation. Is it allowed? Continue ratcheting it up. At what point do your emotions take over and you feel compelled to object?

      • keddaw

        “At some time in the procedure, a doctor could administer a drug which keeps them from having a memory of the pain or discomfort.”
        Like, general anesthetic?

        “Would you allow it to be used?”
        I am not, and have no wish to be, in charge of what people do. But to answer your question I have no logical objection to the voluntary use of a drug that reduces the memory of pain.

        “At what point do your emotions take over and you feel compelled to object?”
        It is not an emotional reaction; it is simply holding doctors to their oath, the original version of which had the line “… and never do harm to anyone.”

        Excluding the oath, it might be acceptable to offer patients the option of accepting increased, unnecessary pain to reduce the memory of it. I wouldn’t accept this, but it isn’t my place to deny consenting adults the choice. It is my place to stop this from becoming standard practice, as that may impact me at some point without my consent.

        It reminds me of when vaccines were first introduced and people wouldn’t/couldn’t believe that such a harmless thing could prevent such serious diseases, so mercury and other things were added to cause pain and suffering so that people would believe the vaccine was working. Should this still be standard practice, or can we agree that our brains have adapted to accept that harmless can be effective?

    • keddaw

      Anon, the peak-end rule is reality, but it impacts our memory of pain rather than the actual pain experienced. The only remote justification for applying unnecessary pain is to reduce the danger that patients will not return for a procedure due to their (incorrect) memory of how painful the procedure was. As someone who appreciates reality I’d much rather doctors (of all people) didn’t cause me unnecessary harm AFTER a procedure due to some notion of how my memory of the procedure might impact my future behaviour.

      • Anonymous

        And just to be clear, this applies to all levels of unnecessary pain, including a pinprick, no matter how painful the necessary procedure may be or how much you, as a patient, want the drug?

        If so, I believe the baby has met the bathwater, and we’ve found the point where you seemingly irrationally hold to a theoretical principle in the face of overwhelming pragmatic sense. …which I think was the whole point of this post. When reality seems irrational, we’re going to end up acting irrationally in some sense.

        • keddaw

          To (attempt to) reduce future pain/damage, doctors have made a pragmatic change to the principle allowing surgery, re-setting of bones, injections, chemotherapy etc. I believe my previous comment, posted after your response above, makes that clear.

  • Anonymous

    Keddaw,

    You don’t want to be the arbiter of what people do, yet you don’t want it to become standard practice. These statements obviously conflict in the limit where all other people choose the drug.

    I’m still unsure where you stand on the oath. At some points, you seem to take a really hard line, yet you’ve also mentioned that they’ve made exceptions. Can you just answer whether a pinprick for the purpose of forgetting a painful experience would be allowed under your interpretation of the oath (not making you a universal arbiter here… just your personal interpretation of the oath)?

    The question of vaccines is an interesting one. To me, it seems to force the discussion into a question of dualism. The problem with vaccines seems to be a problem of conscious reasoning, while the problem with colonoscopies seems to be a problem of deeply embedded machinery. The former seems changeable by education and force of will, but the latter seems physically unchangeable.

    Just like I can will all I want for my pain receptors to not fire when encountering a burn and have no success, I feel like I could will my amygdala to abandon the peak-end rule all I want without effect. Without some sort of theoretical dualism drawing a dividing line, I’m not sure I can say, “Will yourself to believe that vaccines work” any more than I can say, “Will yourself to feel no burn-based pain.”

    Of course, I would like to have some sort of dividing line, so that I can put these two in different categories. Then, the question would be which side of the line the peak-end rule falls on. Using my mathematical tools, I suppose the empiricist could try to measure the likelihood of behavior-modification across the population and then draw some sort of probabilistic or timescale-based demarcation… but this may leave things even fuzzier, especially as we argue over where the demarcations should be placed. And of course, if this is right, it’s guaranteed to be time-dependent, which means there’s likely no theoretical ground we can retreat to.

    …at the very least, I have something new to think about. As if I needed something else to distract me as the semester comes to a close…

    • keddaw

      “yet you don’t want it to become standard practice”
      Standard practice means everyone gets it unless they specifically object. I have no problem with people choosing painful/harmful things that have only a subjective future benefit. It is the breaking of the oath (as I recall it, the modern one appears to eschew the ’cause no harm’ part) and the possibility of it becoming SOP, without consent, that I object to.

      “just answer whether a pinprick for the purpose of forgetting a painful experience would be allowed under your interpretation of the oath”
      It would not. Neither would it pass my pragmatic interpretation of the oath, which only allows causing harm to a patient when there is an expected utilitarian net improvement. However, take the oath out of it and I have no problem with doctors* offering it or patients accepting it – as long as it isn’t pushed or treated as ‘the right thing’ for a patient, much like circumcision of male infants in the US at present.

      “the problem with colonoscopies seems to be a problem of deeply embedded machinery”
      Nope. The problem is with the fact that pain (in an individual, over time) can be measured objectively enough to show that doctors are inflicting pain that has no medical or psychological benefit – assuming patients not having the additional pain at the end (pun definitely not intended) are not traumatised by the increased remembered pain.

      “a question of dualism”
      Not at all. While that might be an interesting discussion on some other medical issue, this one is clear. People simply have to be aware of how much pain was actually administered, in spite of the peak-end rule, rather than their subjective feelings on the matter. Engage their rational brain and over-rule their amygdala. We do it for any number of things**, there is no reason to think this situation is exceptional.

      *Although at this point I’d have to adjust my personal opinion of doctors and their profession.

      ** Look on your wing mirror – there’s probably a sticker saying things may be closer than they appear. Surgeons cut into living flesh, not an act our non-rational brains would allow us to do. Firefighters rush into burning buildings. People save for retirement. etc. etc.

      • Anonymous

        So you think there is zero utility in not remembering a painful experience? Ok. I suppose you could think that. It’s pretty much what you have to commit to. I just wanted to make sure.

        I’d like to inject one more example: emotional pain. Would you claim that there is zero harm in emotional pain, which is purely memory driven? Could the memory of a painful experience be so great as to cause actual harm (my example earlier where a rape leaves “a PTSD-ridden shell of my former self, forever unable to resume my former life… possibly committed to an institution?”)? If so, would there be utility in reducing this memory?

        You’ve given a slew of examples which are pretty obviously in the realm of conscious thought, yet you’ve merely asserted that one can overrule the amygdala and change your perception of pain. Can you do this for all fear/pain processes? If so, why are we even using harm or pain as a theoretical boundary in the oath… you can just overrule it. What makes the amygdala fundamentally different than the pain receptors it receives signals from?

        • keddaw

          I really don’t know how I can make this much clearer:
          “So you think there is zero utility in not remembering a painful experience?”
          No, the utility gained by not remembering is a subjective utility which I’d rather not have standardised or forced on people without consent. I think it should be an individual’s choice whether this subjective improvement is one they’d rather experience (or not).

          “Would you claim that there is zero harm in emotional pain,”
          No.
          “which is purely memory driven?”
          Because it’s not.

          “Could the memory of a painful experience be so great as to cause actual harm”
          Yes, absolutely. Which, if I may quote myself: “[I am against doctors inflicting unnecessary pain assuming] patients … are not traumatised by the increased remembered pain.”

          “you’ve merely asserted that one can overrule the amygdala and change your perception of pain”
          No I haven’t. I’ve said one’s rational brain can overrule any implanted irrational mini-phobia created by the false memory of greater pain. e.g. If I KNOW pain was X but my memory of it is 2X then my basic brain functions are trying to have me avoid any pain > 1.5X. With the objective knowledge of the actual level I can override the natural instincts and return to the doctor/dentist/football field.

          Memory of pain does not equal pain. Memory of pain affects future actions. It would take a strong case for future actions to be so negatively impacted that we should (without consent) increase actual pain/harm at time T from X to Y, for no current medical benefit, to avoid the memory of pain X from impacting a decision at time T+1. I’m not saying such situations don’t exist, but the evidence would have to be very convincing.

          • Anonymous

            I can tell you’re getting frustrated with me, but I want to thank you for enduring. I wouldn’t be discussing it if I didn’t think you might be onto something interesting… and thus want to test it and see what I can learn.

            You now claim that there is utility, it’s just subjective utility (until, of course, it’s reached a subjective threshold of traumatization). Wouldn’t that subjective utility justify allowing a doctor to offer the service without running afoul of the oath? Would plastic surgery fall under this category, too, or is that benefit more objective?

            I don’t understand your statement that emotional pain is not purely memory driven. Can you explain this? If you have a good subclass of non-memory-driven emotional pain, then just pull that out and repeat the statement with the subset of memory-driven pains.

            “Ya know, I could have caused you more pain and you would have remembered less pain.” “Oh, well that really changes my whole perspective. I must not have felt much pain after all!” That doesn’t seem to make much sense. My pain was X. My memory of it is on a different scale, so we’ll call it Z. We have no idea how the X scale relates to the Z scale. We just know that if you experienced pain X+delta (in a certain way), you’d have a memory of pain Z-epsilon. Your basic brain functions at time T+1 only have access to the Z scale. Your brain really wants to avoid all pain, the only prediction of which comes from the Z scale.

            You override the data from the Z scale all the time in order to do things, but you can’t by fiat change the black-box mapping from X-values to Z-values any more than you can will yourself a change in the mapping from the output of pain receptors (we’ll call them R-values or something) to X-values. You can will yourself to ignore Z-values at a moment in order to do something… you can even will yourself to ignore X-values in the process of doing something. But you can’t change the mapping properties, no matter how stupid you think the map is.

            Memory of pain does not equal pain, but it can produce pain… and the more general harm. The case is not that hard. The patient finds the subjective utility outweighs the initial marginal pain. Given accurate and true information about reality, they choose which weighting of interests they prefer (remember that X-values and Z-values are on different scales… their computation of utility U is a weighted combination of X and Z (which need not be linear)). They can even change their weighting based on knowledge of the mapping properties. It would be harm to do it without informing them. It would also be harm to not respect their weighting, since they’re the ones who have to subjectively experience both moments T and T+1.
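
            (If a toy version of the X/Z picture helps, here is a short Python sketch with invented numbers; the peak-end average below is only a stand-in for the black-box X-to-Z mapping, since we don’t actually know its form.)

            # Toy illustration of X (experienced pain) vs. Z (remembered pain).
            # Numbers are invented; the mapping is a placeholder for machinery
            # that conscious reasoning can't rewrite by fiat.
            def z_from_x(x_values):
                """Map a sequence of experienced-pain values (X) to a remembered value (Z)."""
                return (max(x_values) + x_values[-1]) / 2

            plain_exam = [7] * 10               # procedure ends at its most painful moment
            extended_exam = [7] * 10 + [3] * 3  # milder discomfort tacked onto the end

            print(sum(plain_exam), z_from_x(plain_exam))        # X total = 70, Z = 7.0
            print(sum(extended_exam), z_from_x(extended_exam))  # X total = 79, Z = 5.0
            # The extended exam racks up strictly more X, but at time T+1 the
            # patient's gut-level forecast only sees the lower Z value.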

          • keddaw

            Not frustrated overall, just when parts that I think I have clearly defined are misrepresented/misunderstood. But I appreciate that you’re trying to understand me.

            “there is utility”
            There is.
            “it’s just subjective utility”
            No, the memory of an event is subjective. The actual event, and its effects, are objective.
            “Wouldn’t that subjective utility justify allowing a doctor to offer the service without running afoul of the oath?”
            We can skip the oath if you like, doctors (plastic surgeons) do many things that fall foul of the oath, with many justifying it on grounds that are … dubious*.

            “I don’t understand your statement that emotional pain is not purely memory driven.”
            When someone breaks your heart, or breaches your trust, there is a non-memory driven pain. It has effects that alter your worldview and can cause problems that extend long after the event. None of that is memory based. You can however have a memory induced minor breakdown based on what happened, but that’s a whole different ballgame.

            “Your brain really wants to avoid all pain, the only prediction of which comes from the [memory] Z scale.”
            Fortunately we have access to reason and facts in our higher brain functions.

            “It would be harm to do it without informing them.”
            Bingo!

            “It would also be harm to not respect their weighting”
            Not strictly true, but not worth quibbling about. Okay, it is. Patients’ nutty views rarely get much play in medicine, unless it will objectively get in the way of treatment or recovery. This is good.

            X vs. Z, see below.
            *And financially motivated.

      • Anonymous

        And I just realized that the rape example is a really good test case. If you put someone under general anesthesia and make sure that you don’t cause damage that will cause pain receptors to fire after they’re conscious again… and make sure you don’t leave any diseases/pregnancy, how are you claiming there is any harm? Suppose after they’re conscious, we show them a video of the event. Should they simply overrule their thoughts and say, “My theory says harm only occurs when pain receptors fire in a conscious human, so I’m alright with that.”?

        • keddaw

          This is a very emotive subject and may be difficult for some to read, especially as way too many women are raped in our society, but one has to be true to one’s principles so here goes…

          (What you are actually doing here is trying to lead me down the path of killing someone in their sleep isn’t a harm to them. Let’s see if I can walk you back up the path a little.)

          Making someone unconscious (or insensible) against their will – without a good reason such as a medically induced coma to minimise brain swelling – IS a harm. So right there we have the taking of someone’s autonomy harming them. Now the controversial part – anything you do to that unconscious body that is not noticeable after they come round does not increase the harm already done to them, at that point. However, to society at large you have become a much bigger danger and so you will be prosecuted for what you did regardless of whether the person ever finds out. If the person finds out then it massively harms them because, at the very least, it makes them lose trust in people, it makes it harder to get close to people in future along with various other psychological traumas.

          At this point people do overrule their thoughts, as part of the healing process, but it is perfectly rational to doubt all your relationships as one you trusted (assuming a date rape scenario) was so badly misjudged. It takes time and experience to get to the point where trust can come back into even the closest relationship (I assume – this is not something I have direct experience of, I’m just talking from a rational viewpoint). And it takes time for the trauma to be, if not overcome, then at least dealt with and some semblance of a normal life to resume.

          • Anonymous

            Suppose the subject willingly underwent anesthesia. Then, they’d be totally cool as far as the oath is concerned? It’s just society that has a twisted view of what ‘harm’ might mean… they irrationally conflate harm with the probability of future harm, while the oath carves out a definition of harm relying solely on X-values? I suppose this is consistent, but as I stated above, I’m not sure why we’re privileging the integral of X (or, well, some measure on X space) over other measures which may include other terms in the definition of harm.

          • keddaw

            Because, on your terms, X is real. Z only becomes real when it starts impacting X.

          • Anonymous

            …and this is the whole point. Z is real. It’s right there in the amygdala and everything. In fact, it is necessarily real by the fact that it can impact X. It’s not a nutty view. It is a biological fact… no matter how stupid you think the map is. You did not seem to complain above about the fact that you only have access to Z at future times. Of course you have reason, as well, but that doesn’t mean that Z is somehow not real any more than X is not real because I can use reasoning to overrule it (which people most certainly do… I just played hockey last night… definitely ignored me some X values).

            Think about possible ways to drive Z way up. Sure, they could cause psychogenic pain, affecting X directly. They could also do what you mentioned… “cause a minor breakdown” (perhaps driving it higher could cause PTSD?). You seem to agree that this would be objectively bad, yet it’s certainly not reflected in X. If it were possible, would it be harm for a doctor to ‘infect’ you with schizophrenia? …but that’s not reflected in X. That’s really my whole point. Harm seems necessarily broader than X values. An aside is that Z values are real, biological facts whose mapping cannot be changed by conscious reasoning… and which may be eligible for some inclusion in a computation of harm.

          • keddaw

            Anon, if you’re not still around that’s entirely understandable, but I’ll finish my thoughts for completeness anyway.

            Z values are real in that they exist, but they are (possibly false) memories of harm and not harm. Actual harm only comes about when Z affects behaviour, and it is the altered behaviour that is the harm, not Z itself.

  • Anonymous

    X values are real in that they exist, but they are (possibly false) experiences of harm and not harm (I’ve seen no general rule which pain receptors “should” rationally follow… or an accompanying demonstration that they actually do follow this rule). Actual harm only comes about when X affects behavior, and it is the altered behavior that is the harm, not X itself.

    I see no reason why one couldn’t make the above argument. Like I said, I ignored plenty of X values to play hockey. Some people actively seek out X values in various types of pleasure. Even more interestingly, X can only (at best) alter behavior instantaneously… for any moment thereafter, only Z values are accessible.

    Final thought: I find it interesting that harm has finally been expanded beyond X values (to include altered behavior). This can nicely account for the idea of infecting someone with schizophrenia. However, it seems to drive a huge hole in the original goal, which was to restrict harm to only X values. I mean, how do we determine which types of behavior modification are harm? Is any source of behavior modification capable of being harm, or just Z values? (I think the latter must be false… schizophrenia argument again…) In fact, we even start having problems with the old story of vaccines, because that was surely a matter of behavior modification.

    Even worse, informing or not informing someone of the peak-end rule when preparing to perform a colonoscopy might alter their behavior (i.e. how they choose to proceed)… but how will we ever decide if such behavior alteration is harm?

  • Pingback: Can you Cyrano de Bergerac your moral philosophy?


