# Markets in Everything!

At Bayes Camp, we aren’t just doing estimation exercises in class.  At the beginning of camp, the organizers told us that we could start prediction markets on anything we liked.  A staff member would estimate initial odds that a proposition was true or false, and we could add subsequent predictions.  You gained or lost points according to how much more accurate your prediction was than the one right before yours (full explanation at the bottom of the post).

The markets started out pretty quotidian (Proposition: it will be raining here on Monday at noon), but eventually shenanigans started up.  People realized you could place bets that would change the behavior of the campers.  After someone posted a bet on how many people would be present at the beginning of breakfast, another camper posted a sign-up sheet to bribe him to attend or stay away.

Anyway, one afternoon, a group of campers and I tried to throw one of the instructors in the pool.  We were unsuccessful, but vowed to try again, so someone opened a prediction market on whether we would manage to throw him in by midnight that night.  That’s when I started recruiting a posse.  I figured, if we got enough people, we could go over in a group and increment the odds up to 99% that he’d be thrown in, dividing the interval up so we’d all make the same amount if we managed to throw him in.  And once we’d written down the new estimates, we’d all stand to gain by throwing him in, and we’d be committing to work together.

“What if someone writes a higher estimate than it’s their turn to?”

“Then we all defect and they’re on the hook for their high guess.”

“Will that be enough of an incentive?”

“Ok, we’ll also all commit to throwing defectors in the pool.”

(I should note at this point that instigator and ringleader are very strong words.  If everyone around me has agreed it would be fun to throw a teacher in the pool, but hasn’t taken any steps toward that goal, and I’ve got some ideas, I see my role as that of a facilitator).

So we lurked outside when we heard the instructor coming back, and then surrounded him.  As the posse closed the circle around him, he asked, “What are you, a bunch of wolves?”  Then we lunged.  We wrestled him to the ground, but he turned out to be really agile and wriggled out of our grasp.  We were still between him and the house, so we managed to grab him a few more times, but, even with six of us to one, he kept managing to squirm away.  Finally, he managed to feint past us and run up a tree.  While we stood around the base, he rested briefly and then suddenly leapt to the ground, rolled, and took off down the street, leaving us dumbfounded.

After we gave up and trooped back into the house, I pulled people together to read “Wolves” from Hyperbole and a Half.  (You should all open a new tab and read that now).  I find the story hilarious, but a lot of my fellow campers started looking ashamed.  They were identifying with the six-year-olds in the story, and they were suffering major moral pangs for running an instructor out of the retreat.

“Maybe we shouldn’t have kept chasing him,” one person said.

“Did we take it too far?”

I looked at my fellow conspirators’ conscience-stricken faces and came to a realization of my own.  I got up and snuck over to the prediction market board and shorted the market to .01 True (thrown in the pool), .99 False.  If they were feeling that bad, there was no way they’d help me try to throw the teacher in when he came back to go to bed.  Luckily, they were too absorbed in their guilt to beat me to the board.

Pow! Instant utils for me.

(Assuming you think my increased chits outweigh the other campers’ surprise at my ruthlessness.  But I think the fact that one of them has nicknamed me “Wolf Pack” more than compensates).

Scoring metric: X = your estimate that the proposition is true.  Y = the previous person’s estimate that the proposition is true.  These range from 1 to 99 and are all integers.  (Zero and one are not probabilities.)

If it actually turns out to be true, your score is: 100*log2(X/Y)

If the proposition is false, your score is: 100*log2[(100-X)/(100-Y)]
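The scoring rule above is easy to sketch in code.  This is a hypothetical helper for illustration, not anything the camp actually used:

```python
import math

def score(X, Y, settled_true):
    """Points for moving the market from the previous estimate Y to your
    estimate X (both integer percentages in 1..99), given how the
    proposition settled."""
    if settled_true:
        return 100 * math.log2(X / Y)
    return 100 * math.log2((100 - X) / (100 - Y))

# Matching the previous estimate earns exactly nothing:
print(score(50, 50, True))   # 0.0

# Jumping from 50 to 99 pays off modestly if the proposition is True,
# but the same jump is punished far more heavily if it settles False,
# which is why the rule discourages reckless overconfidence:
print(score(99, 50, True))
print(score(99, 50, False))
```

Note the asymmetry: the log makes gains from pushing toward an extreme much smaller than the losses if you turn out to be wrong, which is what kept the posse’s 99% commitment scheme from being a free lunch.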

Leah Anthony Libresco graduated from Yale in 2011 and lives in Washington DC. She works as a news writer for FiveThirtyEight by day, and by night writes for Patheos about theology, philosophy, and math at www.patheos.com/blogs/unequallyyoked. She was received into the Catholic Church in November 2012.

• Hibernia86

This game is probably a lot like the real life stock market. I’m sure that people aren’t above manipulating the stock market to their benefit if they can. Insider trading is illegal there, but not at your camp apparently.

As for the wolves article, I’m sure I would have started biting back after the second or third time.

• http://paraphasic.blogspot.com Elliot

If we lived in a world of Utilitarians, this story would be told to children by campfires.

• http://thoughtfulatheist.blogspot.com/ Jake

This is amazing. I call Leah’s team if Unequally Yoked ever runs an “Ideological Survivor Test” of any kind.

• leahlibresco

To screw with everyone, someone has started a prediction market on what percent of markets will be settled at “False” by the end of the week.

• http://last-conformer.net/ Gilbert

Unless there is a rule against that, there needs to be a bet on the number of bets settling at “True” being even. Make baby Russell cry! (Though the camp leaders have a simple counter-move I’m not explicating right now.)

• deiseach

Sounds like you’re having fun

• Alex Godofsky

The market sounds awesome 😀

re: zero and one:
IMO “0 and 1 Are Not Probabilities” is really one of Eliezer’s worst essays. The whole point of Rationality, as far as I can tell, is to provide some useful tools for making decisions. He has a tendency to go well beyond that and make sweeping claims about metaphysics that aren’t really necessary for the core point – in this case “you shouldn’t be absolutely certain about anything; remember that time you were absolutely certain about something and turned out to be wrong?” You don’t need to completely revise axiomatic probability theory to do that.

• Ted Seeber

The sin of presumption counts in all forms of prophecy.

• Brandon B

I agree that there is something wrong about the way Eliezer is approaching probability. In his “Infinite Certainty” post, he distinguishes between the real world and our predictions about the real world, and I can understand the whole “You can never be 100% certain” bit, regarding predictions. The real world, however, can be certain. When a pitcher starts throwing a ball towards home plate, there is some uncertainty about how fast it will go (tangent: http://what-if.xkcd.com/1/). What is certain (probability = 1) is that the ball will not go faster than the speed of light. Yes, if you switch to the log odds scale, that probability maps to infinity, but so what? “Infinitely certain” is a good description of the probability of the ball traveling at less than light speed.

• Ted Seeber

But you can’t be *absolutely* certain that the ball won’t go faster than the speed of light.

Heck, the speed of light alone isn’t a constant. It’s much slower when traveling through quartz or water.

• Brandon B

Two things.
1) There are plenty of good mathematical concepts that can’t be instantiated perfectly in the real world. I don’t think we’ll ever create an environment that is actually at zero kelvin. Nevertheless, “absolute zero” is a useful concept. Similarly, since the fabric of space-time is distorted by gravity, Euclidean geometry doesn’t quite fit the real world. Nevertheless, it is conceptually very useful. So saying that 0 and 1 aren’t probabilities strikes me as more like saying zero isn’t a counting number; true, it’s hard to count ‘zero’ objects, but this is a technicality, and the concept of zero historically played a revolutionary role in mathematics.

2) “I can’t be absolutely certain that the ball won’t go faster than the speed of light” is an epistemological statement. It’s not a statement about the world, but rather a statement about what I know about the world. When I studied probability, I naturally conceived of it as statements about the world, not about what we know about the world. This is the same distinction that I alluded to before, from this post: http://lesswrong.com/lw/mo/infinite_certainty/. Yudkowsky calls it distinguishing between “the map and the territory”.

I agree that the map (our own predictions) may never be absolutely certain, but that’s a statement about the map, not the territory. When Leah snuck back to the board to make a new prediction, she didn’t actually change the probability that the instructor would get thrown in the pool. The real-world probability had already changed, because the group was feeling guilty. What Leah did was change her estimate about the real-world probability.

Yudkowsky’s own example seems pretty good for this: the probability that 2+2=4 is always 1. If bookies were taking bets on whether 2+2=4, they might set the odds at 99.99%, and they might get some people to bet against it, but 2+2=4 is still true. It doesn’t matter how bad I am at arithmetic, or how radically skeptical I am about “truth” or “mathematics”. Not only is mathematics objective, it is objectively absolutely certain. The territory can be certain, even if the map is not. I think Yudkowsky is making a mistake; he seems to talk about probability as being only the map.

• Dave

Wait, why are they scaling by log base 2? Wouldn’t you want to natural log it if you were? And I’m not sure I get the intuition behind why you’d logarithmically scale payouts in the first place; can you elaborate?

• leahlibresco

I’m not sure how they picked the base; I’ll try to remember to ask.

I think the scale is logarithmic in the first place because of this explanation:

In probabilities, 0.9999 and 0.99999 seem to be only 0.00009 apart, so that 0.502 is much further away from 0.503 than 0.9999 is from 0.99999. To get to probability 1 from probability 0.99999, it seems like you should need to travel a distance of merely 0.00001.

But when you transform to odds ratios, 0.502 and 0.503 go to 1.008 and 1.012, and 0.9999 and 0.99999 go to 9,999 and 99,999. And when you transform to log odds, 0.502 and 0.503 go to 0.03 decibels and 0.05 decibels, but 0.9999 and 0.99999 go to 40 decibels and 50 decibels.

When you work in log odds, the distance between any two degrees of uncertainty equals the amount of evidence you would need to go from one to the other. That is, the log odds gives us a natural measure of spacing among degrees of confidence.

Using the log odds exposes the fact that reaching infinite certainty requires infinitely strong evidence, just as infinite absurdity requires infinitely strong counterevidence.
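The conversions in the passage quoted above can be reproduced directly.  Log odds in decibels are 10 times the base-10 log of the odds ratio; the helper names here are my own, just to check the arithmetic:

```python
import math

def to_odds(p):
    """Convert a probability to an odds ratio."""
    return p / (1 - p)

def to_decibels(p):
    """Log odds measured in decibels: 10 * log10(odds)."""
    return 10 * math.log10(to_odds(p))

# 0.502 and 0.503 land a tiny 0.02 dB apart, while 0.9999 and 0.99999
# are a full 10 dB apart, matching the figures in the quote:
for p in (0.502, 0.503, 0.9999, 0.99999):
    print(p, round(to_odds(p), 3), round(to_decibels(p), 2))
```

As p approaches 1 the decibel value grows without bound, which is the quoted point that reaching certainty would take infinitely strong evidence.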

• http://last-conformer.net/ Gilbert

Base 2 allows for an intuitive interpretation as the number of bits of information one would need to rationally justify that probability change.

Of course the base doesn’t really matter because logarithms to different bases are proportional.
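Both of Gilbert’s points can be checked in a couple of lines, using the camp’s 100*log2(X/Y) scoring formula from the bottom of the post:

```python
import math

# Doubling your estimate relative to the previous one is worth exactly
# 100 points under the base-2 rule, i.e. one "bit" of evidence:
X, Y = 80, 40
base2_score = 100 * math.log2(X / Y)
print(base2_score)   # 100.0

# Scores computed with a different log base differ only by a constant
# factor (log2(x) = ln(x) / ln(2)), so every prediction's rank is the same:
natural_score = 100 * math.log(X / Y)
print(natural_score / base2_score)   # ln(2), for every X and Y
```

So the choice of base 2 is purely a matter of interpretation, not of incentives.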