So, JT asked:
3. You undoubtedly have a logical proof of some sort for a moral lawgiver. What is it?
No, I definitely don’t have a modus tollens, modus ponens-style justification for my new position. I didn’t have one for my old position, and I doubt JT’s got one for his metaphysics. As the name suggests, metaphysics is hard to test.
So I end up approaching the problem from both sides. I look for things I’m really confident in or that I’m willing to presuppose (e.g. other minds exist, arbitrary murder is wrong, the worth of my life is at least within an order of magnitude of the worth of any randomly chosen human). These are first principles, and they’re pretty hard to prove (logically or otherwise). I’ll try and knock out a slightly less scattered list at some point in the future.
Imagine I’m trying to map out the ocean floor. These first principles are my soundings of depth. They may not be taken in a very systematic way, and there may be regions where I can’t take any soundings at all (the water’s too dangerous, my rope’s too short, etc). But this is the dataset I want any proposed map of the ocean floor to match.
The trouble is, I can end up with a lot of possible maps that satisfy these conditions. So now I start making some judgements about what a good map looks like. Maybe good maps of the ocean floor don’t have jump discontinuities. Maybe good maps are self-similar at a lot of different scales. This helps me pare down the list of possible maps, but the good map criterion is also hard to prove. Some of it is aesthetic, though I might also notice that certain principles of good maps do better than others at anticipating future soundings.
And that brings me to one of the biggest ways the ocean floor analogy isn’t a great stand-in for “How do you pick a metaphysics?” When we’re talking about moral philosophy, we start out with soundings in a lot of places. There isn’t much uncharted moral territory (by which I mean, places where we don’t have a preference between two outcomes or courses of action). And some of the uncharted spots are boring and unhelpful. (Is “there are two identical twins unconscious in a pool and you can only save one, which one?” really going to help you distinguish between competing metaethical theories, or is it just going to burn through working memory to little use?)
Sometimes, the closest I can get to new soundings is moral questions I change my mind about. Let’s say I used to prefer X, and now I prefer not-X, and I’ve been considering two metaethical systems, A and A’, which output ‘X’ and ‘not-X’ as correct, respectively. Now that I’ve changed my mind, I should increase my expectation that A’ is true, since it was right before I was. (Note, this doesn’t work if you’ve adopted not-X because you became convinced A’ is likely. That way lies a feedback loop and madness.)
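To put that update a little more formally (this is my own rough Bayesian sketch, not anything from the original exchange, and it assumes for simplicity that A and A’ are the only live options): if E is the evidence “I changed my mind from X to not-X,” then

\[
P(A' \mid E) \;=\; \frac{P(E \mid A')\,P(A')}{P(E \mid A')\,P(A') + P(E \mid A)\,P(A)}
\]

Since A’ is the system that called not-X correct all along, I judge P(E | A’) > P(E | A), so the posterior on A’ goes up. The feedback-loop caveat just amounts to requiring that E came from somewhere other than already trusting A’.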
And, at the abstract level at which I’m writing this post, that’s how Catholicism won me over. It matched a lot of the soundings I’d already taken, it predicted some measurements I later found myself having to revise, and it looked like a tolerably good map. It seemed like a better map than my virtue-ethics atheism. I suspect the place where JT and I disagree the most is not about the good map criteria, but about the moral soundings dataset I was trying to match up to a plausible map.
Now, some people will say: why choose a map at all? Why not just plot all the soundings and use that? (This is the “Why truck with metaethics?” objection.) There are a couple of reasons. First of all, when you pare off all the most uncertain bits, the tattered map you’ve got left isn’t always going to be of much use. It’s good to come up with some kind of schema for thinking about the gaps; you probably aren’t indifferent between all possible depths.
Second, there’s something else I need to add to this analogy to make it resemble moral philosophy a little more closely. Imagine you’re checking ocean depths with a really sucky rope. I don’t just mean that your observations are a little noisy (though they are); there’s also a whole list of only recently discovered failure modes for your instrument (and we’re not particularly confident that we’ve sussed out all of them). Thinking at the map level might make it easier to spot when your observations are buggy. You don’t want to take all your observations at face value.
Finally, it’s easier to have a discussion/debate with someone at the level of maps (and goodness of maps) than it is at the level of intuition datasets. Stirrings of conscience are an internal process; your interlocutor can’t watch you measure out fathoms of rope. So it’s easier to thrash things out when you can switch back and forth fluidly between predictions and theory (often using thought experiments).
And just one more thing: it’s at the map level that you get the concept of an ocean, not just a collection of points in xyz coordinate space.
Where does that leave me? Well, it’s possible that Catholicism/virtue ethics/teleology is a useful map, but one with some major deviations from the territory it’s supposed to depict (say, the existence of God). I don’t discount that possibility, and I’ve flagged some really weird predictions it makes that my intuitions still don’t match. But the map has proved helpful, so I expect the territory to have some level of correspondence. (Ptolemy’s epicycles got a lot right. You wouldn’t expect the model that replaced them to lose the ground they covered.)
I don’t know that I’m right, but I think I’m less wrong. I think I’ve burned out some errors in my model of the world that won’t crop up again even if my confidence in the God map dipped below some critical threshold and another explanation surged ahead.