Yesterday, I linked to Luke Muehlhauser’s commentary on the inability of philosophers to come to consensus. He’s continued on the topic, proposing a curriculum for building better philosophers in “Train Philosophers with Pearl and Kahneman, not Plato and Kant.”
His list of recommended topics includes Bayesian statistics, machine learning, mathematical logic, game theory, cognitive neuroscience, etc. (Go to the link to see his syllabus.) When you look over all the prerequisites, you can see why Luke concludes, “I do think philosophy should be a Highly Advanced subject of study that requires lots of prior training in maths and the sciences, like string theory but hopefully more productive.”
Luke’s approach makes for a really interesting contrast with a recent post on Just Thomism:
[T]he fact is that settled questions involve our indifference to finding out the answers for ourselves. To the extent that some question is settled, we’re usually uninterested in going back and seeing the arguments for it, even when the arguments are demonstrative. But philosophy deals with the sort of questions that individuals want to answer for themselves – even where philosophy has demonstrations to give it still has to give them entirely from the beginning to each person in each new generation. Fundamental questions about God or evil or human goodness or the human mind will never be settled simply because there is something inhuman in thinking we could settle them in such a way that subsequent generations would have to take our answers for granted as opposed to working out the whole problem from the beginning for themselves.
Philosophy can’t ever advance because the whole point of philosophy is that everyone gets to start at the beginning, so far as this is possible. There is still a role for discipleship and moving through a pre-determined order of questioning, and by my own lights there are even some pretty-much-settled philosophical questions, but ultimately philosophy is about getting to the bottom of things for yourself, and so it is not supposed to progress much farther than the progress that one person can make in his own lifetime.
And a commenter on LessWrong hit on a similar point:
[I]t would be greatly beneficial if science were kept secret. It would be wonderful if students had the opportunity to make scientific discoveries on their own, and being trained to think that way would greatly advance the rate of scientific progress. Making a scientific breakthrough would be something a practicing scientist would be used to, rather than something that happens once a generation, and so it would happen more reliably. Rather than having science textbooks, students could start with old (wrong) science textbooks or just looking at the world, and they’d have to make all their own mistakes along the way to see what making a breakthrough really involves.
This is how Philosophy is already taught! While many philosophers have opinions on what Philosophical questions have already been settled, they do not put forth their opinions straightforwardly to undergrads. Rather, students are expected to read the original works and figure out for themselves what’s wrong with them.
For example, students might learn about the debate between Realism and Nominalism, and then be expected to write a paper about which one they think is correct (or neither). Sure, we could just tell them the entire debate was confused, but then we won’t be training future philosophers in the same way we would like to train future scientists. The students should be able to work out for themselves what the problems were, so that they will be able to make philosophical breakthroughs in the future.
I don’t need to understand calculus to catch a ball (though, as it happens, I’m better at the former than the latter). I wouldn’t need to do more than addition to avoid being fleeced if all I had to do was make change, but, in an age of collateralized debt obligations, I either need to learn a lot about risk really fast, or I need to set up a pretty trustworthy, reliable gatekeeper. Since we have reason to suspect (for theological or purely empirical reasons) that our moral reasoning still has a few bugs to work out, don’t we need a class of philosopher-regulators?
But it’s a little weirder to say that I’m comfortable deferring my understanding of the good life to a class of academics, even Kahneman-reading academics. First of all, as I mentioned when discussing Thinking, Fast and Slow, Kahneman’s work can identify inconsistencies in our reasoning without telling us which element to jettison. But the biggest objection isn’t that professional philosophers aren’t good enough at telling us the answers; it’s that we feel a particular duty to be fluent in these answers ourselves.
But no one (except sometimes me) says that learning math is necessary to build character. Learning math is part of having a more accurate picture of the world, and that’s awesome. But it doesn’t tell you what to do with your data. That falls into the realm of practical wisdom (what Aristotle called phronesis). And developing phronesis isn’t the kind of thing you want to outsource, any more than you’d like to turn over your relationship with your spouse to a data-mining algorithm that was better than you at predicting what response in a conversation would make zer love you most.
A virtue ethicist wants to become the good person, not just look up what the good person would do and then do it. That means we need to sharpen our moral perception just as we might strengthen our muscles. We can’t be like a student plugging formulas into a calculator without any idea of how they work conceptually. Otherwise, we’re not becoming better people; we’re just giving up our moral agency.