As always, my posts shouldn’t be taken to be representative of Chris’s views, and remember to help out the Values in Action project, which will be packing 40,000 meals today for food insecure children.
Zach Alexander left a few lengthy comments in response to my discussion of his review of Chris’s book. I don’t think either of us was convinced by the other, so rather than enter into a lengthy back-and-forth in the comments section, I decided it would be more productive to take a few of the points he addresses and open them up to broader discussion.
To counter Zach’s still unfounded idea that Chris doesn’t (Edit: seriously) value truth, I wrote:
Zach complains that Chris focuses too much on eliminating suffering but not ignorance, and to that I ask, if eliminating ignorance doesn’t make the world a better place then what’s the point?
Zach says that “these don’t sound like the words of someone with a passion for knowledge and truth.” He writes:
Eliminating ignorance is an inherent good. That doesn’t mean it’s the only good, or that it outweighs everything else. But it is a good in its own right, contrary to the implication of your question. I would therefore go even further – we should reduce ignorance even at the cost of slightly increasing the suffering in the world. Better to be Socrates dissatisfied than a fool satisfied; better to gain knowledge of the universe and be a bit depressed by its vastness, coldness, and ultimate meaninglessness, than to be ignorant and a bit happier. If that statement sounds bizarre (which seems like your favorite word), I rest my case.
Putting aside the slight mixture of condescension and presumptuousness (I studied almost exclusively philosophy and psychology in college, am doing neuroscience research now, and plan to go to graduate school to pursue a career in academia. If Zach doesn’t think I value truth or knowledge, he may be overestimating the pay and prestige that come with a job as an assistant professor), there’s an interesting point raised here.
Saying knowledge is an inherent good certainly sounds appealing, but it’s much more difficult to justify in practice. Having anything as an inherent good is a tough sell in a naturalistic framework. Most atheists seem to subscribe to some kind of Utilitarianism, though, where something like pleasure, well-being, or preference-satisfaction serves as the fundamental good–the thing that is good in and of itself–and what is moral is what maximizes that good. In this type of system, anything else can only be instrumentally good–not inherently good, but good because it tends to bring about the fundamental good. It seems clear to me that knowledge is an instrumental good in this case, which means we should only be pursuing it if it makes the world a better place (lucky for us, recent history seems to validate this for the most part, but of course we have limits–things like ethics rules for research exist for a reason). Knowledge as a fundamental good is a hard case to argue, and I’m having trouble thinking of any ethical system that might allow it. As always, I’m curious and open to being proven wrong.
But there is something uncomfortable about living, in Zach’s words, as a “fool satisfied,” rather than “Socrates dissatisfied.” I think part of this can be explained by noting that curiosity and a striving for self-improvement are instrumental goods and thus should be encouraged, but there are nonetheless some tough cases, like the experience machine or willfully accepting a comforting illusion. I think a lot of this confusion boils down to our ability to decide for ourselves, though. There’s something noble about a scientist or philosopher forgoing material wealth and happiness to uncover some deep truth about the universe, but it seems perverse to force that decision onto someone else. That is to say, I’m more than happy to live a life as a scientist and accept all that comes with it, but it feels wrong to knowingly make someone’s life worse just so that they can have less ignorance. It just doesn’t seem like our choice to make (whereas I would have no qualms at all about going out of my way to help make a stranger’s life better).
I think this has implications for how we enter into debates or arguments–we should always be striving to make the person we debate with better off. Though I still struggle to apply this in my own life, it seems clear to me that we shouldn’t be arguing to boost our ego or make ourselves feel smart. Rather, we should aim to sincerely help and better our partner. In this case, I think some of Chris’s arguments against a subset of New Atheists hold: it’s not hard to find blog posts or submissions to r/atheism that seem to aim primarily to degrade believers, rather than address them with their well-being in mind.
Epistemic concerns are obviously important, but, to me at least, they seem necessarily grounded in ethics. That Chris and I might put moral concerns prior to epistemic concerns isn’t a bug that displays a disdain for knowledge, but rather a feature that properly grounds knowledge in human well-being. And I don’t see anything wrong with that.
Vlad Chituc is a lab manager and research assistant in a social neuroscience lab at Duke University. As an undergraduate at Yale, he was the president of the campus branch of the Secular Student Alliance, where he tried to be smarter about religion and drink PBR, only occasionally at the same time. He cares about morality and thinks philosophy is important. He is also someone you can follow on Twitter.