A Philosophers’ Blog Carnival!

This month, the New York Times featured numerous philosophers’ opinions on the “experimental philosophy” movement, which has garnered increasing attention in the last couple of years.  In response, at Thoughts Arguments and Rants, Brian Weatherson tries to discern what precisely makes “experimental philosophy” any different from the many other scientifically aware philosophical approaches that have existed since long before this current movement began:

I think there are three trends here that are worth noting.

One purely stylistic, and actually rather trivial, trend is that philosophers are now a bit more inclined to ‘show their workings’. So if I want to rely on Daniel Gilbert’s work on comprehension and belief, I’ll throw in a bunch of citations to his work, and to the secondary literature on it, in part to give people the impression that I know what’s going on here. You won’t see those kinds of notes in, say, J. L. Austin’s work. But that’s not because Austin didn’t know much psychology. I suspect he knew much much more than me. But because of very different traditions about citation, and because of differences in self-confidence between Austin and me, his philosophy might look a bit further removed from empirical work.

A more interesting trend is picked up by Ernie Sosa – philosophers are doing a lot more experiments themselves than they were a generation ago. This is presumably a good thing, at least as long as they are good experiments!

The university that Ernie and I work at, Rutgers, has a significant causal role in this. We encourage PhD students to study in the cognitive science department while they are at Rutgers, and many of them end up working in or around experimental work. That’s not to say I’m at all responsible for this – I’m much more sedentary than my median colleague. But many of my colleagues have done a lot to encourage students interested in experimental work.

The third trend, and this one I’m less excited about, is the reliance on survey work in empirical work designed to have philosophical consequences. It seems to me that surveying people about what they think about hard philosophical questions is not a great guide to what is true, and isn’t even necessarily a good guide to what they think. We certainly wouldn’t take surveys about whether people think it should be legal for an Islamic community center to be built around the corner from here to be significant to political theory debates about freedom of religion.

A slightly more interesting result comes from a survey that Matthew Yglesias posted this morning. If you trust Gallup, only 26% of Americans believe in “the power of the mind to know the past and predict the future”. This is more than a little nuts, at least as interpreted literally. I know that I had blueberries with breakfast, and I can confidently and reliably predict that the Greens will not win the Australian election currently underway. And I know these things in virtue of having a mind, and in virtue of how my mind works. There’s the power of the mind to know the past and predict the future in action!

Of course, the 74% of people who apparently denied that the mind has the power to know the past and predict the future probably don’t really deny that I have these powers. The survey they were answering was about paranormal phenomena generally. And I left off part of the question they were asked. It asked whether they believed in clairvoyance, which they ‘clarified’ as the power of the mind to know the past and predict the future. Presumably at least some of the people who answered ‘no’ (or ‘don’t know’) interpreted the question as not being about the power of the mind to know stuff through perception, memory and inference, but through some more extraordinary method.

It’s in general extremely hard to understand just what question people are answering in surveys. And this makes it hard to know how much significance we should place on different surveys.

The interesting discussion in the comments section to Weatherson’s post goes on for quite a while and is worth a look too.  Go Grue, the University of Michigan philosophy graduate students’ unofficial group blog, took issue with Weatherson’s suspicions about the uses and abuses of surveys:

I take it the thrust of the anti-survey refrain is in fact not about the survey method at all, but about the manipulations and interpretations of some experimental philosophy studies. Fair enough. But not all experimental philosophy studies use the same experimental manipulations and the same approach to interpreting the data. So, it seems to me, specific criticisms of manipulations or approaches to interpretation would be much more helpful than broad ones. At the very least, broad criticisms should identify what the problematic manipulations or approaches to interpretation have in common, and which studies fall prey to these problems. It looks like the experimental philosophy community has done a decent job keeping itself in check on those counts (e.g., this and that) even though progress certainly comes incrementally and slowly.

Moreover, once clarified, the thrust of the anti-survey refrain impacts philosophy’s interactions with empirical disciplines generally, and not just experimental philosophy. Perhaps the stake-manipulation study-designs do not adequately address epistemologists’ concerns and the straightforward inference from folk responses to philosophical conclusion is overly hasty. Even so, these are simply the kind of problems that frequently arise with interactions between philosophy and psychology (and likely other empirical disciplines too). Psychologists designing the experiments could easily, and perhaps even more likely given their lack of conceptual familiarity, miss philosophers’ concerns too. They might not know what the appropriate questions to ask either. Similarly, philosophers could misinterpret psychological findings. Additionally, in my experience, psychologists not infrequently misinterpret their own findings by making stronger conclusions than the data warrants; it would be bad, too, if philosophers were to draw philosophical implications by simply taking psychologists’ words at face value. This is not to say that philosophy ought not interact with empirical disciplines, but just that it’s hard generally. Consequently, I find the singling-out of experimental philosophy in this line of attack puzzling.

At Common Sense Atheism, Luke Muehlhauser interviews metaphysician Eric Steinhart, who develops the Neo-Platonic and possible worlds dimensions of Richard Dawkins’s philosophy, explains the ways in which arguments frequently offered in defense of the existence of the Christian God actually can be used to disprove that God, and lastly argues that atheists should explore non-theistic metaphysical possibilities of an afterlife and gives an indication of what such an atheistic afterlife might be.

James Gray critically reviews Torbjörn Tännsjö’s book Moral Realism at some length, summing up his response to the book briefly by writing:

I agree with Tännsjö that we observe intrinsic values, but I think his argument is slightly lacking. One, he argues that we experience pleasure as good and pain as bad, but it isn’t clear from his argument that we observe pleasure or pain as intrinsically good or bad. Two, Tännsjö seems to think that only pleasure and pain have intrinsic value, but it is a fairly common and uncontroversial assumption that human and animal existence also has intrinsic value.

Gray’s blog is called Ethical Realism.

Thom Brooks gives this enticing abstract for his article “Retributivism and Capital Punishment” (from the forthcoming Oxford publication Retributivism: Essays on Theory and Practice edited by Mark D. White) in which he opposes capital punishment on retributivist grounds:

Should retributivists reject capital punishment? It is easy to see how those holding different theories of punishment might oppose it. For example, a deterrence proponent could argue that capital punishment lacks a deterrent effect and, thus, it is unjustified. This seems a far more difficult task for a retributivist.

I will argue that retributivists should reject capital punishment for murderers. My argument will accept several concessions. First, I accept that capital punishment may be proportionate to the crime of murder. Thus, my claim is not that capital punishment should be rejected because it is disproportionate to murder. Secondly, I accept that capital punishment need not be cruel nor unusual punishment. This is an area of wide disagreement, but I do not wish to be distracted by these debates here. Note that I am not defending any particular method of execution. I simply assume that a method may be satisfactory. Thirdly, I also accept that capital punishment is not barbaric nor uncivilized. Some philosophers, such as Kant, rejected punishments for some crimes on the grounds that doing so might itself be a crime against humanity. This is also an area of wide disagreement I wish to avoid. In summary, these three concessions are accepted up front purely for the sake of argument. My claim is that retributivists should reject capital punishments for murderers even if they believed it proportionate for murderers, it was not cruel nor unusual to impose capital punishment on murderers, and capital punishment was not barbaric nor uncivilized.

Speaking of retributivism, Terrance Tomkow is working out a “retributive” theory of property:

According to Retributive Ethics, moral rights are merely warrants for violence. All rights are Retributive rights: rights to harm other people. So when a Retributivist hears a right asserted he asks only who is claiming a moral permission to hurt whom, by how much and when.

In the case of property rights the question has clear answers: property rights at least include rights to defend your property: that is, to do harm to those who attempt to deprive you of its use and the right to forcibly interfere with others if they should attempt to use it without your permission. You may interfere with them by building fences or locking doors or by putting your body in their path. Should push come to shove, you may push and shove or, if society recognizes your property claim, you may call on the cops, who will do the pushing and shoving and, if need be, shooting on your behalf.

You are in short, entitled to enforce your property rights. “Force” being the operative word.

Now, academics are apt to grow faint at the first mention of violence so let me quickly offer some palliating remarks for tenured readers. Note first of all that acknowledging a right to defend property does not require you to think property rights warrant absolutely any degree of violence in the defense of absolutely any property. You may not think it permissible to shoot a burglar if you do not have much worth stealing or even if you do. You may yourself be so passive or pacifist that you would not lift a finger to protect your belongings. You may be so empathetic to the plight of those driven by circumstance to theft that you wouldn’t even call the cops. Never mind. What matters is that you agree that having property makes it sometimes morally permissible (however personally repugnant) to forcibly interfere with others in some ways that would not be permissible if you had none. To deny this – to say that it is always morally impermissible to lock a door, build a fence, or lift a finger (even to dial ‘911’) to defend one’s property – seems to me to simply deny that there is any moral right to property.

But, we might ask, of what value is property itself that such moral claims can have any importance in the first place?  David Michael at Perplexicon explores what sorts of evolutionary purposes ownership might have served and how these might both explain and delimit our sense of the value of private property:

Perhaps it actually came about in the days in which we were still struggling against the elements, when we first invented tools which we thought should never fall into the hands of rival tribes. These tools would have been effectively owned by the whole tribe, but once we had mastered our environments the instinct remained, and since we had no need for it in its group form, we turned it in on ourselves. However it developed, we can be reasonably certain that it can only develop once the species is relatively comfortable, and once it is reasonably intelligent. Once it has established itself, its uses become apparent. We want to own things because we perceive there to be some utility or value in them. We might be wrong, but we are right enough of the time that the cumulative effect on our species is a good one. We make tools that make agriculture possible and then more efficient, that make hunting relatively risk-free, that allow us to build houses—I need not bore the reader with the full list. All of this is possible without ownership, but ownership gives us the incentive to employ our fullest efforts in this regard. It might be argued that the inevitable psychological effect of all this is that we place too high a premium on ownership, and only rarely does it have real value, which means that we have an inflated anxiety about possessions. But the uses of ownership clearly outweigh such psychological drawbacks as there may be.

Be this as it may, it does not mean that the ownership instinct is unambiguously good, in the way that the (indiscriminate) killing instinct is unambiguously bad. Given that we are taking survival as the ultimate bedrock of any morality, there are clearly considerations that can supersede the ownership instinct. If you are rich and a poor man steals from you out of necessity, is it right to react with outrage? Is it possible to say that the thief was morally wrong, when survival is the most important thing of all? In a milder case, what if you can find a considerably better use for something than its current owner—would it be wrong to steal then? If you are correct in your assessment, then stealing is better for society than not stealing. There are other considerations, too. If the “institution” of ownership is generally a good thing for society, then there might be times when non-ownership is better for society. If, for instance, a reclusive billionaire buys Picasso’s Guernica only to hide it in a dark lair for his amusement alone, we have a case where the needs of culture at large are being set aside for the sake of private ownership, where non-ownership might be better overall.

We can conclude with some certainty, then, that the ownership instinct is good for society so far and only so far as it serves society.

At Brains, Eric Thomson explores two possible criticisms of David Chalmers’s “hard problem”.  The second line of attack develops an unflattering parallel between Chalmers and the now discredited vitalist tradition:

How do facts about brains relate to facts about conscious experiences? Our understanding of both sets of facts is so undeveloped that Chalmers’ confidence seems premature.

By analogy, many people don’t understand how facts about energy relate to facts about mass; they can’t conceive of any possible logical route from one to the other. Most people’s understanding of energy is about as clear as our present characterization of phenomenal facts, so this seems an apt analogy. While facts about energy don’t supervene on facts about mass, that shouldn’t change the conceptual point.

The analogy people usually bring up against Chalmers is vitalism. Vitalists couldn’t conceive of how physico-chemical facts related to certain biological facts, and used this to infer that the physico-chemical picture of life was incomplete. The vitalist Driesch stated his argument strategy quite nicely when he claimed (in ‘Science and Philosophy of the Organism’ (1908), p105):

[S]omething new and elemental must always be introduced whenever what is known of other elemental facts is proved to be unable to explain the facts in a new field of investigation.

Driesch’s argument for vitalism was an application of that general inference rule. For instance, he argues (ibid, 142):

No kind of causality based upon the constellations of single physical and chemical acts can account for organic individual development; this development is not to be explained by any hypothesis about configuration of physical and chemical agents. Therefore there must be something else which is to be regarded as the sufficient reason of individual form-production…

This is a good example of a conceivability argument hitting the rocks.

Chalmers would likely argue that the analogy fails because in Driesch’s argument the target facts were “easy” facts about development, not “hard” facts about consciousness. This would be to miss the point that we need to exercise extreme caution when consulting our conceptual intuitions about what follows from what. That we now believe one group of facts is easy to reach from the other is clearly a contingent fact. What guarantee can Chalmers offer that he isn’t falling victim to a similar failure of imagination, lured by his contingent limited understanding of facts about brains and facts about consciousness?

Wondering what Chalmers himself has to say about this comparison?  You can read for yourself by clicking over to the post and reading the comments section, in which Chalmers makes an appearance.  And while at Brains, you might also be interested in Gualtiero Piccinini’s argument that “cognitive science as it was originally conceived is being progressively replaced by cognitive neuroscience”.

Turning from the mind to math, Glowing Face Man offers a discussion of “Nonstandard Worlds” at Xamuel.com, summarizing the piece as follows:

What does it mean to say (a hypothetical) mankind descends from Adam and Eve (or, more generally, finitely many matriarchs and patriarchs)? Does it mean that every ascending chain of ancestors reaches Adam or Eve? Or does it simply mean that everyone is a descendant of Adam or Eve? These two possibilities seem deceptively similar, but it turns out they’re not the same. I show that the latter interpretation allows for some very bizarre family trees of mankind!

A blogger calling himself only James at a blog called Moral Fideism argues that thinking dichotomously about science and poetry is a mistake:

insofar as the object of science is matters of fact, and truth in that discipline the correct representation of its object via thought, the field must be understood to have wider scope, including the lyrical arts. This is because if veracity in poetry is the accurate portrayal of feeling, and the means of its pursuit likewise the vessel of our intellect, it seems superfluous to need another set of a different name having as its members our emotions and cosmic reality. Rather, it is more parsimonious to have the latter as the genus and the former as a species of that family, the correct illustration of which may be termed the poetic science.

all associations might be passional–even those that seem rational. And if reason simpliciter is deprived of inference as its own possession, it cannot subsist independently without it. Further, if the antecedent of the previous is satisfied, it follows that if thought is an effect of our reason, and every effect may be affixed with a predicate belonging to the cause if only to refine the nature of its origin, then thought may be deemed passional as well in at least one sense. Finally, if the last obtains, then poetry and the hard sciences are of the same clan not only because their thoughts bear true representations of reality, but also because the source of those thoughts are our pathos–and to place diverse objects of the first under equally variant headings is uneconomical.

Kenny Pearce explores “the dialectical appropriateness of ontological arguments” in his summation and analysis of J.H. Sobel’s criticisms of Anselm.  Pearce sums up his article thusly:

J. H. Sobel claims that ontological arguments are somehow dialectically inappropriate. I argue that, while they are subject to some limitations (they require premises that can be consistently denied) they do not necessarily commit serious dialectical errors, such as begging the question.

Maryann Spikes provides a general outline of her views on morality and faith at The Sword and the Sacrifice Philosophy in her post “Why Ethics?” The section I found most interesting was her attempt to distance Kierkegaard from the charge of fideism:

Those who feel that we should just have faith may wonder why the Moral Truth Litmus, or a rational examination of the theories in ethics, is even necessary (see Objection 5 in Appendix E). They may feel that it shows how strong our faith is, when we do not question anything, and that it shows how weak our faith is, whenever we do question. They may feel we should just trust in the revelation of the Bible; that this paper goes through a lot of trouble to explain why the Golden Rule is the only viable theory of moral truth, when all we needed to do was search the Scriptures, and live out the great principle in our lives. Soren Kierkegaard, the great Christian philosopher and father of existentialism, is often misunderstood as being such a fideist (42). However, fideists like Kierkegaard would say that “Subjectivity is Truth”—that having objective evidence of the real ought (God, described by the Golden Rule) is a mere shadow of actually living it out (see Objection 6 in Appendix E)—faith “that” God exists is not enough—we must live faith “in” God. Love is about subjective faith, not objective certainty (see Objection 5 in Appendix E if the lack of certainty sounds heretical to you), as John Nash discovered in the marriage proposal scene of the movie “A Beautiful Mind”. There is truth in making truth personal, in going beyond mere intellectual assent and putting our faith in God, but before we can do that we are encouraged in the Bible to “reason together” (Isaiah 1:18) with God, to “examine everything” (1 Thess. 5:21) and to “give a reason” (1 Peter 3:15). If “faith” is the ultimate goal, then the object of our faith could very well be anything, as long as “faith” is accomplished. But we know this is not true. Kierkegaard’s point, and a major point of this paper, is that, though we need reasons for faith, we need more than that—we need to actually live out faith (but see Objection 24 in Appendix E). That is where we will find true satisfaction. 
However, when a person exalts blind faith, they think they are exalting the sort of faith that puts trust in a person, when actually they are insulting the person by saying there is no evidence that they are trustworthy.

On the subject of faith, I have devoted many posts to disambiguating numerous concepts all typically and confusingly referred to by the word “faith”.   While I appreciate Spikes’s interpretation of Kierkegaard, which would spare him from the charge of supporting fideism, I find the notion of rational faith to be a linguistic confusion.  I argue that all “belief” is too easily and too often lumped in with faith beliefs when faith beliefs are only a limited subset of beliefs in general.  While many philosophers are quite accustomed to thinking of knowledge as a species of belief (typically identifying it as something along the lines of a justified and true belief that has sufficient defeaters for any and all Gettier problems), much popular use of the word belief treats belief and knowledge as opposites.  And, worse, much popular thinking and debate about religion crudely equates the slightest degree of uncertainty with a fundamental failure to have objective knowledge or to be able to make claims of epistemic confidence at all.

Against such trends, this summer I have argued the following: that faith is a subset of belief not to be confused with belief itself; that faith is properly defined as belief either despite insufficient evidence or even despite outright counter-indications of the evidence; and that many who identify as agnostics because they think that knowledge about the existence of God is in principle impossible, or that the evidence for and against a sophisticated philosopher’s concept of God is inconclusive, are essentially just as much opponents of faith and all the gods of religious imagination as any atheists and are therefore in fact a kind of atheist and not merely agnostics simpliciter. And on this point generally, I have argued quite a bit that we need to stop simplistically presenting theism, atheism, and agnosticism as mutually exclusive or exhaustive categories. Instead, we should consistently distinguish beliefs about whether or not there is a god or gods (questions of theism, atheism, or abstention from belief in theism because of lack of evidence, which serves as a metaphysically cautious default atheism) from beliefs about whether or not one has actual knowledge about whether or not there is a God, which are questions of gnosticism or agnosticism and which are strictly speaking independent of one’s affirmation or denial of some or all deities.

One can be an agnostic atheist if one thinks there is insufficient evidence for God’s existence and so defaults to non-belief or a gnostic atheist if one thinks that, while not necessarily incontrovertibly certain, there is nonetheless sufficient evidence against the belief in a particular god that one can say one knows that that god does not exist.

And similarly, there can be gnostic theists who not only believe in a god or gods but who claim enough epistemic justification to say they know there is a particular god or set of gods.  And, on the other hand, there can be agnostic theists who admit to not knowing (or, even, not being able to know) whether there is a god or gods but who opt to believe despite insufficient evidence (or against it) and thus are possessors of faith, as I narrowly define it.

And, most important of all, the equivocation of the word God to cover both sophisticated philosophical conceptions and the superstitious personal deities around which concrete religions have traditionally been built has led to many people who are clearly atheists about personal, interventionist deities calling themselves agnostics or deists or pantheists but not atheists when, in a significant and culturally and institutionally decisive way, they really are.  So, again, I argue for greater clarity about what kind of agnostic or deist or atheist one is.  With respect to personal religious deities, one can be a gnostic atheist who thinks he has outright knowledge that they do not exist, while simultaneously being a philosophical deist who thinks some formulation of an impersonal divine first principle is metaphysically likely.

But in addition to all these discussions of faith and reason, the work I’ve done on Camels With Hammers of late which I am most enthusiastic about is my presentation of various aspects of my moral philosophy.  In particular there are my accounts of the nature and value of the virtues of pride and humility (and the ultimate harmony between them), my defense of teleology as a relevant contemporary category for explanation, and my arguments about how our morality realizes our humanity.

Finally, for more on the existence or non-existence of God, at M and M you can listen to a debate between University of Auckland Emeritus Professor of Philosophy Dr Raymond Bradley and Dr Matthew Flannagan on the question “Is God the Source of Morality?  Is it rational to ground right and wrong in commands issued by God?”

That concludes this edition. Submit your blog article to the next edition of the philosophers’ blog carnival using our carnival submission form.  Past posts and future hosts can be found on the blog carnival index page.

Your Thoughts?

About Daniel Fincke

Dr. Daniel Fincke has his PhD in philosophy from Fordham University and spent 11 years teaching in college classrooms. He wrote his dissertation on ethics and the philosophy of Friedrich Nietzsche. On Camels With Hammers, the careful philosophy blog he writes for a popular audience, Dan argues for atheism and develops a humanistic ethical theory he calls “Empowerment Ethics”. Dan also teaches affordable, non-matriculated, video-conferencing philosophy classes on ethics, Nietzsche, historical philosophy, and philosophy for atheists that anyone around the world can sign up for. (You can learn more about Dan’s online classes here.) Dan is an APPA (American Philosophical Practitioners Association) certified philosophical counselor who offers philosophical advice services to help people work through the philosophical aspects of their practical problems or to work out their views on philosophical issues. (You can read examples of Dan’s advice here.) Through his blogging, his online teaching, and his philosophical advice services, Dan specializes in helping people who have recently left a religious tradition work out their constructive answers to questions of ethics, metaphysics, the meaning of life, etc. as part of their process of radical worldview change.

  • http://www.examiner.com/apologetics-in-san-francisco/faith-101-do-we-need-less-faith-or-is-faith-strengthened-as-evidence-increases Maryann Spikes

    Thank you, Daniel :)

    All have faith (subjective certainty, to varying degrees below absolute certainty). Here’s why:

    1. There will never come a time when finite beings no longer need faith and attain absolute certainty. Faith will always be required in varying degrees, because certainty will only ever be had in varying degrees. Although objective certainty (truth, correspondence to reality) is required in order for knowing to count ‘as’ knowing, we must always leave our beliefs open to future revision, knowing that in the past we found out we were wrong (though, perhaps sometimes we were right after all?) about something we thought we knew. Sometimes we can feel very close to “absolutely certain” but we must leave room for revision, and therefore, faith.

    2. Faith that is strengthened despite counter-evidence is blind faith, bad faith, as discussed in Reasons for faith: Is faith blind? It is a mistake to assume that all faith is blind, bad faith, because faith does not just operate in religious belief. That isn’t to say that all ‘religious’ faith is necessarily blind, bad faith, either—I am merely addressing the common prejudice. There are many ways that faith can be strengthened and (as neuro mentioned) backfire in response to counter-evidence (google for reactance, forewarning, selective avoidance, refutational defense condition, biased assimilation, attitude polarization, cognitive dissonance, trivialization, less-leads-to-more effect, etc.) and being aware of these ways, we can prevent them from controlling our attitudes and behavior, including our “subjective certainty”.

    3. Because it ‘is’ faith, subjective certainty (excluding absolute certainty, which is unattainable by finite minds) that is strengthened despite counter-evidence is bad and blind, just like bad and blind faith. Bad faith ‘is’ blind subjective certainty, and blind subjective certainty ‘is’ bad faith. Any level of subjective certainty below absolute certainty ‘is’ faith. The more subjective certainty is strengthened, the less the weaker levels of subjective certainty are needed, just like faith is only needed when absolute certainty is lacking. The stronger the level of good subjective certainty, the stronger the good faith—because they are the same and are strengthened with strong evidence and weakened by counter-evidence.

    ***

    In conclusion, I stand by my assertion that we must have faith, or subjective certainty (though never ‘absolutely’ certain), in the strongest evidence, and that both certainty (though not ‘objective’ certainty) and faith are stronger when the evidence is stronger, when the answer/theory is more strongly justified, whether or not we are talking about religious faith/certainty. I leave you with this: Believing—faith, for all us finite folk—is a virtue when new evidence is battling against an old worldview over territory in the mind—but only when the truth wins the battle. Sometimes the worldview is true, sometimes the worldview is false. Sometimes the evidence is true, sometimes the evidence is false. Faith is a virtue (it is good faith) when you commit yourself to believing only what is true, despite the sometimes overwhelming “pull” of the familiar worldview or trending new evidence. All else is bad faith.

  • http://www.examiner.com/apologetics-in-san-francisco/faith-101-do-we-need-less-faith-or-is-faith-strengthened-as-evidence-increases Maryann Spikes

    Edit: I leave you with this: Sometimes a worldview or evidence is true, sometimes false. Believing—faith, for all us finite folk—is a virtue when new evidence is battling against an old worldview over territory in the mind—but only when we commit ourselves to believing only what is true (even if we don’t at first get it right), despite the sometimes overwhelming “pull” of the familiar worldview or trending new evidence. All else is bad faith.

