Bart Ehrman and the Quest of the Historical Jesus of Nazareth

Tom Verenna drew a new book to my attention, Bart Ehrman and the Quest of the Historical Jesus of Nazareth. He has now written a very negative review of the volume.

Although Richard Carrier is a contributor to the volume, and says that Tom’s review is too scathing, Carrier’s own review is not much less so. What he writes about the contributions to the volume by people like Rene Salm and Earl Doherty is absolutely priceless!

On the subject of mythicism, see also part two of the review of Carrier’s book on the blog Diglotting.

Of related interest, Kris Komarnitsky has an article in The Bible and Interpretation which tackles Sherwin-White’s “two-generation rule” regarding the development of myth. The fact that Plato was identified as having a divine parent within his own lifetime is more than enough to demonstrate that figures can be mythologized quickly, and not only after a long period of time. But as Komarnitsky points out, Sherwin-White’s rule has been misconstrued: it was about the erasure of history, not the appearance of mythologizing elements. Yet even on this point there is counter-evidence to Sherwin-White’s proposal, as well as reason to think that the case of Jesus of Nazareth may have lacked the constraints counterbalancing the tendency towards mythologization that one finds in other instances. Definitely worth a read by those interested in ancient history in general, and in the historical figure of Jesus in particular!

Also of related interest, a Portuguese open access journal, Revista Jesus Histórico e sua Recepção, was mentioned on AWOL.

 

  • http://unsettledchristianity.com/ Joel

    An older article by McCasland suggests it takes about five years.

    • Matt Brown

      You seem to have a misunderstanding of what Sherwin-White means. Sherwin-White is not saying that it takes two generations for legend and myth to develop. He’s saying that it takes two generations for myth and legend to completely remove the core historical facts of a story, event, or person.

      The gospel accounts were written on average within less than one generation.

  • Tom Verenna

    Kurt Noll’s contribution to ‘Is This Not the Carpenter?’ takes the opposite position to the one Komarnitsky takes (and Lee Strobel, who was a rather odd choice for him to defend on B&I), and I believe it uses better methodology, but that is my opinion.

    If you don’t mind a gentle nudge, Carrier’s new book has nothing to do with mythicism; it is a book on quantitative methods for the historian and nothing more.

    • http://www.facebook.com/dan.ortiz.54 Dan Ortiz

      Quantitative methods? It’s not his Bayes’ theorem approach again, is it?

      • Tom Verenna

        Yes, and in my opinion he demonstrates its usefulness. BT is used in archaeology and in other fields of history already. It is an accepted part of historical research, and all Carrier does is demonstrate (successfully, in my opinion) its value in BS.

        • http://www.facebook.com/dan.ortiz.54 Dan Ortiz

          “successfully in my opinion” Of course you do. BS in deed….

    • Ian

      I thought Carrier’s throwaway line about the use of Bayes’s Theorem in Textual Criticism was interesting, and it made me wonder. I’ve seen Bayes’s Theorem marshalled to bolster a range of different positions: from Carrier’s general skepticism to WLC’s overwhelming likelihood of the resurrection. But one feature is that I’ve not yet seen any BT-derived conclusion that was accepted by other BT-advocates.

      • Tom Verenna

        All I’m saying is read his arguments in his book before knocking it. It is easy to throw stones when your target is at a distance.

        • Ian

          Sorry, my response implied I was being negative towards you, which wasn’t my intent. I do think Carrier is on very dodgy ground and his ‘quantities’ hold no water. But that isn’t why I wrote that; I was just noting that the only time he mentioned that quantitative approach was in casting doubt on another’s use of it.

          I read your paper on Ehrman’s book for the first time also, and enjoyed it. Bits I disagreed with, and lots I have nowhere near the expertise to assess, but the rest very much meshed with my opinion of the book. Despite being convinced on the historical argument, I thought the book was really disappointingly weak.

          • Tom Verenna

            Thanks for this reply. I appreciate your feedback on my paper. I suspect that not everyone will share my views, but glad to hear that you enjoyed it nonetheless. That, for me, implies that I wrote a good paper. =)

      • arcseconds

        Given that Bayes’s theorem says nothing about what prior probabilities to accept (or, for that matter, likelihoods), isn’t that exactly what we’d expect?

        • Ian

          I wrote a series of math explanations on this on my blog in response to Carrier’s book. The bottom line is that BT is poorly conditioned for the kinds of inputs that these folks are putting in, so running BT on a set of imprecise inputs gives you a conclusion that is *vastly* more imprecise than your assumptions. In the worst case, there is no information in the output. But the usual implication is that BT is doing something useful: giving you a conclusion that is built out of your assumptions, when often the reality is that BT is eating all the information from your inputs and giving out noise.
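
          To make the conditioning problem concrete, here is a minimal sketch (the numbers are invented purely for illustration, not taken from anyone’s argument): feed interval-valued inputs, each known only to within ±0.05, through BT and look at the width of the output interval.

          # Minimal sketch: propagate interval-valued inputs through Bayes'
          # theorem and compare the input widths with the output width.
          from itertools import product

          prior_iv = (0.05, 0.15)   # P(H), known only to +/-0.05
          lik_iv   = (0.20, 0.30)   # P(E|H), likewise
          alt_iv   = (0.10, 0.20)   # P(E|~H), likewise

          def posterior(p, l, a):
              """P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
              return l * p / (l * p + a * (1 - p))

          # The posterior is monotone in each input, so checking the corner
          # combinations bounds the output exactly.
          vals = [posterior(p, l, a) for p, l, a in product(prior_iv, lik_iv, alt_iv)]
          print(min(vals), max(vals))   # ~0.050 to ~0.346

          Three inputs each a tenth wide come out the other end as an answer spanning nearly a third of the unit interval, which is the kind of blow-up I mean.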

          • the_Siliconopolitan

            But what is the alternative to BT? (Sorry, I should go back and read your posts.)

            Isn’t BT just common sense formalised? If BT produces nonsense, then how can the ‘ordinary’ arguments of historical research do any better? How are they exempt from the propagation of error that BT suffers?

            • Ian

              BT is the formalization of one particular kind of reasoning: the inverting of a conditional probability. That is an important kind of deductive reasoning step, but only one of many. There are many ways to formalize reasoning, and we can give probabilistic formulae for them. Some of those formulae could be directly rewritten as a form of BT; others don’t map nicely onto it (e.g. A and B -> A). If we make certain assumptions about probability, we could write anything in terms of BT, but that’s not a very deep result. We can write any number as a fraction too; it isn’t helpful to do so unless it tells you something about its structure.
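
              For concreteness, the inversion in question is just the standard statement of the theorem (nothing contentious here):

              $$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}$$

              i.e. you feed in the likelihood P(E|H) plus a prior and get back the inverted conditional P(H|E).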

              When someone weighs up lots of evidence, of lots of different kinds, and brings their experience to bear, they are doing kinds of reasoning that don’t map simply to BT. Sure, if we could atomize the steps and rewrite the logical process, we might be able to phrase them in terms of BT, but we’d end up with hundreds or thousands of variables, with all kinds of non-independencies.

              When someone tries to give a BT explanation of a single step, they are pretending that that step is done in isolation from any other evidence, and that the other evidence is non-correlated.

              We’ve been through this before, in the 80s-90s, with Bayes-based expert systems. It works, but it turns out the BT bit is *by far* the least interesting bit of the process. For example, in the expert system I use for clients, three lines of a 57,000-line code base (for the core reasoning system) are taken up doing BT! The hard work of dealing with vast data sets is getting the data right, clean, and the errors decoupled, and bringing together the vast numbers of calculations to give a final answer. That’s the hard bit, and it is so hard that we often don’t work through BT calculations all the way: we hand off to other things like order statistics and fuzzy membership values.

              Historians do that at a gestalt level, as do doctors and good stock traders (all professions where judgement is crucial and results are always laden with caveats).

              To use BT in the Carrier/William Lane Craig way, we take all that information and expertise, do the hard work of historical analysis on slightly the wrong problem, reduce it to a number, then feed it through BT to get another number which is a more error prone answer to the problem we wanted in the first place. So what? Why not just estimate the probability you’re looking for?

              BT buys you just about nothing; if anything, by trying to force historians not to accumulate all the data, it makes them less able to do their job. This is the same reason doctors don’t do a bunch of tests, then pick one and do BT to get a diagnosis.

              I’ve rarely seen BT used in history except as an apologetic device. It seems like you’re doing Math (TM), and that should be impressive and rigorous. You are doing math, but it is not rigorous. Using BT on data sets of historical scale would be possible, but it would be a vast computational problem, not half a dozen calculations.

              One caveat: it is possible in some cases to use BT to show illogical reasoning, when someone claims something that doesn’t follow. But if you want to learn some math to help sharpen your reasoning, then something like the predicate calculus is going to be more useful in more situations than BT.

              • Ian

                …to use BT as an apologetic device, you start with an estimate of what the final probability should be. Then you phrase the question in such a way that you can choose a reference class that will give you the right answer. And hey presto, your original estimate is now more believable, because you derived it through BT, based on a single class of evidence.
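
                Here is a toy sketch of that reverse-engineering (all the numbers are hypothetical, and prior_for is just a name I’m inventing): fix the posterior you want and the likelihoods, then solve BT backwards for the prior (i.e. the reference-class frequency) that delivers the desired answer.

                # Invert P(H|E) = l*p / (l*p + a*(1-p)) for the prior p.
                def prior_for(target, l, a):
                    return target * a / (l * (1 - target) + target * a)

                # Want the conclusion to come out at 95%? With P(E|H) = 0.8 and
                # P(E|~H) = 0.1, you need a reference class in which H holds
                # about 70% of the time, so pick your class accordingly.
                print(prior_for(0.95, 0.8, 0.1))   # ~0.704

                Then the 95% looks derived rather than assumed.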

                • arcseconds

                  I’ve got a degree of weak agreement with Carrier (or maybe it’s more of a hope) that making the reasoning steps explicit in a formal or semi-formal treatment might allow greater transparency of the reasoning process, and at least let historians and others see exactly where they differ.

                  E.g. perhaps the fact that I’m giving a little more weight to the reliability of oral history than you do, plus a slightly greater probability for earlier authorship of the Gospels, plus a slightly higher probability that independent testimony is preserved in the apocrypha, might all add up to give my much larger credence in a substantial picture of the historical Jesus than you have.

                  Another interesting possibility with formalized reasoning is the old ‘one person’s modus ponens is another’s modus tollens’: if you don’t like the conclusion of a deductive argument, you can always reject a premise. Similarly, in a Bayesian treatment, if you don’t like the final probability you can always go around adjusting the likelihoods or whatever.

                  • Ian

                    I agree, putting forward your axioms, your method, your lines of evidence and the weight you assign each of them is crucially important. But I think that good historians do that. Take something like Crossan’s The Historical Jesus (a book I disagree with, but I think is great). He lays out in quite specific detail his method, his sources, how he evaluates them, and then follows these to his conclusion.

                    I can see immediately where I disagree with him. Would it help if he estimated numbers for everything and did some BT calculations? No, I don’t think so. Not even for someone like me, used to looking at probability distributions.

                    The fact that some historians are less clear in their arguments doesn’t mean that BT is some foolproof way of whipping them into shape, I don’t think. In fact, the opposite: since it is so easy to blind with science, I think a lot of historians would find it more difficult to spot the rabbit going into the hat if it was all couched in numbers. Particularly if the conclusion were relying on the BT errors.

                    And in my uncharitable moments, I think that’s why Carrier and Craig do it. The debate between Craig and Ehrman, for example: it is clear Craig is doing the BT math because he rightly surmises Ehrman won’t be able to counter it, or maybe even follow it. Craig is interested in winning the debate, so he certainly doesn’t want to engage with Ehrman on topics that Ehrman knows a lot about. It is easier to try to set up a new beach-head where Craig knows more, and present that as being the ‘right’ way to address the question.

                    It seems to me to be more pseudoscientific posturing than an actual commitment to careful reasoning.

                    • arcseconds

                      I’m certainly not arguing that historical reasoning is in bad shape, mired in poorly structured arguments that are little more than worked-up prejudices, with Bayes riding to the rescue, or anything like that. I agree that historical reasoning can often be quite fine, and where it is not so fine, Bayesian formalizations are unlikely to help much.

                      So sure, it can be obvious where you disagree with someone.

                      But your own treatment shows us that it might be very unobvious where you disagree with someone. You and I might agree quite closely on the priors, but have very different ideas about the posteriors, yet both be using the same Bayesian reasoning (correctly).

                      So one could argue that your analysis shows, rather than that we should just forget about formalizing our arguments into probabilistic formulae because it will never work, that we now have an additional reason for doing so: by including appropriate error analysis, we can show what range of posterior probabilities is reasonable given a certain (possibly small) range of priors.

                      I realise of course that that’s not going to be easy, and the difficulties may mean it’s not worth it, and formalization certainly shouldn’t be considered a wholesale replacement for ordinary, non-formal reasoning, but none of that necessarily means that the approach shouldn’t be pursued.

                      — I’m not really hugely enthusiastic about the possibilities here, but I thought I’d make the argument anyway.

                      At any rate, the very fact that priors all in the ‘pretty much agree about’ range can support posteriors that depart from each other is still a very interesting result that ought to inform our non-formal discussions. It means that agreement about ‘the facts’ and their appropriate treatment, combined with disagreement about what can be inferred from them, is a very real, and totally rational, possibility.
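
                      A quick toy calculation (invented numbers) of the sort of thing I mean: take two priors most people would file under ‘pretty much agree’, 1% and 3%, and update both on the same evidence with a likelihood ratio of 50.

                      # Posterior from prior odds times the likelihood ratio.
                      def update(prior, lr):
                          odds = prior / (1 - prior) * lr
                          return odds / (1 + odds)

                      for prior in (0.01, 0.03):
                          print(prior, round(update(prior, 50), 2))
                      # 0.01 -> 0.34, 0.03 -> 0.61

                      The same correct Bayesian step lands the two of us on opposite sides of ‘more likely than not’.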

                    • Ian

                      “we now have an additional reason for doing so” – Ideally, I guess, but I’m not sure I can imagine how that could be done without unduly obfuscating the argument with math. It would be good to have a good example to point to, I think. Perhaps Carrier’s second book will be that. I’m not optimistic, but I’m happy to see it done well.

                    • arcseconds

                      Someone just has to build a decent user interface for it :]

              • arcseconds

                Carrier seems to have completely misunderstood you here, by the way.

              • the_Siliconopolitan

                I get your argument. I think.

                “Historians do that at a gestalt level, as do doctors and good stock traders (all professions where judgement is crucial and results are always laden with caveats).”

                But this really isn’t good enough. The gestalt gets us boom and bust on the stocks (though, yes, faulty maths got us the mess we’re in now), and doctors going by their gut feelings kill people regularly. Presumably history is just as laden with errors (if much less consequential ones). It would be nice to have a toolset to work out what is most likely the best investment, diagnosis, or reconstruction of the past.

                Your analogy about clean data and decoupled errors sounds like something that is greatly lacking in history – and many other disciplines.

                • Ian

                  Traders make poor decisions, yes. But automated trading systems (the one of the three examples I’m most familiar with) also cause instabilities. And crucially, you need to feed them with human-generated models and human-curated knowledge. There are times when human performance is data-limited, and an expert system can help. But often the performance is model-limited, and a formalization of the model doesn’t help.

                  Gestalt may not be ideal, but it is really hard to do better. You ‘just know’ common sense stuff all the time that is incredibly difficult to build mathematical models of. And even if possible, the models are really difficult to meta-reason about.

                  It would be nice to have computational systems that prevent us from reasoning poorly, but it’s a way off still!

                  Which reinforces my main point, that a single back-of-the-envelope BT calculation is a long way from helping (except in very specific restricted cases).

          • arcseconds

            Yes, I saw the links on Diglotting. I read the first one back when you posted it, and I’ve just finished reading the others now. It’s very interesting.

            However, while the amplification of error you note may well mean that formal work with Bayes’s theorem is always going to be hopeless in historical reasoning, and while it’s certainly something that ought to be addressed by anyone trying to use it (and I certainly and wholeheartedly agree that Carrier should do some error analysis!), the problem I’m referring to is more general, more obvious, and far less sophisticated.

            If you’re a scientifically-minded materialist and I’m a theologically conservative Christian, we surely won’t agree on the priors: I think the prior probability of God’s existence is 1, for example, because They’re a necessary being, while if you’re a committed materialist you’ll assess it as 0 (at any rate no greater than the chance you give of materialism being false). And we probably won’t agree on the likelihoods either: I think the probability of the Gospels given the nonexistence of Jesus is 0, it just couldn’t happen, while you might think the probability of the Gospels given the nonexistence of Jesus is lower than given his existence, but surely still noticeably above zero. Just because we both use Bayes’s theorem gives us no guarantee of any substantial agreement (convergence proofs notwithstanding).
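
            (To spell out why a prior of 1 settles the matter in advance: it is a fixed point of the update, since for any evidence E

            $$P(H \mid E) = \frac{P(E \mid H) \cdot 1}{P(E \mid H) \cdot 1 + P(E \mid \neg H) \cdot 0} = 1,$$

            and symmetrically a prior of 0 stays at 0, so no amount of shared evidence can bring us together.)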

            • Ian

              Yes, totally agree.

              One of the reasons I liked Carrier’s book is that it correctly shone the light on these kinds of assumptions and judgements. I think we often assume that the reasoning is the important bit: that people need to think clearly and reason logically. Whereas by far the biggest determiner of the conclusion is the assumptions, I think. So yes, BT doesn’t help much, except perhaps to give you a reason to talk about the assumptions. I just have a problem when talking about those assumptions is replaced by ‘coming up with a number and making it a little worse to avoid bias’. That’s a silly approach, imho.

  • arcseconds

    Well, clearly the reason why Plato doesn’t follow the ‘two generation’ rule is that he really was the child of Apollo.

  • the_Siliconopolitan

    I suppose North Korean generations are short enough that one could get away with saying it took two of those to make the Kims divine.

  • Billy Butterfield

    Over on the far right of this page >>>>>>>> every time I open a webpage, whatever it may be, I see this: “Obama will not finish his term without the bottom falling out of our economy.” What the hell is that? How about “Total World Economic Collapse and Obama Had Nothing to Do with It”?

  • Billy Butterfield

    Besides that, I believe that Bart is a wonderful professor, and I have learned more from him than from the whole lot thrown together.

