

which shows that observing E (or L) increases the evidence for M against S in a naturalistic universe by a factor of at least 1/p. The smaller P(F|N)=p (that is, the more "finely-tuned" the universe is), the more likely it is that some form of multiple-universe hypothesis is true.

8. Theological considerations

The next section is rather more speculative, depending as it does upon theological notions that are hard to pin down, and therefore should be taken with large grains of salt. But it is worth considering what effect various theological hypotheses would have on this argument. It is interesting to ask the question, "given that observing that F is true cannot undermine N and may support it, by how much can N be strengthened (and ~N be undermined) when we observe that F is true?" It is evident from the discussion of the main theorem that the key is the denominator P(F|L). The smaller that denominator, the greater the support for N. Explicitly we have

P(F|L) = P(F|N&L)P(N|L) + P(F|~N&L)P(~N|L)

But since P(F|N&L)=1 we can simplify this to

P(F|L) = P(N|L) + P(F|~N&L)P(~N|L).

Plugging this into the expression P(N|F&L) = P(N|L)/P(F|L) we obtain

P(N|F&L) = P(N|L)/[P(N|L) + P(F|~N&L)P(~N|L)] = 1/[1 + C P(F|~N&L)]

where C = P(~N|L)/P(N|L) is the prior odds in favor of ~N against N. In other words, C is the odds that we would offer in favor of ~N over N before noting that the universe is "fine-tuned" for life. A major controversy in statistics has been over the choice of prior probabilities (or in this case prior odds). However, for our purposes this is not a significant consideration, as long as we don't choose C in such a way as to completely rule out either possibility (N or ~N), i.e., as long as we haven't made up our minds in advance. This means that any positive, finite value of C is acceptable. One readily sees from this formula that for acceptable C

1) as P(F|~N&L) -> 0, P(N|F&L) -> 1;
2) as P(F|~N&L) -> 1, P(N|F&L) -> P(N|L);

where '->' means "approaches as a limit" and the last result follows from the fact that P(N|L)+P(~N|L)=1.
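As a quick numerical illustration (not part of the original derivation), the closed form P(N|F&L) = 1/[1 + C P(F|~N&L)] and its two limiting cases can be checked with a few lines of Python:

```python
# Posterior probability of naturalism: P(N|F&L) = 1/(1 + C*q),
# where C = P(~N|L)/P(N|L) is the prior odds against N and
# q = P(F|~N&L) is the likelihood of F under supernaturalism.
def posterior_naturalism(C, q):
    return 1.0 / (1.0 + C * q)

# Limit 1: as q -> 0, the posterior approaches 1 for any acceptable C.
print(posterior_naturalism(1.0, 1e-6))

# Limit 2: at q = 1, the posterior equals the prior P(N|L) = 1/(1+C);
# observing F then tells us nothing new.
print(posterior_naturalism(1.0, 1.0))   # the neutral prior, 0.5
```

The function is monotonically decreasing in q, which is exactly the behavior discussed next: only q = 1 leaves the support for N unchanged.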
So, P(N|F&L) is a monotonically decreasing function of P(F|~N&L), bounded from below by P(N|L). This confirms the observation made earlier, that noting that F is true can never decrease the evidential support for N. Furthermore, the only case where the evidential support is unchanged is when P(F|~N&L) is identically 1. This is interesting, because it tells us that the only case where observing the truth of F does not increase the support for N is precisely the case when the likelihood function P(F|x&L), evaluated at F, and with x ranging over N and ~N, cannot distinguish between N and ~N. That is, the only way to prevent the observation F from increasing the support for N is to assert that ~N&L also requires F to be true. Under these circumstances we cannot distinguish between N and ~N on the basis of the data F. In a deep sense, the two hypotheses represent, and in fact are, the same hypothesis. Put another way, to assume that P(F|~N&L)=1 is to concede that life in the world actually arose by the operation of an agent that is observationally indistinguishable from naturalistic law, insofar as the observation F is concerned. In essence, any such agent is just an extreme version of the "God-of-the-gaps," whose existence has been made superfluous as far as the existence of life is concerned. Such an assumption would completely undermine the proposition that it is necessary to go outside of naturalistic law in order to explain the world as it is, although it doesn't undermine any argument for supernaturalism that doesn't rely on the universe being "life-friendly". So, if supernaturalism is to be distinguished from naturalism on the basis of the fact that the universe is F, it must be the case that P(F|~N&L)<1. Otherwise, we are condemned to an unsatisfying kind of "God-of-the-gaps" theology. But what sort of theologies can we consider, and how would they affect this crucial probability?
To make these ideas more definite, we consider first a specific interpretation that is intended to imitate, albeit crudely, how the assumption of a relatively powerful and inscrutable deity (such as a generic Judeo-Christian-Islamic deity might be) could affect the calculation of the likelihood function P(F|~N&L). We suggest that any reasonable version of supernaturalism with such a deity would result in a value of P(F|~N&L) that is, in fact, very small (assuming that only a small set of possible universes are F). The reason is that a sufficiently powerful deity could arrange things so that a universe with laws that are not "life-friendly" can sustain life. Since we do not know the purposes of such a deity, we must assign a significant amount of the likelihood function to that possibility. Furthermore, if such a deity creates universes and if the "fine-tuning" claims are correct, then most life-containing universes will be of this type (i.e., containing life despite not being "life-friendly"). Thus, all other things being equal, and if this is the sort of deity we are dealing with, we would expect to live in a universe that is ~F. To assert that such a deity could only create universes containing life if the laws are life-friendly is to restrict the power of such a deity. And to assert that such a deity would only create universes with life if the laws are life-friendly is to assert knowledge of that deity's purposes that many religions seem reluctant to claim. Indeed, any such assertion would tend to undermine the claim, made by many religions, that their deity can and does perform miracles that are contrary to naturalistic law, and recognizably so. Our conclusion, therefore, is that not only does the observation F support N, but it supports it overwhelmingly against its negation ~N, if ~N means creation by a sufficiently powerful and inscrutable deity. This latter conclusion is, by the way, a consequence of the Bayesian Ockham's Razor [Jefferys, W.H.
and Berger, J.O., "Ockham's Razor and Bayesian Analysis," American Scientist 80, 64-72 (1992)]. The point is that N predicts outcomes much more sharply and narrowly than does ~N; it is, in Popperian language, more easily falsifiable than is ~N. (We do not wish to get into a discussion of the Demarcation Problem here, since that is out of the scope of this FAQ, though we do not regard it as a difficulty for our argument. For our purposes, we are simply making a statement about the consequences of the likelihood function having significant support on only a relatively small subset of possible outcomes.) Under these circumstances, the Bayesian Ockham's Razor shows that observing an outcome allowed by both N and ~N is likely to favor N over ~N. We refer the reader to the cited paper for a more detailed discussion of this point. Aside from sharply limiting the likely actions of the deity (either by making it less powerful or asserting more human knowledge of the deity's intentions), we can think of only one way to avoid this conclusion. One might assert that any universe with life would appear to be "life-friendly" from the vantage point of the creatures living within it, regardless of the physical constants that such a universe were equipped with. In such a case, observing F cannot change our opinion about the nature of the universe. This is certainly a possible way out for the supernaturalist, but it is not available to Ross, because it contradicts his assertions that the values of certain physical constants do allow us to distinguish between universes that are "life-friendly" and those that are not. And such an assumption does not come without cost; whether others would find it satisfactory is doubtful. For example, what about miracles?
If every universe with life looks "life-friendly" from the inside, might this not lead one to wonder if everything that happens therein would also look to its inhabitants like the result of the simple operation of naturalistic law? And then there is Ockham's Razor: what would be the point of postulating a supernatural entity if the predictions we get are indistinguishable from those of naturalistic law?

9. But which deity?

In the previous section, we have discussed just one of many sorts of deities that might exist. This one happens to be very powerful and rather inscrutable (and is intended to be a model of a generic Judeo-Christian-Islamic sort of deity, though believers are welcome to disagree and propose, and justify, their own interpretations of their favorite deity). However, there are many other sorts of deities that might be postulated as being responsible for the existence of the universe. There are somewhat more limited deities, such as Zeus/Jupiter; there are deities that share their existence with antagonistic deities, such as the Zoroastrian Ahura-Mazda/Ahriman pair; there are various Native American deities, such as the trickster deity Coyote; there are Australian, Chinese, African, Japanese and East Indian deities; and even many other possible deities that no one on Earth has ever thought of. There could be deities of life-forms indigenous to planets around the star Arcturus that we should consider, for example. Now, when considering a multiplicity of deities, say D_{1},D_{2},...,D_{i},..., we would have to specify a value of the likelihood function for each individual deity, specifying what the implications would be if that deity were the actual deity that created the universe. In particular, with the "fine-tuning" argument in mind, we would have to specify P(F|D_{i}&L) for every i (probably an infinite set of deities).
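To make this bookkeeping concrete, here is a small Python sketch (all numbers and deity labels are invented placeholders, not values from the original) of what such a specification would look like for a finite handful of deities:

```python
# Each candidate deity D_i gets a likelihood P(F|D_i&L); the total prior
# P(~N|L) must then be divided among all of them. Every value below is
# an invented placeholder for illustration.
likelihood = {"D_1": 1.0, "D_2": 0.5, "D_3": 0.25, "D_4": 0.01}  # P(F|D_i&L)
p_notN = 0.5                                  # P(~N|L): a neutral prior on ~N
prior = {d: p_notN / len(likelihood) for d in likelihood}        # P(D_i|L)

print(prior)                 # each share is only 0.125 here...
print(sum(prior.values()))   # ...and the shares must total P(~N|L) = 0.5
```

With thousands (let alone infinitely many) of candidates, each individual share P(D_i|L) becomes tiny, which is what drives the results that follow.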
Assuming that we have a mutually exclusive and exhaustive list of deities, we see the hypothesis ~N revealed to be composite, that is, it is a combination or union of the individual hypotheses D_{i} (i=1,2,...). Our character set doesn't have the usual "wedge" character for "or" (logical disjunction), so we will use 'v' to represent this operation. We then have

~N = D_{1} v D_{2} v ... v D_{i} v ...

Now, the total prior probability of ~N, P(~N|L), has to be divvied up amongst all of the individual subhypotheses D_{i}:

P(~N|L) = P(D_{1}|L) + P(D_{2}|L) + ... + P(D_{i}|L) + ...
where 0<P(D_{i}|L)<P(~N|L)<1 (assuming that we only consider deities that might exist, and that there are at least two of them). In general, each of the individual prior probabilities P(D_{i}|L) would be very small, since there are so many possible deities. Only if some deities are a priori much more likely than others would any individual deity have an appreciable amount of prior probability. This means that in general, P(D_{i}|L)<<1 for all i. Now, when we originally considered just N and ~N, we calculated the posterior probability of N given L&F from the prior probabilities of N and ~N given L, and the likelihood functions. Here it would be simpler to look at prior and posterior odds. These are derived straightforwardly from probabilities by the relation Odds = Probability/(1 - Probability). This yields a relationship between the prior and posterior odds of N against ~N [using P(N|F&L)+P(~N|F&L)=1]:

P(N|F&L)/P(~N|F&L) = [P(F|N&L)/P(F|~N&L)] x [P(N|L)/P(~N|L)]
The Bayes Factor and Prior Odds are given straightforwardly by the two ratios in this formula. Since P(F|N&L)=1 and P(F|~N&L)<=1, it follows that the posterior odds are greater than or equal to the prior odds (this is a restatement of our first theorem, in terms of odds). This means that observing that F is true cannot decrease our confidence that N is true. But by using odds instead of probabilities, we can now consider the individual subhypotheses that make up ~N. For example, we can calculate the prior and posterior odds of N against any individual D_{i}. We find that

P(N|F&L)/P(D_{i}|F&L) = [P(F|N&L)/P(F|D_{i}&L)] x [P(N|L)/P(D_{i}|L)]
This follows because (by footnote 2) P(N|F&L) = P(F|N&L)P(N|L)/P(F|L), and the P(F|L)'s cancel out when you take the ratio. Now, even if P(F|D_{i}&L)=1, which is the maximum possible, the posterior odds against D_{i} may still be quite large. The reason for this is that the prior probability of ~N has to be shared out amongst a large number of hypotheses D_{j}, each one greedily demanding its own share of the limited amount of prior probability available. The hypothesis N, on the other hand, has no others to share with. In contrast to ~N, which is a compound hypothesis, N is a simple hypothesis. As a consequence, and again assuming that no particular deity is a priori much more likely than any other (it would be incumbent upon the proposer of such a deity to explain why his favorite deity is so much more likely than the others), it follows that the hypothesis of naturalism will end up being much more probable than the hypothesis of any particular deity D_{i}. This phenomenon is a second manifestation of the Bayesian Ockham's Razor discussed in the Jefferys/Berger article (cited above). In theory it is now straightforward to calculate the posterior odds of N against ~N if we don't particularly care which deity is the right one. Since the D_{i} form a mutually exclusive and exhaustive set of hypotheses whose union is ~N, ordinary probability theory gives us

P(~N|F&L) = [P(F|D_{1}&L)P(D_{1}|L) + P(F|D_{2}&L)P(D_{2}|L) + ... ]/P(F|L)
Assuming we know these numbers, we can now calculate the posterior odds of N against ~N by dividing the above expression into the one we found previously for P(N|F&L). Of course, in practice this may be difficult! However, as can be seen from this formula, the deities D_{i} that contribute most to the denominator (that is, to the supernaturalistic hypothesis) will be the ones that have the largest values of the likelihood function P(F|D_{i}&L), or the largest prior probability P(D_{i}|L), or both. In the first case, it will be because the particular deity is closer to predicting what naturalism predicts (as regards F), and is therefore closer to being a "God-of-the-gaps" deity; in the second, it will be because we already favored that particular deity over others a priori.

10. Final comments

Some make the mistake of thinking that "fine-tuning" and the anthropic principle support supernaturalism. This mistake has two sources. The first and most important arises from confusing two entirely different conditional probabilities. If one observes that P(F|N) is small (since most hypothetical naturalistic universes are not "fine-tuned" for life), one might be tempted to turn the probability around and decide, incorrectly, that P(N|F) is also small. But as we have seen, this is an elementary blunder in probability theory. The reasoning goes: we find ourselves in a universe that is "fine-tuned" for life, which would be unlikely to come about by chance (because P(F|N) is small); therefore, we conclude (incorrectly) that P(N|F) must also be small. Most actual outcomes are, in fact, highly improbable, but it does not follow that the hypotheses that they are conditioned upon are themselves highly improbable. It is therefore fallacious to reason that if we have observed an improbable outcome, it is necessarily the case that a hypothesis that generates that outcome is itself improbable.
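This blunder is easy to exhibit numerically. In the toy calculation below (all numbers invented for illustration), P(F|N) is tiny, yet P(N|F) comes out close to 1, because the posterior depends on how the competing hypothesis fares on the same data:

```python
# A small likelihood P(F|N) does NOT imply a small posterior P(N|F).
# All numbers are invented for illustration.
p_N = 0.5               # neutral prior P(N)
p_F_given_N = 1e-6      # F is very improbable under N ("fine-tuning")
p_F_given_notN = 1e-9   # ...but even more improbable under ~N

# Bayes' theorem: P(N|F) = P(F|N)P(N) / [P(F|N)P(N) + P(F|~N)P(~N)]
num = p_F_given_N * p_N
p_N_given_F = num / (num + p_F_given_notN * (1.0 - p_N))
print(p_N_given_F)      # roughly 0.999: N strongly favored despite P(F|N) << 1
```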
One must compare the probabilities of obtaining the observed outcome under all hypotheses. In general, most, if not all, of these probabilities will be very small, but some hypotheses will turn out to be much more favored by the actual outcome we have observed than others. The second source of confusion is the failure to do the calculations taking into account all the information at hand. In the present case, that includes the fact that life is known to exist in our universe. The possible existence of hypothetical naturalistic universes where life does not exist is entirely irrelevant to the question at hand, which must be decided on the data we actually have. In our view, similar fallacious reasoning may well underlie many other arguments that have been raised against naturalism, not excluding design and "God-of-the-Gaps" arguments such as Michael Behe's "Irreducible Complexity" argument (in his book, Darwin's Black Box) and William Dembski's "Complex Specified Information," as described in his dissertation (University of Illinois at Chicago). We conclude that whatever their rhetorical appeal, such arguments need to be examined much more carefully than has happened so far to see if they have any validity. But that discussion is outside the scope of this article. Bottom line: the anthropic argument should be dropped. It is wrong. "Intelligent design" folks should stick to trying to undermine N by showing ~F. That's their only hope (though we believe it to be a forlorn one).
Please email comments on this proposed FAQ to Bill Jefferys (bill@clyde.as.utexas.edu). Michael Ikeda's work on this article was done on his own time and not as part of his official duties. The authors' affiliations are for identification only. The opinions expressed herein are those of the authors, and do not necessarily represent the opinions of the authors' employers. Copyright (C) 1997-2002 by Michael Ikeda and Bill Jefferys. Portions of this FAQ are Copyright (C) 1997 by Richard Harter. All Rights Reserved.

11. Footnotes

[1] By definition, P(A|B)=P(A&B)/P(B); it follows that also P(A|B&C)=P(A&B|C)/P(B|C).

[2] We use Bayes' Theorem in the form P(A|B&K)=P(B|A&K)P(A|K)/P(B|K), which follows straightforwardly from the identity P(A|B&K)P(B|K)=P(A&B|K)=P(B|A&K)P(A|K) (a consequence of footnote 1), assuming that P(B|K)>0.

12. Appendix I: Reply to Kwon (April 30, 2001)

David Kwon has posted a web page in which he claims to have refuted the arguments in our article. However, he has made a simple error, which we detail below, along with comments on some of his other assertions. [Note added 040109: Kwon's original article has disappeared from the web. The above link is to the last version of his article archived by the Internet Wayback Machine via Makeashorterlink.com.] Kwon's Equation (3) reads as follows:

P(N|F&L) = P(N&F&L) / {P(~N&F&L) + P(N&F&L)}

This is an elementary result of probability theory, and we agree with it. Kwon then goes on and assumes what he calls the "fine-tuning" condition P(F|N)<<1, from which he correctly derives Equation (8), the important part of which reads

P(N&F&L) << 1

From these two results (3 and 8) Kwon derives

P(N|F&L)<<1 unless P(~N&F&L)<<1

Unfortunately, nothing in Kwon's "proof" shows that P(~N&F&L) is not <<1, so he cannot assert unconditionally that P(N|F&L)<<1 as a consequence of his assumptions. He asserts that the only way to avoid his conclusion is to assume that P(~N&F&L)<<1, which he claims would amount to a prior commitment to naturalism.
This, however, is incorrect, and here the "proof" falls apart. Kwon apparently recognizes that according to his Equation (3), the value of P(N|F&L) is not governed by the actual size of P(N&F&L), but instead by the relative sizes of P(N&F&L) and P(~N&F&L). In particular, if P(N&F&L)<<P(~N&F&L), then P(N|F&L) will be close to zero; if P(N&F&L) is approximately equal to P(~N&F&L), then P(N|F&L) will be of order one-half; and if P(N&F&L)>>P(~N&F&L), then P(N|F&L) will be nearly unity. Therefore, we need to look at the ratio R = P(N&F&L)/P(~N&F&L) to see what factors govern its size and what assumptions this entails. We obtain:

R = P(N&F&L)/P(~N&F&L)
  = P(F|N&L)P(N&L)/[P(F|~N&L)P(~N&L)]   (A)
  = P(N&L)/[P(F|~N&L)P(~N&L)]           (B)
  >= P(N&L)/P(~N&L)                     (C)
  = P(N|L)P(L)/[P(~N|L)P(L)]            (D)
  = P(N|L)/P(~N|L)                      (E)
Here, (A) and (D) follow from the definition of conditional probability, (B) by the WAP (which Kwon says he accepts), which asserts that P(F|N&L)=1, (C) because the probability P(F|~N&L) in the denominator is <=1, and (E) by cancellation of P(L) in numerator and denominator. Thus we see that in fact the ratio R cannot be small unless P(N|L)/P(~N|L) is also small. Therefore, we cannot conclude that P(N|F&L)<<1 unless P(N|L)/P(~N|L)<<1, regardless of the size of P(N&F&L). But what is P(N|L)/P(~N|L)? Why, it is just the prior odds ratio that You assign to describe Your relative belief in N and ~N before You learn that F is true. Thus, although Kwon is correct in noting that the only way to keep P(N|F&L) from being very small is to have P(~N&F&L)<<1, this does not represent a prior commitment to naturalism, as he asserts. Indeed, a prior commitment to naturalism would be to assume that P(N|L)/P(~N|L)>>1, and as (E) shows, if we assume P(N|L)/P(~N|L) of order unity, which reflects a neutral prior position between N and ~N, and not a prior commitment to naturalism, we will end up being at least neutral between N and ~N after observing that F is true, regardless of the size of P(N&F&L) and P(F|N). Indeed, it requires a prior commitment to supernaturalism to get P(N|F&L)<<1, because You would have to presume a priori that P(N|L)<<P(~N|L). Kwon has it exactly backwards. So the absolute sizes of P(N&F&L) and P(F|N) do not tell us anything about P(N|F&L); this is a confusion between conditional and unconditional probability. The only thing that counts is the ratio R. Kwon's calculation in his steps (4-8) is simply irrelevant to the final result. Indeed, we have the following theorem:

Theorem: If the "fine-tuning" condition P(F|N)<<1 holds, and the prior position on N versus ~N is at least neutral (P(N|L)>=P(~N|L)), then P(~N&F&L)<<1.
Thus, far from reflecting a prior commitment to naturalism as Kwon claims, the result P(~N&F&L)<<1 is a consequence of the fine-tuning condition together with the adoption of an at least neutral prior position on N versus ~N. It is due to the fact that P(N&L&F) and P(~N&L&F) both have P(L)<<1 as a factor when they are expanded using the definition of conditional probability. Furthermore, it is even possible for P(~N|F&L) to be very small (and therefore P(N|F&L) close to unity) without making a prior commitment to naturalism. For example, suppose we adopt the neutral position P(N|L)=P(~N|L)=0.5; then from (B) we find that R = 1/P(F|~N&L), and if P(F|~N&L)<<1, then R>>1 and P(N|F&L) is close to unity. But what does P(F|~N&L)<<1 mean? Is this a "prior commitment to naturalism"? No; a prior commitment to naturalism would involve some conditional probability on N, not some conditional probability on F. The condition P(F|~N&L)<<1 actually means that it is likely that an inhabitant of a supernaturalistically created universe would find that it is ~F: a universe where life exists despite the fact that it could not exist naturalistically, for example as a consequence of the suspension of natural law by the supernatural creator. We discussed this extensively in our article. Indeed, without psychoanalyzing the Deity and analyzing its powers and intentions, it is a priori quite likely that the Deity might create universes that are ~F&L, for such universes are not excluded unless we know something about this Deity that would prevent it from creating such universes. An example of such a universe would be Paradise, and it seems unlikely that enthusiasts of the "fine-tuning" argument would be willing to say that the Deity would not create anything like Paradise. But the only way for them to escape from P(F|~N&L)<<1 would be for them to assert that the Deity would only, or mostly, create universes that, if they contain life, are F, and we see no justification for such an assumption.
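The neutral-prior case just described is easy to check numerically; a minimal sketch (the value chosen for P(F|~N&L) is invented for illustration):

```python
# With the neutral prior P(N|L) = P(~N|L) = 0.5, step (B) above gives
# R = 1/P(F|~N&L): a small P(F|~N&L) makes R large and pushes
# P(N|F&L) toward unity, with no prior commitment to naturalism.
def ratio_R(p_N_given_L, q):    # q = P(F|~N&L)
    return p_N_given_L / (q * (1.0 - p_N_given_L))

q = 0.01                        # invented: supernatural universes rarely look F
R = ratio_R(0.5, q)
print(R)                        # R = 1/q, about 100
print(R / (1.0 + R))            # P(N|F&L), about 0.99
```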
Kwon makes some other incorrect statements later in his web article. He says that our argument "incorrectly attributes significance to P(N|L)." Kwon here appears to have missed the fact that we are talking about Bayesian probabilities. The probability P(N|L) refers to our universe, and is Your Bayesian prior probability that N is true, given that You know that L is true (which must be the case, since it is a condition of reasoning that You be alive), but before You learn that F is true. It is a reflection of Your epistemological condition, or state of knowledge, at a particular moment in time. Thus, P(N|L) has a perfectly definite meaning in our universe, although the value of P(N|L) will differ from individual to individual, because every individual has different background information (not explicitly called out here, but mentioned in our article). Furthermore, Kwon is incorrect when he states that "P(N|L) is irrelevant to our universe for the same reason that P(N|F) is irrelevant." We never said that P(N|F) is irrelevant, only that it is irrelevant for inference. The reason why P(N|F) is irrelevant for inference is that no sentient being is unaware of L as background information. Every sentient being knows that he is alive and therefore knows that L is true; thus every final probability statement that he makes must be conditioned on L. This is not true of F. There are sentient beings in our universe, indeed in our world, that do not yet know that F is true. Most schoolchildren do not know that F is true, although they know that L is true. Probably most adults do not know that F is true. Thus, Kwon errs in drawing a parallel between P(N|L) and P(N|F). Kwon started with the perfectly reasonable proposal that "fine-tuning" is best defined by P(F|N)<<1, and attempted to derive his result.
That he was unable to do this comes as no surprise to us, because one of us [whj] spent the better part of a year trying to get useful information from propositions such as P(F|N)<<1, without success. All such attempts were fruitless, and the reason why is seen in our discussion. For example, suppose we were to assume in addition that P(F|~N)=1. Even then, no useful result can be derived, for from this we can only determine the obvious fact that P(F&L&~N)<=1, which gives no useful information about the crucial ratio R. The inequality goes in the wrong direction! Thus, "fine-tuning" (P(F|N)<<1) tells us nothing useful, which is why in our article we concentrated instead on finding out what "life-friendliness" (F) and the WAP can tell us. Kwon says, "We have always known that F is true for our universe..." This is false. In fact, the suspicion that F is true is relatively recent, going back only to Brandon Carter's seminal papers in the mid-1970s. Earlier, physicists such as Dirac had in fact speculated that the values of some fundamental physical constants (e.g., the fine structure constant) might have been very different in the past, which would violate F, and somewhat later other scientists (for example, Fred Hoyle in the early 1950s) used the assumption that F is true in order to predict certain physical phenomena, which were later found to be the case. Had those observations NOT been found to be true, F would have been refuted, and we would seriously have to consider ~N. Even today we do not know that our universe is F ("life-friendly") in the sense that we use the term in our article. We strongly suspect that it is true, but it is conceivable that someone will make a WAP prediction that will turn out to be false and which might refute F. Kwon incorrectly asserts that the idea that there may be other universes is "simply unscientific."
Certainly many highly respected cosmologists and physicists, like Andrei Linde (Stanford), Lee Smolin (Harvard) and Alexander Vilenkin (Tufts), and Nobel laureate Steven Weinberg (Texas), would disagree with this statement. Kwon claims that the hypothesis of other universes "cannot be tested." While we might agree that testing the hypothesis of other universes will be difficult, we do not agree that the hypothesis is untestable, and neither do scientists who work in this area. Some specific tests have been suggested. For example, David Deutsch has proposed specific tests of the Everett-Wheeler interpretation of quantum mechanics, commonly known as the "Many-Worlds" hypothesis. And recently an article that proposed another way that other universes might be detected was published (Science, Vol. 292, pp. 189-190; original paper archived as The Ekpyrotic Universe: Colliding Branes and the Origin of the Hot Big Bang). Regardless, our argument is not dependent on the notion that there are many other universes. It stands on its own. Kwon misunderstands the point of the "god of the gaps" argument. The problem isn't that the gap is being filled by a god; the problem is what happens if the gap is filled by physics. Then the god that filled the gap gets smaller. This is a theological problem, not an epistemological or scientific problem. We agree with Kwon that there are gaps in our physical explanation of the universe that may never be filled; but it is hoping against hope that we will never fill any of the gaps currently being touted by "intelligent design theorists" as proof of supernaturalism. Some of them are certain to be filled in time, and each time this happens, the god of the intelligent designers will be diminished. (In fact, some of the gaps were filled even before the recent crop of "ID theorists" made their arguments; this is true of some of Michael Behe's examples, for which evolutionary pathways had already been proposed even before Behe published his book.)
As to Kwon's last point, that we incorrectly claim that "intelligent design theorists" incoherently assert both F and ~F: we believe that it is a correct statement that at least some are arguing ~F. It is our impression, for example, that Michael Behe is arguing that it is actually impossible, and not just highly unlikely, for certain "irreducibly complex" (IC) structures to evolve without supernatural intervention, and that is a form of ~F. Regardless, even if no one is attempting to argue from ~F to ~N, our point still stands. Attempts to prove ~N that argue from either F, or P(F|N)<<1, or both, do not work. But attempts to prove ~N by showing ~F would work. Thus, people making anthropic and "fine-tuning" arguments have hold of the wrong end of the stick. They should be trying to show that the universe is not F. It is clear that showing that the universe is not F would at one stroke prove ~N; it follows that showing that the universe is F can only undermine ~N and support N. This is an elementary result of probability theory, since it is not possible that observations of F as well as ~F would both support ~N. Since it is trivially true that observing ~F does support ~N, observing F must undermine it. Put another way, it seems to us that Michael Behe (if we understand him) is making the right argument from a logical and inferential point of view, and Hugh Ross is making the wrong argument. If it turns out that Behe is not making the argument we think he is, then it is still the case that Hugh Ross is making the wrong argument. Kwon makes some remarks about "nontheists" that seem to indicate that he thinks that only "nontheists" would argue as we have. This is not the case. The issue here is whether the "fine-tuning" argument is correct. It is exactly analogous to the centuries of work done on Fermat's last theorem. It is likely that most mathematicians thought that the theorem was true for most of that time, yet they continued to reject proofs that had flaws in them.
They rejected them not because they thought Fermat's last theorem was false, but because the proofs were wrong. They even rejected Wiles' first attempt at a proof, because it was (slightly) flawed. In the same way, a theist can and should reject a flawed "proof" of the existence of God. Our argument is that the fine-tuning arguments are wrong, and no one should draw any conclusions about our personal beliefs from the fact that we say that these arguments are wrong. Conclusion: Kwon's "proof" is fatally flawed. He incorrectly asserts that the only way to keep P(N|F&L) from being very small is to assume naturalism a priori. Quite the contrary: the only way to make P(N|F&L) small is to assume supernaturalism a priori. Kwon apparently does not understand the significance of some of the Bayesian probabilities we use; this is forgivable in a sense, since Bayesian probability theory is still misunderstood by most people, even those with some training in probability theory...but it means that Kwon should withdraw these comments until he understands Bayesian probability theory well enough to criticize it. Kwon's assertion that we have always known that our universe is F is false; his assertion that the existence of other universes is untestable is also false, and in any case is not relevant to our main argument. Finally, he mistakenly thinks that the god-of-the-gaps argument somehow tells against science. It does not, since it is purely a theological conundrum, not a scientific one. Nonetheless, we thank David Kwon for his serious and
attentive reading of our article and for his comments. He is the first to
attempt a mathematical rather than a polemical refutation of our argument. His
argument fails because, as we show here, it isn't possible to derive anything
useful from the fine-tuning proposition P(F|N)<<1. When all factors are
taken into account, it is clear that the only way to end up with a final result
that P(N|F&L)<<1 is to assume at the outset that supernaturalism is
almost surely true, thus begging the question. [Note added 010613: When we posted this response, we informed Mr. Kwon, so that he could either respond to our criticisms or withdraw his web page. We regret to say that up to now he has done neither.] [Note added 040109: Kwon has never responded to our criticisms; his web page disappeared when he apparently finished his career as a Berkeley graduate student. It is archived and can be obtained courtesy of the Internet Wayback Machine via Makeashorterlink.com.]
This article was first posted at Bill Jefferys' Home Page.
 
