Symposium Introduction

Belief, Control, and Evidence: Miriam McCormick’s Believing Against the Evidence

 

1. Two Connected Puzzles about Belief

When we come to consider the notion of responsible belief, it seems we are faced with a puzzle. First, we approve and disapprove of some people’s beliefs, and not just in terms of whether we think they believe truths or falsehoods, but also in terms of whether they have appropriately arrived at those beliefs. We ask, “How could you believe that?” or “What in the world makes you think so?” And yet we also realize that our beliefs are not up to us, as we, it seems, cannot directly control whether we believe, say, that the moon is spherical or that water is wet. Now, given that we cannot morally praise or blame people for things they cannot control, it seems we have a problem of sorts. How is it that we can both acknowledge that we don’t have direct control over our beliefs and nevertheless praise or blame people on the basis of their beliefs and the reasons they have for them?

A related puzzle arises for the conditions for belief assessment, good or bad. A longstanding view has been that the judgment of responsible belief is entirely a matter of how well the belief fits the evidence (and only the evidence) at hand. Call this view evidentialism. There are many concerns about the evidentialist’s stark commitment. There are many beliefs, it seems, that we may hold which themselves have little or no evidential backing. They may even run contrary to the evidence. But they may help us be better people. They may include a faith in the goodness of others or an unshakable optimism about our own capacities. And these commitments may, even when unrewarded with successful action or proper treatment by others, still be good attitudes to have, good beliefs that lead to good actions. Call this view pragmatism.

These two puzzles, whether it makes sense to morally assess beliefs and what criteria are appropriate for such assessment, are old ones, and they constitute the core issues of the problem area of the ethics of belief. The ethics of belief is a domain in the liminal zone between two of philosophy’s historically central, but usually distinct, subject areas: epistemology and ethics. In many ways, the name for the problem area displays the puzzles, as it seems there is a cross-categorical judgment one must make in producing practical or moral judgments about theoretical movements of reason. Moral categories are for actions, theoretical categories are for representations, and these seem, at least on their face, not to overlap. How is the ethics of belief even possible?

2. McCormick’s Two-Part Argument

Miriam McCormick’s project in Believing Against the Evidence is to answer the two puzzles of the ethics of belief. Her book has two parts, each devoted to one of the two problems. McCormick first argues that if we are to normatively assess beliefs, we must determine what beliefs are for. Establishing this “teleological framework” requires that we see what role belief has in a complete human life. For the most part, McCormick agrees with the evidentialist thought that beliefs aim at truth, so beliefs should be formed with that goal in mind, that is, on the basis of good evidence. However, she holds that evidentialism’s default role in belief management is due to truth’s “practical value.” Since many reasons may bear on whether a belief is desirable, given its practical consequences, “it is possible for practical considerations to override the evidential (reasons) in favor of believing it” (31). Two important consequences follow. The first is the purely instrumental value of truth. As McCormick frames it:

If we lived in a world where true beliefs had no benefits, then, in my view, a proposition being true would not count at all in favor of it being believed. (45)

The second consequence is that in cases where believing the truth (or in accord with what the evidence points to as the truth) has bad practical consequences, we are permitted to believe otherwise. In particular, McCormick identifies what she calls “meaning-making beliefs”; exemplary are:

  • AIDS patients who, against the evidence, did not believe that they had a short life expectancy, and who had longer lives than those who believed in accordance with the evidence.
  • Plane crash survivors who hold on to the belief that they will be rescued.
  • The belief that one’s child (as opposed to a stranger) is not a drug dealer, despite the evidence.
  • The belief that humans have a greater capacity for love and kindness than evil.

Not all of these meaning-making beliefs are on equal footing, however. Only the last, on McCormick’s view, is permissible without qualification. McCormick characterizes our freedom with regard to these meaning-making beliefs as:

When it comes to questions about . . . what if anything provides meaning or significance in life . . . we have some flexibility in what we can permissibly believe. (56)

Further, McCormick includes what she calls “framework beliefs,” which resist radical skeptical challenge, such as:

  • The belief that there is an actual world of external objects
  • The belief that one’s children are not automata

Once we have seen the role that belief plays in our living complete lives, McCormick holds, we see that a kind of pragmatism follows, one that puts belief into contact with a flourishing and meaningful life.

To the second puzzle, how it is even possible to normatively evaluate beliefs (which she calls “the puzzle of doxastic responsibility”), McCormick makes the case that, though we do not have direct control over what we believe, we nevertheless are capable of appraising and finding fault with each other’s beliefs. We are responsible for “the mechanisms that issue in evidentially based beliefs” (114). This is first because we can change our first-order beliefs about what we’ve seen, remembered, heard, or inferred on the basis of reflecting on whether those sources were functioning well under the circumstances. And so, if we discover that someone believes something on the basis of a perceptual system that person acknowledges to be functioning badly, we would think that person blameworthy. The second reason is that we internalize these practices of holding each other to account, and so we have developed not only reflective capacities for epistemic (and moral) self-monitoring but also an internalized sense of how others monitor us.

McCormick concludes by noting that because she sees both practical and theoretical norms as eudaimonistic, responsibility for our actions and the character we reveal with them is symmetric with the way we are responsible for our beliefs and the intellectual character we display in assessing, assenting to, and maintaining them. “That we correctly hold each other responsible for the views we have about what is true, worthy, and good, reveals that we can exercise control over these central aspects of what we are” (123).

Ema Sullivan-Bissett

Response

Transparency Defended

One of the most significant contributions of Miriam McCormick’s book is the claim that belief does not require its own separate ethics, and McCormick gives an excellent case against there being a doxastic or epistemic normativity to which we must appeal when assessing the (im)permissibility of a belief. She also persuasively argues that belief is not as intimately tied to truth as many in the literature have supposed, and that the evolutionary purpose of belief is more complex and varied than simply tracking the truth. On these points, among others, I agree with McCormick, and on these points alone her book represents a serious and compelling contribution to the literature on the nature of belief.

In this short piece I focus on McCormick’s discussion of transparency, making two points with respect to it. First, McCormick’s explanation of transparency, which appeals to the desires of believers, does not work, since the phenomenon goes further than any interest of a subject that could be endorsed in deliberation over what to believe. Second, McCormick’s cases do not show that transparency does not always hold, and this matters for the account of doxastic control she develops later in the book.

1. Permissibility and Possibility

Before we turn to transparency, let us go over some claims McCormick makes about what is permissible and what is possible when deliberating over what to believe. McCormick claims that the truth of a proposition does not always count in favour of believing it (6), and that it is permissible to violate evidentialist dicta when faced with neutral or no evidence (52). I am not going to take issue with these claims understood merely as claims about what it is permissible to believe, since I agree with McCormick that, insofar as we can think of beliefs as being permissible or impermissible, there is no reason that it should always be impermissible to violate evidentialist dicta, and thus the truth of a proposition does not always count in favour of believing it.

However, I take issue with these claims as claims about what is possible in deliberation. I agree that the truth of a proposition does not always count in favour of believing it, and I agree that there is nothing in the nature of belief which speaks against this. However, my view is that it is impossible for us to fail to treat the truth of a proposition as counting in favour of believing it in deliberation over what to believe. So, though I might note that the truth of the proposition that the medication will not cure my illness does not count in favour of believing it (if I also know that positive thinking can have a positive effect on health), if, in deliberation over what to believe, I answer the question whether it is true that the medication will not cure my illness in the affirmative, I cannot but both treat that as a reason to believe and, also, go ahead and believe.

McCormick also makes some claims which are more straightforwardly about what is possible in deliberation over what to believe. She says that it is possible to form a belief even when one’s recognized evidential reasons do not support it, and that non-alethic considerations can be a part of doxastic deliberation (27). What we should understand by such considerations being a part of deliberation is important, and will be discussed below (§2.2). She also claims that when evidence is neutral, freedom is involved in what we believe (56), that reasoning is not centered just on the truth of a proposition but on what would be best to believe (111), and that practical advantages can override the disvalue of false beliefs (50n18). It is these claims, about what is possible in doxastic deliberation, that I will discuss in what follows.

2. Transparency

Transparency is the putative fact that “when asking oneself whether to believe that p” one must “immediately recognize that this question is settled by, and only by, answering the question whether p is true” (Shah 2003: 447). I am going to frame the discussion here in terms of transparency since, though McCormick mentions it explicitly only a few times in the book (26–27; 30; 110–11), it is clear that much of what she says about doxastic deliberation speaks to the question of whether and why transparency holds.

Importantly, proponents of transparency do not deny that non-alethic factors can play a role in the fixation of beliefs; their claim is only that “one cannot deliberatively, and in full awareness…let one’s beliefs be guided by anything but truth” (Steglich-Petersen 2006: 503). So though the formation of beliefs may well be influenced by non-alethic factors, those factors are not acknowledged as reasons to believe. It is on this point that McCormick challenges proponents of transparency, and on which I reply on their behalf.

2.1 McCormick’s Explanation

McCormick denies that transparency holds as a matter of conceptual truth about belief but goes on to say that she does not deny that “deliberating about whether to believe something is often the same as questioning whether it is true” (30, my emphasis). So she seeks to give an explanation of transparency at a reduced strength (that is, as contingently characterizing the belief formation of actual-world believers, and as not always doing so). Though I agree that transparency is a contingent feature of belief, I disagree with McCormick’s claim that it does not always characterize the doxastic deliberation of actual-world believers, a point I will take up in the next section.

Before that, let us turn to McCormick’s explanation of transparency. She claims that transparency can be explained by appeal to the interest we have in our beliefs being true (30–31). Nishi Shah objects to such an account on the grounds that though it can explain truth’s relevance in deliberation over what to believe, it cannot explain truth being dominant in such deliberation (Shah 2006: 490). In response, McCormick draws on Hilary Kornblith’s claim that insofar as one has goals, one has a reason to care about one’s beliefs being true (Kornblith 1993: 161, cited in McCormick 2015: 31). She takes it that this “general interest” in truth is enough to explain transparency (31).

I have argued elsewhere (Sullivan-Bissett 2017b) that an appeal to an agent’s intentions or desires cannot explain transparency. My discussion there takes place in the context of the teleological account of belief, according to which it is necessary to belief that it aims at truth, and this aim is realized in the intentional aims of the believer. Though McCormick disagrees with this characterization of what is necessary to belief (21–23), her explanation of transparency is similar to that given by this account. Asbjørn Steglich-Petersen has argued that transparency in doxastic deliberation can be explained “by the aim one necessarily adopts” in asking oneself the question whether to believe that p, “because the only considerations that could decide whether believing p would further promote that aim are considerations that bear on whether p is true” (Steglich-Petersen 2008: 546). However, if belief had an aim, then given that aims are by their very nature weighable (Owens 2003), other considerations could weigh into the deliberation which results in the fixation of belief. Transparency, though, is the immediate and non-inferential collapsing of one question into another, and so it cannot be an aim which explains this.

What is especially interesting about McCormick’s position is that such a line of attack will not work against it, because she allows for the possibility of other aims or desires entering deliberation over what to believe. Where the evidence is neutral or absent, she thinks one is permitted (and presumably able) to go ahead and believe for non-alethic reasons. The above objection to explaining transparency by appeal to believers’ interests, then, has not yet gotten a grip.

However, we only have to change the point a little to see that even McCormick’s account, with its freedom to believe for non-alethic reasons under certain conditions, cannot withstand this kind of worry. If it were my interest in believing truly which explained why I moved from whether to believe that p to whether p is true in deliberation over what to believe, I ought to be able to ignore that interest or prioritize a different interest, especially as McCormick thinks that this is possible in cases of neutral or absent evidence. However, consider the following case: I offer you one billion pounds to believe that I am seven feet tall. You can see me, and so you have very good evidence that I am not seven feet tall; indeed, I am probably not even very close to six feet tall. So this is not a case where the evidence for the proposition is absent. Nor is it a case where the evidence is neutral, since you have no reason to think that I really am seven feet tall and that you might be undergoing a visual illusion. So we are not in one of the cases where McCormick thinks it is permissible (and possible) to go ahead and believe, and indeed, she notes that it is not possible to believe that p when one takes p to be false (56).

However, if it were an interest in truth that explained the nature of deliberation over what to believe, then in a case such as this you ought to be able to go ahead and believe that I am seven feet tall. But you cannot do this, as McCormick notes when she says that we cannot believe for money (57). This claim occurs in the context of a discussion about what is required for control; McCormick’s point there is that not being able to believe for money does not speak against our being able to believe for practical reasons. That might be right. But my point here is that it is implausible that a subject’s general interest in having true beliefs could always trump her interest in, for example, being hugely rich.

2.2 McCormick’s Cases

McCormick offers some cases which she claims are ones in which the subject believes for non-alethic reasons, and therefore transparency does not always hold. I will suggest that where non-alethic considerations look to be playing a role in affecting what the subject believes, they do so by changing the standards required for believing in a particular context, not by providing non-alethic reasons for believing which are considered as such from the deliberative perspective. This characterization of such cases preserves transparency.1

I do not deny that a subject can take some pragmatic factor to be a reason for believing in one sense. For example, consider a subject offered a huge financial prize for believing something for which the evidence for and against is closely balanced. I think that such a subject could recognize that the prize represents excellent reason for belief, and that if she could cause herself to believe, then she would have reason to do so. What I deny, and what McCormick endorses, is that the subject could note, when deliberating over whether to believe, that the prize represents excellent reason for believing, and could respond to that as a reason to believe.

Let us look then to cases which McCormick takes to be cases of believing for practical reasons. One such case comes from Carl Ginet:

Sam is on a jury deliberating whether to find the defendant guilty as charged; if certain statements of a certain witness in the trial are true, then the defendant cannot have done what he is charged with; Sam deliberates about whether to believe those statements, to believe the prosecutor’s insinuations that the witness is lying, or to withhold belief on the matter altogether. He decides to believe the witness and votes to acquit. (Ginet 2001, cited in McCormick 2015: 27–28)

McCormick suggests that there is no reason, other than some assumed view of the nature of reasons, not to take Sam to be treating the practical considerations in this case as reasons to believe that the defendant is innocent.2

Another kind of case is that of so-called meaning-making beliefs, for example, survivors of a plane crash believing that they can live (against overwhelming evidence suggesting that they cannot). McCormick notes that in cases like this we should excuse believers, even if some such cases might result in pernicious believing (55). Other beliefs of this kind include those about what provides meaning in life or what happens after death. In such cases, when the evidence is neutral, McCormick claims that there is some freedom in our beliefs on these matters, and it is permissible to believe for practical reasons.

Finally, consider also beliefs about loved ones. Drawing on discussion by Sarah Stroud (2006), McCormick suggests that believing well of a friend may well involve ignoring or suppressing evidence, and would thus be a case of pernicious believing (though again, perhaps excusable). In cases where the evidence is closely balanced, though, McCormick claims that, provided the motive is good (feelings of love and generosity), believing good of one’s loved one is permissible (61).3

Though McCormick considers a number of explanations offered by Shah of these putative cases of believing for practical reasons which preserve transparency, there is an alternative route for the transparency proponent to take in light of such cases. She could (and, I think, should) say that in cases like these practical considerations are not being treated by the subject as reasons to believe in doxastic deliberation. Rather, these considerations function to modify the standards for sufficient evidence required for belief, and not as reasons for the subject to go ahead and believe.

We should prefer this explanation, since such an account would also have the resources to accommodate our not being able to believe for monetary reward. After all, as mentioned earlier, McCormick herself notes that we cannot believe something if we think it is false (56). Also, as has already been noted, it is open to the friend of transparency to allow non-alethic factors to play a role in the fixation of a belief; this is something all sides of the debate recognize (indeed, deliberation being transparent to truth considerations does not even rule out that the subject could note that such a thing is going on! [Archer 2015: 11–12]).

So an alternative account of these cases might look something like this: when there is practical cost or benefit at stake, one’s standards for belief are altered, but the reasons which motivate going ahead and believing are purely alethic. Practical considerations can affect a subject’s standard for being a believer, without those considerations functioning as reasons for which the subject goes ahead and believes from the deliberative perspective.

It is important, then, to distinguish practical reasons for believing from the influence of non-alethic considerations on standards for belief. Once we do that, it is very much open to us to describe McCormick’s cases as ones which do in fact exhibit transparency: cases in which non-alethic considerations change what the subject requires as sufficient evidence to form a belief, and not cases in which such considerations function as reasons for belief from the deliberative perspective.

It might be thought that the way we describe these cases is immaterial; why should we worry whether these practical considerations enter deliberation as reasons for belief, as McCormick suggests, rather than merely change the evidence required to be a believer, as I suggest? We should worry because how these cases get described matters for the work which comes later in the book. In particular, McCormick’s account of doxastic responsibility is based on the idea of guidance control, and this kind of control requires reasons-responsiveness in deliberation over what to believe (112). This means that if practical considerations change the standards required to be a believer across contexts, and are not reasons to which the deliberator can respond in forming beliefs, then we do not have the kind of doxastic control McCormick claims that we do. So it matters for McCormick’s project that she shows that her way of characterizing these cases is the correct way of doing so. I think she has not yet shown this.

3. Conclusions

I have raised two points about McCormick’s work on transparency in doxastic deliberation. First, her explanation of the phenomenon falls short, since it is implausible that our interest in truth could trump other interests in certain cases. Second, McCormick has not shown that transparency does not always characterize doxastic deliberation: the cases she suggests as examples of non-transparent doxastic deliberation are better understood in a transparency-preserving way.

Acknowledgments

I acknowledge the support of the European Research Council (Grant agreement: 616358) for funding the research of which this is a part.

References

Archer, Sophie. 2015. “Defending Exclusivity.” Philosophy and Phenomenological Research. doi: 10.1111/phpr.12268.

Ginet, Carl. 2001. “Deciding to Believe.” In Steup, Matthias (ed.), Knowledge, Truth, and Duty: Essays on Epistemic Justification, Responsibility, and Virtue. New York: Oxford University Press, pp. 63–76.

Kornblith, Hilary. 1993. “Epistemic Normativity.” Synthese. Vol. 94, no. 3, pp. 357–76.

McCormick, Miriam. 2015. Believing Against the Evidence: Agency and the Ethics of Belief. New York: Routledge.

McHugh, Conor. 2012. “Beliefs and Aims.” Philosophical Studies. Vol. 160, no. 3, pp. 425–39.

McHugh, Conor. 2013. “The Illusion of Exclusivity.” European Journal of Philosophy. Vol. 23, no. 4, pp. 1117–36.

Owens, David. 2003. “Does Belief Have an Aim?” Philosophical Studies. Vol. 115, no. 3, pp. 283–305.

Shah, Nishi. 2003. “How Truth Governs Beliefs.” Philosophical Review. Vol. 112, no. 4, pp. 447–82.

Shah, Nishi. 2006. “A New Argument for Evidentialism.” Philosophical Quarterly. Vol. 56, no. 225, pp. 481–98.

Steglich-Petersen, Asbjørn. 2006. “No Norm Needed: On the Aim of Belief.” Philosophical Quarterly. Vol. 56, no. 225, pp. 499–516.

Steglich-Petersen, Asbjørn. 2008. “Does Doxastic Transparency Support Evidentialism?” Dialectica. Vol. 62, no. 4, pp. 541–47.

Stroud, Sarah. 2006. “Epistemic Partiality in Friendship.” Ethics. Vol. 116, no. 3, pp. 498–524.

Sullivan-Bissett, Ema. 2017a. “Aims and Exclusivity.” European Journal of Philosophy. doi: 10.1111/ejop.12183.

Sullivan-Bissett, Ema. 2017b. “Explaining Doxastic Transparency: Aim, Norm, or Function?” Synthese. doi: 10.1007/s11229-017-1377-0.


  1. Elsewhere I have argued in a similar way against Conor McHugh’s claim (2012; 2013) that exclusivity to truth does not characterize deliberation (Sullivan-Bissett 2017a).

  2. However, towards the end of the book, in considering more of Ginet’s cases, McCormick suggests that such cases are not, after all, cases of deciding to believe. Rather they are cases of deciding to act a certain way, and belief then follows (80–82).

  3. I think that the case of different doxastic treatment of loved ones muddies the water; discussion here is complicated by the idea that the relationship itself might be evidence of the friend’s or loved one’s good standing (Archer 2015: 8n11). Also, McCormick does not talk here of practical reasons for believing, but rather of motives stemming from “feelings of love and generosity” (61).


    Miriam Schleifer McCormick

    Reply

    Reply to Sullivan-Bissett

    Sullivan-Bissett focuses her discussion on my view of transparency and criticizes it in two ways. First, she argues that my explanation of the common phenomenon that transparency claims to explain is untenable and, second, that the cases I put forth to show that one can believe for practical reasons (and so are purported counterexamples to transparency) are not convincing. She argues that an alternative way of understanding these cases which preserves transparency is preferable.

    Again, transparency is understood as the putative fact that in forming beliefs non-alethic factors cannot be “acknowledged as reasons to believe.”

    I argue that the fact that deliberating about whether to believe p can frequently be understood as asking whether p is true can be explained by the overwhelming interest we have in having true beliefs. Sullivan-Bissett then asks why, in cases where this interest is trumped by other ones, I could not prioritize a different interest. She considers the case which many think conclusively demonstrates that one cannot believe for non-evidential reasons: that of being offered a huge reward for believing something which I have very good evidence is not true, for example that you are seven feet tall when I can see that you are not: “If it were my interest in believing truly which explained why I moved from whether to believe that p to whether p is true in deliberation over what to believe, I ought to be able to ignore that interest or prioritize a different interest.” I ought to be able to prioritize my interest in money over my interest in truth and believe you are seven feet tall.

    To respond to this concern it is important to clarify what I mean by the “interest in truth” which we all have. I do not mean an interest in particular truths; it is quite clear that many particular truths can fail to serve one’s interests or even undermine them. Michael Lynch, whose views I discuss in chapter 2, has a chapter titled “Truth Hurts” where he discusses cases of “dangerous knowledge” such as knowing how to make nuclear weapons—or knowledge related to cloning and genetic engineering. We may have interests in not gaining truths about some of these matters. Further, accumulating true beliefs about entirely trivial matters may be against my interests because of the sapping of cognitive resources that could be better employed elsewhere.

    Rather, the interest in truth that explains cases of seeming transparency is a “general interest.” Another way of putting it is that we (and by “we” I mean all of us who have any goals at all) have a very strong interest in our beliefs generally being true. Meeting this interest requires developing a cognitive system that generally produces true beliefs. That such a system, akin to a circulatory system that allows nutrients and oxygen to be distributed throughout the body, cannot be easily overridden makes sense. But, as I say when discussing David Velleman’s view in chapter 1, the best system is one that allows for some deviations. Eyes that work well will allow us to see; cognitive systems that work well will lead us to form true beliefs. Both our eyes and our beliefs are regulated so that we can survive and perhaps flourish. Imagine if there were some contexts in which blurry vision would better serve our purposes; then, an ocular system that allowed for clear sight when needed and blurred vision when that was called for would be the optimal design. The same holds for beliefs. If it turns out that at times the truth of beliefs is irrelevant to our survival and flourishing, then the best system would be one that is, in general, regulated for truth in the manner described by Velleman, but can be suspended at times. Both our respiratory and circulatory systems operate in this way. Breath and blood flow regularly, but at times their deviation is a good thing. So, when we are in dangerous situations, our breath quickens as adrenaline surges. When we have a cut, blood flows quickly to the area, which swells so that the cut can heal (22).

    So, I have argued that the exceptional cases must be ones where one is not ignoring or suppressing evidence but, rather, ones where the evidence is neutral or absent. In some such cases, I have argued, non-evidential factors can operate as reasons for believing. But Sullivan-Bissett is not convinced that these cases are best understood as cases of believing for practical reasons; she understands them, instead, as cases where such factors alter the standards for what counts as sufficient evidence for believing. Skepticism about the possibility of believing for practical reasons is widespread. Evidentialists tend not to be fazed by examples where it seems as if non-evidential factors are operating as reasons. All will admit that non-evidential considerations can, in fact, contribute causally to what one believes. Many (though not all) will even say that such considerations can count as reasons for these subjects to believe what they do, and, again, such reasons may partially cause the beliefs. What they deny, however, is that these subjects believe for these non-evidential reasons.

    To try to articulate what it means to believe for a reason, as opposed to the reason simply being one of the causes of the belief, is not simple, and philosophers disagree on the nature of the relationship. One finds a parallel problem when trying to articulate what it means to act for a reason as opposed to the reason simply being a cause of an action. But at least one condition that must be met is that the subject recognize it as her reason. This is not quite enough, for one can take a third-person perspective on oneself and see that one of the reasons was the cause without its having operated as the basis or grounds for the belief or action. A fairly strong constraint on what counts as a reason for F-ing, one argued for by Shah in his defense of transparency, is that it be capable of operating as a premise in deliberation. I have deliberately constructed cases where the subjects consciously and explicitly employ these reasons in their deliberation, where they can say to themselves: “I am going to believe p (at least partly) because it is good for me to do so,” and if this is the case then the subjects are believing for these non-evidential reasons. One may think even more is needed, that these agents, once having formed their beliefs, must be able to recognize their non-evidential reasons for believing. I think this is possible in certain cases; you can see that some of the considerations sustaining your belief that your lover is faithful are non-evidential. But that one needs to be able to recognize one’s reason for believing once one believes is an overly demanding constraint on what is required to believe for a reason. Consider an ordinary case of believing for an evidential reason. You believe the match will go ahead and the reason you believe this is that it is sunny. If we accept Shah’s strong constraint on reasons, namely that for a consideration to be a reason for you to F, it must be a consideration from which you could reason to F-ing, then what makes the fact that it is sunny outside a reason for your belief is that this fact is used in your reasoning to the conclusion that the match will go ahead. What gives this constraint plausibility is that reasons should guide us. But to add the further constraint that for a consideration to be a reason one must have full conscious awareness of it being a reason for which one Fs would imply that we rarely believe (or act, for that matter) for reasons. You form the belief that the match will go ahead and so go to the match. If you do not maintain full consciousness of why you so believe, do you thereby no longer believe for a reason?1

    If it is conceptually possible to believe for practical reasons, as Sullivan-Bissett (unlike others) allows, then one needs to examine each purported counterexample to transparency and decide which explanation is more plausible. Perhaps it is the case that sometimes these non-evidential factors operate so that the standards for what counts as sufficient reason for believing are raised or lowered, but why think that this is always the case? Sullivan-Bissett says that “the cases [I] suggests as examples of non-transparent doxastic deliberation are better understood in a transparency-preserving way.” But all I see is that they can be understood in a transparency-preserving way, not that they are better understood. In more recent work I have introduced new cases that try to increase the plausibility of believing for non-evidential reasons. Those who are convinced such cases are impossible will always find some way to redescribe them such that only evidential considerations count as reasons. But if transparency is a contingent feature of belief, as Sullivan-Bissett acknowledges, then cases of actual-world believers believing for non-evidential reasons may well exist.


    1. Jonathan Way (2016) has argued that for the constraint on reasoning to preclude non-evidential reasons for belief it needs to be this very strong constraint, but that, unlike the weaker constraint that just says a reason needs to be capable of motivating or of operating in deliberation or reasoning, “the condition looks gerrymandered to support an argument for evidentialism” (812). Susanna Rinard (2015) has recently argued that the characterizations of the basing relation which rule out non-evidential reasons for belief rule out a lot more; namely, they rule out non-evidential reasons for action as well.


      Dustin Nelson

      Reply

      The Basing Relation

      Miriam, I’m wondering if this might be helpful.

      Several years ago, I gave a paper about believing for practical reasons when one’s evidence is neutral.

      It was focused on Tom Kelly’s discussion of the so-called “basing relation.”

      In that paper, I point out something odd in one of Kelly’s examples.

      Consider the following:
      Jones believes that Smith has won the lottery.
      Based on this belief, Jones believes that Smith is rich.
      It turns out that Smith has not won the lottery, but he is nonetheless rich.

      So, Jones has a belief about Smith (that he’s rich) that isn’t based on evidence. It’s based on a false belief.

      I argued that this opens the door for a belief to be based on something other than evidence alone.

      Thoughts?


      Miriam Schleifer McCormick

      Reply

      Proper and Improper Basing

      Thanks Dustin. I have been thinking (and writing) about the basing relation quite a bit lately. I think very few accounts rule out practical reasons being reasons for which we believe; that is, very few rule out beliefs being based on practical reasons. But I think your observations here are not sufficient to establish that this is so. In the case of Jones’s belief it may be that his belief is still based on evidence, that is, on what he perceives as evidence, but the evidence is faulty or bad. In such a case his belief is improperly based, as the reason cannot serve as a justification for his belief about Smith, and an account of basing should be able to distinguish and explain the difference between proper and improper basing.
      Many evidentialists (in the sense I mean, those who claim only evidence can be a reason for belief) will view purported examples of believing for practical reasons in similar terms. In such cases, they may say, a person is accepting an unwarranted evidential principle. So, though a third-person perspective can indicate that his belief is not properly based on evidential reasons, from a first-person perspective, the agent mistakenly sees desirability as an indicator of truth.

Dustin Nelson

Response

Are Non-Evidentially Based Beliefs Epistemically Evaluable?

Introduction

In Believing Against the Evidence, Miriam McCormick presents a modest alternative to strict evidentialist views regarding the ethics of belief. These views hold that the norms that govern the practice of believing are strictly evidential. Roughly, our beliefs should track our evidence, and no practical considerations may count in an epistemic evaluation of a belief. McCormick argues, however, that “it is sometimes permissible to violate evidentialist dicta when faced with neutral evidence or no evidence at all” (52). In these instances, it can be permissible to believe without sufficient evidence. I will ultimately argue, however, that McCormick has failed to articulate a non-evidentialist criterion sufficient for epistemic evaluation.

The Evidentialist–Pragmatist Tension

McCormick does not outright defend pragmatism regarding the ethics of belief. Rather, her view is very narrowly construed to cover only a certain class of beliefs that arise under a very special set of circumstances. The very special set of circumstances is when one considers a proposition for which the evidence for its truth is lacking or when the evidence is neutral regarding the truth of the proposition. McCormick, then, does not eschew evidentialism. In fact, she says “if one has evidence that one’s belief is false, and maintains the belief by deliberately ignoring that evidence, then one’s practical belief is impermissible” (52). McCormick, then, seems to be arguing not that it is permissible to believe against the evidence, but rather, that it can be permissible to believe without evidence.

I now want to ask whether McCormick’s view actually gains anything over the evidentialist. Here is, roughly, the contrast that is being drawn by McCormick:

(1) “Any belief formed against the evidence is impermissible.” (1)

(2) “If a belief helps us flourish without being evidentially based, it can be permissible to hold that belief.” (52)

The first claim is, of course, the evidentialist view. The second is McCormick’s. And if the second is correct, then, presumably, the first one cannot be correct. I will argue that, as they’re presented, (1) and (2) are not as incompatible as McCormick suggests. I will argue that the perceived incompatibility stems from a subtle equivocation in the notion of “permissibility” occurring between (1) and (2).

Identifying Impermissible Beliefs

If the evidentialist labels a potential belief as impermissible, she is claiming one of two things. Either that the belief lacks sufficient evidence, or that it just isn’t the kind of mental state that can be subjected to epistemic evaluation. Consider the former claim first. This is just the basic evidentialist idea—beliefs must be proportioned according to one’s evidence. If there is insufficient evidence for p, then believing p will be impermissible. In such circumstances, the potential believer must simply suspend belief. However, the more relevant claim for the ethics of belief is the second claim—that certain mental states should be necessarily excluded from epistemic evaluation. I will argue that this second kind of impermissibility is the type found in (1). But, in (2), McCormick is referring almost exclusively to moral permissibility. And while McCormick may succeed on (2), because of the equivocation, this does not necessarily undermine (1).

Let’s consider a kind of belief McCormick contends lacks sufficient evidence but would nonetheless be permissible to hold: my daughter is not an automaton. Unfortunately, my belief that my daughter is not an automaton “cannot be grounded in evidence” (60). Yet, according to McCormick, “[this belief] is not faulty in any way” (60). So, since this apparent belief is not grounded in evidence, the evidentialist must explain why it would nonetheless be impermissible to hold such an important belief.

The evidentialist could first deny that my belief about my daughter constitutes a belief at all. McCormick, however, rejects this move in chapter 1 and I accept this argument. Thus, the evidentialist must contend with what appears to be a genuine, though evidentially groundless, belief. I think the evidentialist has an out. If this is a genuine belief, then the mere fact that I believe it does not necessarily make it permissible. This is surely not enough to make a belief epistemically appropriate. So, the mere fact that I do believe my daughter is not an automaton does not make it epistemically permissible for me to hold this belief. Thus, whether or not a belief is permissible to hold will depend on there being some independent epistemic criteria for evaluating that belief.

McCormick argues that holding this belief about my daughter would be permissible because it contributes to the overall value of my life. This belief likely allows me to flourish. But this doesn’t seem epistemically helpful. Every belief that contributes to the overall value of my life would then be permissible on this account. The evidentialist would surely insist that McCormick show that this belief could be, in some way, epistemically evaluable. Here, I think, is where the evidentialist will appeal to the second understanding of permissibility discussed above. For a belief to be epistemically evaluable, there must be some epistemic criteria that can be employed to evaluate the belief.

Epistemically Evaluable Beliefs

If a belief is not epistemically evaluable, then even if it may be beneficial to hold for some other reason, it will nonetheless be epistemically impermissible. It could be called a belief, but it would be substantively no different than an emotion. There are no epistemic criteria, for instance, for evaluating the love I have for my daughter. So, even if we could construe that emotion as a belief, this would not make it epistemically evaluable. Thus, if there are no epistemic criteria available to evaluate some particular belief or mental state, then the belief or mental state would simply be out of bounds epistemically. Or simply: impermissible.

McCormick argues that the evidentialist, however, cannot separate epistemic norms from moral norms (chapter 2). The evidentialist, she says, cannot evaluate a belief without linking epistemic value to some other (moral) value. So, the evidentialist cannot simply say that some belief is impermissible on purely epistemic grounds. I have argued, however, that it is possible for the evidentialist to determine what is and is not potentially evaluable. Namely, if there are no independent epistemic criteria for evaluating a belief, then the belief will necessarily be epistemically impermissible. The evidentialist, then, may admit that McCormick is correct about the beliefs that are actually evaluated by the evidentialist. Those beliefs that evidentialism deems appropriate may be appropriate in part because of their furtherance of some other and greater (moral) value. Nonetheless, the evidentialist may still be able to deny that some beliefs (and mental states) are open to epistemic evaluation.

In circumstances in which there is no evidence that points in favor of any particular belief, there can be no basis for an epistemic evaluation. Thus, the evidentialist recommends suspending belief in such circumstances. This, however, is not an epistemic evaluation itself. It is a predetermination as to whether the belief in question can actually be subjected to epistemic evaluation, and so is not then subject to McCormick’s attack. To determine whether or not a non-evidentially based belief can be epistemically evaluated is just to ask whether there are some epistemic criteria that can be employed to evaluate such a belief. McCormick, however, supplies no such criteria.

McCormick does claim that moral evaluation can sometimes overtake or stand in for epistemic evaluation. So, for instance, assume that evidence regarding p and ~p is either neutral or insufficient. Assume, however, that I nonetheless believe p for reasons that make believing p good for me as a person. McCormick argues that I could permissibly believe p because it is good for me as a person. But this is not an epistemic evaluation. This is no different than claiming that it’s permissible for me to love my daughter because it will be good for me to do so. And this is not an epistemic criterion.

Conclusion

I have argued that McCormick has plausibly linked epistemic value to some other and greater (moral) value. I have denied, however, that this necessarily makes it impossible for the evidentialist to determine what kinds of beliefs (or mental states) are epistemically evaluable. To the extent that there are no independent epistemic criteria for evaluating non-evidentially based beliefs, the evidentialist can thus appropriately describe such beliefs as impermissible.

References

McCormick, Miriam. 2015. Believing Against the Evidence: Agency and the Ethics of Belief. New York: Routledge.


    Miriam Schleifer McCormick

    Reply

    Reply to Nelson

    Nelson’s discussion highlights one of the key questions that I explore both in my book and in further work, namely: what are, and what ought to be, the criteria of evaluation for belief? A widespread answer to this question among epistemologists is that these criteria are epistemic. When an evidentialist deems a belief impermissible, on this view, what is being appealed to is strictly epistemic evaluation. Some (though not all) agree that there can be a sense in which beliefs can be subject to other kinds of evaluation, but these would not be relevant to the epistemologist. Nelson argues that the kinds of cases that I point to as permissible non-evidentially based beliefs may be permissible in some other sense, but once we restrict the notion to epistemic permissibility they remain impermissible. This is so, Nelson argues, because from the epistemic perspective a belief is impermissible when the belief either lacks sufficient evidence or “it just isn’t the kind of mental state that can be subjected to epistemic evaluation.” I am thus charged with equivocation because, according to Nelson, when I say that some non-evidentially based beliefs are permissible I mean morally permissible, but this does not address the kind of impermissibility which concerns the evidentialist.

    One of the recurring themes in my book is that maintaining this kind of autonomous domain of the epistemic is deeply problematic. If we think of epistemic appraisals as having normative force, such that it is appropriate to reproach those who violate those standards, then epistemic value needs to be grounded in pragmatic value. While one can carve out a particular critical domain in all kinds of areas where the values or norms are specific to that area, epistemic norms are supposed to be different from such local norms; they are taken to have a normative force that applies to rational agents in a way that sartorial or gustatory norms do not. I argue that the only way to make sense of an epistemic value which can ground any substantial normative claims is to ground it in individual and collective flourishing, in the practical and the moral. Thinking about why we value truth and knowledge reveals that the norms guiding us in the epistemic realm are not isolated from other normative domains.

    Nelson acknowledges this argument, but argues that even if I am correct that those beliefs “that evidentialism deems appropriate may be appropriate in part because of their furtherance of some other and greater (moral) value,” nonetheless “the evidentialist may still be able to deny that some beliefs (and mental states) are open to epistemic evaluation.” Nelson argues that the cases I consider of permissible beliefs that lack evidential support are excluded from epistemic evaluation because there are no epistemic criteria that can be employed to evaluate them. And, Nelson claims, if a belief is not open to such evaluation then it is impermissible.

    I have two concerns about this argument. First, most evidentialists would deny that “in circumstances in which there is no evidence that points in favor of any particular belief, there can be no basis for an epistemic evaluation.” Rather, the evidentialist claims that the appropriate epistemic stance to take in such circumstances is suspension or withholding. Nelson acknowledges that the evidentialist recommends suspending belief in such circumstances but says this “is not an epistemic evaluation itself.” I am not sure what other kind of evaluation it could be. Here is Richard Feldman’s summary of the evidentialist view: “If a person is going to adopt any attitude toward a proposition, then that person ought to believe it if his current evidence supports it, disbelieve it if his current evidence is against it, and suspend judgment about it if his evidence is neutral (or close to neutral).” He is clear that from a prudential or moral perspective another doxastic attitude may be warranted. But from the supposed epistemic perspective, if one holds any attitude other than suspension, one is doing something epistemically evaluable and impermissible.

    This discussion about the doxastic attitude that one should adopt when the evidence is absent or neutral also speaks to another point Nelson brings up. He notes that I seem to be arguing “not that it is permissible to believe against the evidence, but rather, that it can be permissible to believe without evidence.” I think there is something to this, and it is why I was reluctant to have the title be what it is. I acknowledge in the preface of the book that Beri Marušić convinced me that I could not maintain a distinction between permissible belief without evidence and permissible belief against evidence. And this is because, if one believes rather than suspends in the cases I consider, one is believing in a way that is opposed to what the evidence dictates. For the evidence dictates suspension.

    My second worry concerns the claim that the kind of impermissibility of main concern to the evidentialist is the kind on which the mental state is not subject to epistemic evaluation at all; this seems like a really odd kind of impermissibility. If something is not subject to a kind of evaluation, would you say it was impermissible? If I choose to wear a red shirt instead of a blue one, and this act is clearly not morally evaluable (though perhaps it is aesthetically), would one say it is morally impermissible? If anything, the opposite holds; when the criteria are inapplicable, the action (or state) is permissible.

    Finally, Nelson suggests that some of the permissible “practical” beliefs I consider seem a lot like emotions. The belief that my daughter is not an automaton is “substantively no different than an emotion.” And again, emotions, he says, are not appropriate targets of epistemic evaluation. In the book I gesture to the idea that beliefs should be thought of as more like emotions than most philosophers seem to think, beginning my book with this quotation from Hume: “Belief is more properly an act of the sensitive, than of the cogitative part of our natures.” I am now working on clarifying and expanding this idea, showing that on a prevalent way of theorizing about emotions, where they are thought of as having both a representational/cognitive aspect and a phenomenal/feeling aspect, beliefs are emotions. But emotions are not immune from epistemic evaluation. If your fear or anger is based on misrepresenting the facts or faulty inferences, then it would be appropriate to criticize you for having inappropriate emotions, and this would be on epistemic grounds.


      Dustin Nelson

      Reply

      Just a quick follow up….

      Hi, Miriam. Thanks for the reply!

      I just have a couple of quick thoughts, and maybe we can go from there….

      1.
      First, let me see if I can clarify my thoughts on the permissibility issue.

      Here’s what I’m thinking. Imagine that it’s possible to induce me to have the thought that squares have five sides, using some kind of brain stimulus in a psychology lab. In a sense, regarding this particular mental state, I’m like a brain in a vat.

      To me, the thought that squares have five sides now resembles a belief. I would say, “I believe squares have five sides.”

      But, let’s imagine, if the stimulus is removed, I will not have this weird thought about squares.

      Those in the lab could reasonably say that my thought about squares having five sides is impermissible *as a belief*. This is so not because such a belief would go against the evidence, but because it just isn’t the kind of thing that is epistemically evaluable.

      My mental state is not a belief. And shouldn’t be treated as one.

      If pragmatic considerations, for an otherwise epistemically neutral proposition, give rise to mental states that resemble beliefs, then these mental states may simply be no different than my induced mental state that squares have five sides. Just not something that is epistemically evaluable.

      [Sidebar – it seems almost like I’m getting at the idea that the definition of a belief necessarily requires it to be epistemic. Would it be all that bad if the kinds of beliefs you want to talk about are just a different species of belief?]

      2.
      I worry that you’re being a little too flip with the shirt analogy.

      I’m not sure why the color of one’s shirt isn’t morally evaluable.

      Imagine that you know that if you wear red outside, a shirt-color-detecting camera will spot your red shirt, and this will cause a bomb to be dropped on a village somewhere else in the world.

      Surely, the color of your shirt would matter.

      I think what you’re getting at is that our starting assumption for morality is that our actions are permissible unless shown otherwise.

      But I’m not sure if beliefs are like this.

      It’s not clear to me that we should begin with the assumption that it’s ok to believe whatever we want unless we’re shown evidence to the contrary.


      Dustin Nelson

      Reply

      A More Helpful Reply….. Maybe

      I think I might have a better example.

      Imagine that my daughter says this: “I believe that blue is the tallest color.”

      Assume that this is a bona fide belief.

      But, it’s not epistemically evaluable.

      It’s not as if she should suspend judgment as to whether blue is the tallest color.

      Rather, the belief that blue is the tallest color just isn’t epistemically evaluable. Therefore, impermissible to hold as a belief.

      I’m wondering what would be so wrong about thinking of pragmatic beliefs in much the same way as this. After all, in the cases you endorse, there is no epistemic evidence in favor of the proposition.

    • Miriam Schleifer McCormick

      Species of belief: A response to the Sidebar comment (and to some of what Trevor says below)

      Thanks, Dustin. In presenting my work, I would often be faced with questions like this in the Q&A. If a mental state is not based on “evidential” or “epistemic” reasons, then it is, some think, thereby not a belief but some other species of “holding true,” perhaps more like a hope or a practical acceptance of a certain kind. Berislav Marušić has argued that a very specific type of belief, in cases of promising and resolving, where whether something is true is up to me, can be formed and evaluated in light of practical reasons. These practical beliefs are very different kinds of beliefs than theoretical beliefs. He is inclined to think of them as two species under the genus of “belief” because they both involve a commitment to, and representation of, truth, and share a similar functional profile. But he sometimes seems not to worry too much about whether they get classified as beliefs or as distinct attitudes. I have been a lot less sanguine about issues of classification. I think it is important to recognize that “beliefs” are not simply outputs of information processors, and there are costs to excluding attitudes from the belief category that do not fit neatly into the evidentialist’s narrow view of them. But I also don’t think that beliefs can, or should, be as neatly divided into different kinds the way that Beri wants to divide them. I think that there is much more overlap and intertwining between the kinds of considerations that bear on the forming and assessing of beliefs. Beliefs, in general, can be formed and evaluated according to both evidential and practical considerations. Exactly how and when different considerations apply will depend on the circumstances, much the way both apply when evaluating emotions.
      But one may wonder why I care so much about taxonomy. And why not allow that there is a clear type of belief (call it “theoretical”) which corresponds to what many evidentialists think of as the only attitude properly categorized as a belief, namely one that can only be formed and assessed according to evidential considerations? I have a number of reasons for resisting this way of carving up the mental landscape. I will talk about three.
      First, it allows epistemologists to dismiss those other beliefs, or belief-like attitudes, explaining away problem cases by saying that these refer to another kind of attitude, one that is not their concern. One motivation for such a restriction could be the idea that such beliefs are the only kinds that can be knowledge. Ernest Sosa, for example, distinguishes between beliefs that are epistemic and those that are not. An epistemic performance is deemed apt, for Sosa, when one believes truly because one manifests epistemic virtue or competence. Sosa allows that one can be motivated to believe for practical reasons and that, all things considered, it can be rational to do so. The idea is that epistemology is interested in epistemic rationality, and that explains why the focus is kept on a particular subset of beliefs.
      There is nothing wrong with focusing inquiry on a particular category of belief and asking some very specific questions about that category. One may, for example, focus inquiry on testimony-based belief, or moral beliefs, or perceptual beliefs. But what Sosa is suggesting is different, and it is less clear what species of belief he has in mind when he refers to some beliefs as epistemic and some as not. Why would a belief that was based partly on practical or moral reasons cease to be epistemic? If one came to view such a belief as false one would cease to believe it and, if it were false and one thought it true, it would still be a faulty belief.
      Second, most theorists view epistemology as normative. But the narrower the focus, the less clear the sense in which it is normative becomes. Normative ethics focuses on questions of how we ought to act, and the scope of actions with which it is concerned is very broad. Analogously, if epistemology is normative it should focus on questions of how we ought to believe, where the scope of beliefs with which it is concerned is also very broad. One may reply that it is still normative, focusing on a particular normative category, namely the “epistemic.” But the idea that there is an autonomous, substantively normative category of the “epistemic” is, as we have seen, something I question.
      Finally, I think that if one does not allow room for practical reasons for belief (rather than for some other attitude), this problematizes doxastic agency. When I am trying to decide whether to believe something, I do not clearly adopt one of the perspectives when deliberating (the decider, the truster, the theoretician), and it does not seem that the only beliefs over which I exercise agency are the ones where I adopt the “agent’s” perspective on my decisions and resolutions.

    • Miriam Schleifer McCormick

      A quick response

      It is interesting to try to articulate the difference between a belief that is incoherent and one that is based on non-evidential reasons. There is something clearly wrong with someone’s belief that squares have five sides or that blue is the tallest color. They have failed to grasp the concept of “square” or “blue,” and, at least in your daughter’s case, they can be taught and can see where they have gone wrong. Beliefs that are partly based on practical reasons are not like this. They are coherent, often true, and based on reasons. They can be assessed, evaluated, and corrected, but in a very different manner than the blue belief your daughter has.

Response

Why Do We Value True Beliefs?

In chapter 2 of Believing Against the Evidence, Miriam McCormick argues that truth does not have non-instrumental value. In other words, she argues that truth is not valuable for its own sake: it is only valuable insofar as it leads to some other good. She ultimately argues that the value of a true belief is tied to its practical benefits: “If we lived in a world where true beliefs had no benefits, then, in my view, a proposition being true would not count at all in favor of its being believed” (45).1 She views the value of truth as “tied to what believers value” and denies that truth is valuable when it serves no further purpose (45). In this short commentary, I press McCormick on this view and consider the implications for her account if in fact truth does have non-instrumental value.

My objection to McCormick’s position is a variation of Michael Lynch’s (2005) claim that we cannot properly explain how we care about truth if we appeal only to its instrumental value. Lynch’s idea is that properly caring about truth is analogous to the way we might care about our children. We care about our children for their own sakes and not because of how they help us attain some other good; their value is not dependent on any other value. McCormick responds to Lynch by highlighting the way in which other things we value—self-respect, integrity, authenticity, happiness, and so on—are usually invoked to explain the value of truth. As she rightly points out, the value of truth appears to be instrumental if its value stands or falls with these other values (46–47). After all, if this picture is accurate, then the epistemic value of truth is dependent on non-epistemic value.

I believe, however, that there is a way to capture Lynch’s underlying idea that does not appeal to non-epistemic values. My argument uses the method of reflective equilibrium.2 This method involves trying to explain our considered judgments—those convictions that survive sustained critical reflection under conditions conducive to good reasoning (e.g., no manipulation, an absence of social biases)—in terms of principles that can be unified into a coherent system. A state of perfect coherence among all our theoretical principles and considered judgments is the ideal equilibrium at which the method aims. The argument begins with a considered judgment that I believe virtually everyone shares: truth is valuable.

We then have to determine which account of truth’s value best explains this judgment: an instrumental account or a non-instrumental one. McCormick stresses that true beliefs often have high instrumental value in our daily experiences. Following Kornblith (1993, 2002), she suggests that true beliefs usually have considerable value because they contribute to our ability to achieve our goals, whatever those happen to be. She also expresses skepticism about the value of trivial, instrumentally useless true beliefs (43–44). But I am not sure her points on these matters are ultimately convincing.

We certainly need some true beliefs to pursue our goals, but I am not sure that truth is really so central to their fulfillment as McCormick would suggest. Many people live successful and fulfilling lives despite having worldviews grounded in substantially false belief systems. As one illustration, consider the vast diversity of religious belief. Since religious beliefs vary so much and there is no consensus about which of these worldviews (if any) is correct, it follows that the majority of people have incorrect religious beliefs. This same observation can be made about many other complex belief systems concerning ethical and political matters. Religious, moral, and political beliefs can radically alter the life plans that people pursue. The problem is that flourishing lives take many forms, and it is clearly possible for some people to flourish even when the worldviews that they have based their lives upon are dominantly false. People are also prone to various behaviors, such as self-deception or rationalization, in which they avoid confronting the truth to avoid psychological distress. In these ways, it seems that truth is not as tightly connected to human flourishing as McCormick supposes. Thus, I worry that the value of true beliefs will prove to be much more fragile than McCormick believes.

Additionally, I do not share McCormick’s judgment about seemingly trivial true beliefs. I lean toward the view that all true beliefs are prima facie good to hold but that they may not be good to hold all things considered. There can be cases where the instrumental value of a belief is negative and overrides this non-instrumental goodness (e.g., having true beliefs about how to make nuclear weapons), and there can be cases where the costs of acquiring a true belief are so high that trying to learn the truth is not prudent. But McCormick rejects this picture.

One of her crucial points against the prima facie goodness of true beliefs concerns an analogy between morally good actions and true beliefs. Many morally good actions have prima facie value but are not what one has most reason to do all things considered. As suggested in the prior paragraph, many wish to say something similar regarding true beliefs. McCormick argues that this is mistaken:

I think this analogy, rather than supporting the view that even the most trivial true beliefs have value, actually highlights problems with it. If I had superhuman powers and could help anyone who would benefit from crossing the street without sacrificing other things of importance, I would do it. If the same powers allowed me to acquire beliefs about numbers of threads in carpets, or grains of sand on the beach, it is not at all clear that I would be motivated to employ my powers in such a way. Why is it even prima facie good to believe the most trivial truth? (43–44)

I will admit that McCormick’s remarks here are persuasive at first glance. When a true belief does not clearly contribute to some other good, it seems reasonable to question why it would be even prima facie good to hold that belief. But McCormick’s verdict also follows too quickly. One complication with beliefs is that our brains have a limited cognitive capacity. We might worry that filling our minds with true beliefs that have no practical value will make it more difficult for us to retain those true beliefs that do have practical value. In our day-to-day lives, we usually have to focus on collecting true beliefs that are of practical use to us. But if we suppose in the thought experiment that we have an extraordinarily robust cognitive capacity, I do not share the intuition that there would be nothing gained by using superpowers to acquire these so-called trivial beliefs.

Nevertheless, the question that McCormick poses at the end of her thought experiment needs an answer. What exactly is it that explains why true beliefs have non-instrumental value? McCormick does a commendable job showing that explanations appealing to human flourishing are unlikely to succeed. My alternative proposal is that their non-instrumental value is tied to true beliefs accurately reflecting reality. I believe that most human beings have a strong desire for their understanding of the world to accurately reflect the way that the world actually is. Consider some well-known skeptical hypotheses: my perceptions are all a result of systematic deception by an evil demon, I am actually dreaming and merely imagining all my supposedly real experiences, or I am actually a brain in a vat being fed sense data by an evil scientist to create the illusion of other experiences.3 Were any of these scenarios true, I think it fair to say that it would cause someone a great deal of distress to learn this truth, and it may well not be possible to do anything to change one’s circumstances. But even when it would hinder someone’s flourishing to learn the truth in these scenarios, it does not strike me as obvious that people will generally want to avoid the truth in these scenarios. They might ultimately reason to that conclusion, but I doubt that everyone will have a straightforward intuition that they should not care at all about learning the truth in these scenarios.

The mere fact that some deliberation seems necessary to conclude that we should shun the truth in extreme skeptical scenarios indicates that there must be something about truth that we value beyond its ability to promote our flourishing or make us better off. In these scenarios, learning the truth may impede those goals. So why might we still want to know that our perceptions and beliefs are so thoroughly misguided? The only explanation I can offer is that we think it good for its own sake that we have an accurate perception of the way the world is: even when having an accurate perception of reality makes us unhappy or otherwise hinders our ability to live well, we still recognize that there is something good in having an accurate picture of the world around us. If the value of truth is only instrumental, then it should be clear that the truth ought to be avoided when it lacks instrumental usefulness. But we are often drawn to pursuing the truth even when we believe that what we are likely to learn will make our lives worse. Since there are scenarios where people are still compelled to seek the truth even when true beliefs lack instrumental usefulness, truth must have at least a little non-instrumental value. Or at least, this explanation appears to better capture our considered judgments about pursuing the truth than a merely instrumental account.

Suppose that this non-instrumental account of truth’s value is correct. What follows for McCormick’s view? In some respects, it might not have a huge impact on her position. She describes her anti-evidentialism as “modest” and claims, “Most of the time, if one believes against the evidence, one is doing something wrong” (129). I am inclined to think that exceptions to evidentialism will be rarer on this non-instrumental account than on McCormick’s because the non-instrumental account implies that some value is always lost when one endorses a false belief rather than a true one. But without greater specificity on what McCormick means by “most of the time,” the relative degree to which exceptions to evidentialism are permitted on each account is difficult to determine.

A bigger implication of endorsing a non-instrumental account would be that part of truth’s value is uniquely epistemic in nature. As a result, it would be more plausible for at least some of the norms governing what we ought to believe to be solely epistemic in nature, which would make it more difficult for McCormick to collapse epistemic norms into moral and prudential norms. This result could prove quite significant since a central task in other chapters of Believing Against the Evidence is defending the claim that the ethics of belief is not different from ethics in general—that the same considerations that guide our actions should likewise guide what we should believe and how we should approach epistemic inquiry. For this reason, I believe McCormick needs to provide a stronger argument against the non-instrumental account of truth’s value if she wishes to maintain her overall position.

References

Descartes, René. 1641 [1996]. Meditations on First Philosophy: With Selections from the Objections and Replies. Translated by John Cottingham. Rev. ed. Cambridge: Cambridge University Press.

Goodman, Nelson. 1955. Fact, Fiction, and Forecast. Cambridge: Harvard University Press.

Kornblith, Hilary. 1993. “Epistemic Normativity.” Synthese 94, no. 3: 357–76.

Kornblith, Hilary. 2002. Knowledge and Its Place in Nature. Oxford: Oxford University Press.

Lynch, Michael. 2005. True to Life: Why Truth Matters. Cambridge: MIT Press.

McCormick, Miriam. 2015. Believing Against the Evidence. New York: Routledge.

Nozick, Robert. 1974. Anarchy, State, and Utopia. New York: Basic Books.

Putnam, Hilary. 1981. Reason, Truth, and History. Cambridge: Cambridge University Press.

Rawls, John. 1999. A Theory of Justice: Revised Edition. Cambridge, MA: Belknap of Harvard University Press.

 


  1. Unless otherwise specified, page numbers refer to those in Miriam McCormick’s (2015) Believing Against the Evidence.

  2. Nelson Goodman (1955, 65–68) appears to be the first philosopher to explicitly describe and endorse this method, though Goodman employed it as a means of justifying principles of deductive and inductive inferences. John Rawls (1999, 18–19, 42–45), however, is responsible for popularizing the term.

  3. The evil demon and dream hypotheses originate with Descartes (1641). The most well-known presentation of the brain-in-a-vat hypothesis likely comes from Putnam (1981, 5–6).

  • Miriam Schleifer McCormick

    Reply to Trevor Hedberg

    Given that I have argued that the value of truth is inextricable from practical value, the question arises of why people pursue and value the truth even when it would not serve their interests, or would indeed harm them. Why would Neo take the red pill? Hedberg argues that to make sense of why I would want to know, for example, that “my perceptions are all a result of systematic deception by an evil demon . . . or I am actually a brain in a vat being fed sense data by an evil scientist to create the illusion of other experiences,” I need to hold that true beliefs must have some non-instrumental value: “Were any of these scenarios true, I think it fair to say that it would cause someone a great deal of distress to learn this truth, and it may well not be possible to do anything to change one’s circumstances. But even when it would hinder someone’s flourishing to learn the truth in these scenarios, it does not strike me as obvious that people will generally want to avoid the truth in these scenarios.”

    I agree that this is so, and I say the following in chapter 2, the chapter that is the focus of Hedberg’s critique: “If we lived in a world where we were radically deceived by Descartes’s evil genius or by the agents of the Matrix, almost all our beliefs would be false. But such a world does not rule out that if one could come to have a true belief that it would possess value. Some would take this to support the view that true beliefs have some non-instrumental value. But the reasons why someone in the Matrix world would want to take the red pill, or why, in general, one may want to possess painful knowledge rather than remain in blissful ignorance may well be reasons grounded in prudential and moral values” (45).

    It is important to remember that when I talk of “practical” or “non-evidential” reasons and values, these go beyond the narrowly pragmatic. Some of what I say in reply to Sullivan-Bissett applies here as well. Our “interest” in truth does not translate into all individual truths serving one’s self-interest. Rather, it is in the interest of the human species that we have cognitive systems that generally produce true beliefs. Being able to correctly distinguish between berries that nourish and berries that poison matters very much. Hedberg is quite right that “most human beings have a strong desire for their understanding of the world to accurately reflect the way that the world actually is,” for having such an accurate reflection, in the first instance, helps us survive. But once the basic needs of survival are taken care of, our truth-seeking tendencies can turn to searching for deeper understanding that can also help make our lives better, richer, more meaningful.

    Why would someone want to learn the truth if they were in a skeptical scenario? Must it be, as Hedberg contends, “that we think it good for its own sake that we have an accurate perception of the way the world is”? The truths we would learn in such situations are non-trivial; coming to understand the true nature of reality and the true nature of ourselves is something deeply intertwined with helping us flourish and be excellent human beings. If we think about the competing metaphysical pictures of reality one finds in the early modern period, one may wonder why it matters which is correct. Whether there is one substance or many makes no difference in how I go about my life. Caring about the truth in such a rarefied domain, one might think, must speak to truth’s intrinsic value. But the proponents of these systems thought that coming to see the truth would have deep and important consequences. Spinoza discusses the “practical advantages” that accrue from the knowledge of his system. He thinks it will lead to contentment, peace, and tolerance. It may well be that the truth-seeking impulse begins as a means to self-preservation but then develops into a quest for self-understanding. While this would still mean that the value of truth is dependent on another value, it is not just a narrowly means-end relationship.

    As an objection to my view, Hedberg claims the following: “Many people live successful and fulfilling lives despite having worldviews grounded in substantially false belief systems. . . . Religious, moral, and political beliefs can radically alter the life plans that people pursue. The problem is that flourishing lives take many forms, and it is clearly possible for some people to flourish even when the worldviews that they have based their lives upon are dominantly false.”

    This is an empirical question, but it strikes me as highly implausible that one can have a successful and fulfilling life if most of one’s beliefs are false. Most of our true beliefs are mundane and unnoticed: that if I step on the brakes my car will stop, that if I eat breakfast I will stop being hungry, etc. People’s political and religious beliefs can radically differ, but on these matters of fact they will agree. Now when it comes to moral beliefs the situation is more complicated. If one takes it that moral judgments can be true or false, then a plausible case can be made for the idea that one cannot live a successful and flourishing life with mostly false moral beliefs. For whatever your preferred moral theory, the rightness or wrongness of an action is tied to a value conducive to flourishing, whether of humanity or of rational agency. Living a life with mostly false moral beliefs will, almost by definition, not be a good one.

    Hedberg claims that a non-instrumental account of truth’s value better captures the considered judgment “truth is valuable” than the instrumental one I offer. I am not convinced that the cases he offers do not fit into the account of truth’s value that I have offered. As I say in my response to Nelson, by tying epistemic value to the practical, broadly construed, we can make sense of why epistemic norms have the force that they do. I have still not seen an adequate defense of the idea that there is an autonomous epistemic domain that can retain the kind of normative force that most theorists think epistemic norms possess.

    • Trevor Hedberg

      How Do We Count Beliefs?

      Having reviewed Dr. McCormick’s response, it occurs to me that one source of disagreement between us may hinge on how we count the beliefs that a person holds. In particular, do implicit associations — the result of custom and habit — count as beliefs?

      One of the claims in my initial response is that people can live flourishing lives even when most of their beliefs are false. I suspect whether this claim is true will hinge considerably on what we count as a belief. Miriam responds by claiming that “Most of our true beliefs are mundane and unnoticed; if I step on the brakes my car will stop, if I eat breakfast I will stop being hungry etc.” Here arises an interesting question: are all these kinds of things really instances of beliefs?

      Most of our repeated behaviors result from associations that are made unconsciously. When we do things without conscious reflection, there is a strong case that these shouldn’t be counted as beliefs. Believing something generally requires some degree of conscious endorsement — doing something repeatedly without conscious reflection does not always indicate that a person really believes something. Granted, we can bring some of these associations to the forefront of our thinking and reflect on them. Then I might say, “Yes, I believe that if I press the brake down, my car will gradually stop.” But this is not the normal way of thinking about the matter, and our brains are hardwired to process most of these mundane activities without us ever being conscious of it. Thus, I’m not sure every single one of these instances should count as a belief: in many cases, the conscious reflection required to form a belief plays no role in behaving in the way conducive to our survival. (This isn’t unique to people: many animals, such as crabs or fish, behave in survival-conducive ways despite having very limited cognitive capacities. They can’t even form beliefs but still manage to stay alive.)

      Worldviews, in contrast, are usually produced in large part through conscious deliberation, and as I’ve noted, they can be radically false. A sufficiently complex worldview can contain an awful lot of false beliefs. And given the diversity of worldviews, many of them must be primarily composed of false beliefs.

      This doesn’t amount to a full defense of my claim that people can live flourishing lives even when they have more false beliefs than true beliefs, but I hope it provides some reason to think that false beliefs are more prevalent than Miriam suggests in her reply.

    • Trevor Hedberg

      How Common Is a Flourishing Life?

      Miriam, I am curious how common you think it is for a person to live a flourishing life. Your remarks in the second-to-last paragraph suggest to me that you would think that virtually no one meets this standard. Peer disagreement alone ensures that the majority of philosophers have a large number of false moral beliefs and that some of them have radically false moral beliefs. And for most people, their moral worldviews will be a hopeless melding of inconsistent, often false beliefs. So if you think it’s impossible for a person with mostly false moral beliefs to live a flourishing life, then I think the vast majority of people are out of luck.

      I would think the standards are more minimal. There may be certain views — about the wrongness of killing, lying, stealing, etc. — that one really must hold. But having, say, the right view about abortion, the moral status of animals, or the moral permissibility of restricting immigration? I don’t think being wrong about these matters or the myriad of similarly complex moral issues we regularly confront would preclude a person from living a flourishing life.

    • Miriam Schleifer McCormick

      Do beliefs require conscious endorsement?

      Thanks, Trevor. In a response to Dustin, I talked a little bit about what counts as a belief, and I tend to be very inclusive in my classification. It’s interesting that the kinds of “holding true” attitudes that tend to get excluded from the belief family are those that are unresponsive to evidence, like delusions or religious beliefs, not these mundane background beliefs. One important feature of a belief is that it has a particular functional profile that includes dispositions to behave as well as dispositions to endorse. If beliefs required “conscious endorsement,” it would seem that only occurrent beliefs count as actual beliefs; this is a very radical view. It would seem then that I don’t really believe 2+2=4 unless I am thinking about it. The belief that if I press on the brakes, my car will stop is actually a pretty complicated inference that needed to be learned, not such an automatic association.

    • Miriam Schleifer McCormick

      Degrees of Flourishing?

      I am really not sure what I think about this. If full flourishing requires excellence in all aspects of agency, then I imagine it is very rare. I am inclined to think it comes in degrees, but I think I would like to stick by my claim (though I am not sure I will ever get around to defending it) that “one cannot live a…flourishing life with mostly false moral beliefs.” Though perhaps it is better to say that the more false moral beliefs you have, the farther you are from full flourishing, that having such false beliefs detracts from your flourishing. If one has the wrong view about abortion, the moral status of animals, or the moral permissibility of restricting immigration, then this can well result in doing a lot of harm or failing to live up to one’s capacities as a rational agent.

Response

Evidence and Emotion

The governing project within Believing Against the Evidence: Agency and the Ethics of Belief is to give an account of the ways in which individuals develop beliefs that are contrary to evidence. What distinguishes McCormick’s evidentialism from other familiar brands is the foundation of a more pragmatic approach to epistemic concerns. Rather than launching a theory of evidentialism from epistemic ideals or notions of doxastic virtues, McCormick proposes that we begin our inquiry into the ethics of belief with a pragmatic methodology. Pragmatism, for McCormick, proposes “the opposed view that some non-evidentially based beliefs are permissible and that doxastic norms are not wholly evidential” (1). The approach suggests that we must turn to our doxastic practices in order to more fully grasp the nature of belief and doxastic responsibility. This is a project that not only examines doxastic practices and responsibilities but also blurs the lines between ethics and epistemology. McCormick argues strongly for an examination of believing that is not bereft of action or ethical concerns.

While I agree with McCormick’s conclusions that inquiries into our doxastic practices should not and cannot merely operate on an epistemic level and that “we cannot believe at will, but we still maintain some activeness within the beliefs that we form,” I am extremely leery of the steps taken to arrive at these conclusions (8–9). The primary focus of this response is the foundational role of a pragmatic methodology in shaping our doxastic responsibilities. When is an individual considered a member of a community, which renders them subject to particular doxastic norms? Does one have to recognize themselves as operating within such a space? Must the community at large identify the individual agent as a member of the community?

The majority of the cases I have in mind are instances involving Trump supporters who view political issues such as Black Lives Matter, Trans Rights, Reproductive Rights, or Immigration Rights as mere ramblings of “political correctness liberals.”[1] That is to say, what I perceive as evidence supporting some of these issues, other individuals view as mere emoting. I believe such cases of Trump supporters are of the relevant kind to McCormick’s concerns in this book.[2] The views of these supporters are presented as facts—statements of truth pertaining to our external world, some of which offer up “evidence,” be it about climate change, immigration, or evolution. It seems that we want to hold them responsible for holding non-evidential beliefs. Believing that climate change is merely a change of weather in which humans play no role, or that “immigrants are stealing American jobs,” or that “evolution is an elaborate ruse orchestrated by the devil,” is believing badly. McCormick does not cash out an answer to the question “What ought one believe?” in a strictly evidential manner; rather, she relies on pragmatic notions to develop an account of doxastic agency and doxastic responsibility. So in order to determine how we can, or if we should, hold them responsible, we need to first turn to our doxastic practices already in place. But how do we determine who is and who is not within our community? If we rely on pragmatic approaches, that is, looking to see what the community does, then I believe many problems will arise. My concern is that epistemic oppression will occur; moreover, the people we hope to hold accountable will not be found responsible in a doxastic sense, because “that’s how our society now plays the game” or “they weren’t aware.”

For pragmatist Robert Brandom, in order to qualify as a player, you have to be perceived as a player. If you are not perceived to be playing the game of giving and asking for reasons, then nothing you communicate will be intelligible to an attributor. Members of your community must see you as a member. For Brandom, you are either in or you are out. When I make an assertion and it is recognized by another as an assertion (they grant me entitlement and conceive of me as making a commitment), then I am implicitly being authorized as a “knower.” Brandom states, “When we grant another individual a commitment and to be entitled to that commitment then we ‘treat the sort of claim involved in asserting as an implicit knowledge claim’” (MIE, 200, emphasis in original). McCormick runs a similar line, however, with a self-reflective Strawsonian twist—i.e., “as seeing an agent as responsible if he is an apt candidate for a reactive attitude” (87n1). McCormick states, “By viewing oneself as an appropriate target for the consequence of a particular mechanism (say, ordinary practical reasoning), one thereby takes responsibility for it and the behavior resulting from it” (112, emphasis my own). When one sees oneself as an “appropriate target,” or as a particular member within the community that has certain norms of reasoning and reason giving, then one takes up the mantle of responsibility within doxastic practices. McCormick describes this as taking ownership, which “extends to future operations of the mechanism” (112). But what happens when the Trump supporter (1) does not see me as an equal player in the game of giving and asking for reasons, and therefore does not see my evidence qua evidence, or (2) does not see themselves as an appropriate target for particular doxastic responsibilities (or even my reactive attitudes, for that matter)?

A quick response to these concerns would be to highlight that one need not view themselves as appropriate targets consciously, that “the ways we react to others and feel about ourselves reveal whether we have taken responsibility for the mechanism in question” (113). An individual’s facial expressions, tone, or body language could help to indicate that they view themselves as appropriate targets for the social rules regarding doxastic responsibility regardless of whether or not they knowingly see themselves as beholden to those rules. If they do not see themselves as responsible on either an intellectual or affective level, then blame becomes very difficult to assign. McCormick states,

That some notion of control is in play when assigning blame to beliefs is reinforced if we consider when and why we mitigate such blame. If you cannot make your higher order judgments effective about how you ought to believe, there is a sense in which your belief is no longer your own; you are divided and overpowered. I would blame you less if you really are compelled to believe against your better judgment. (114)

If we use the example of a racist Trump supporter, it is very plausible that the only frame of reference they have is one that is laden with racial ignorance.

As Strawson notes, we tend to attribute ignorance to those who “didn’t mean to,” “didn’t realize,” “didn’t know,” “weren’t aware,” “couldn’t help it,” or “had no alternative frame of reference” (7–8). How can we engage with racist agents who find the communication of, and reasons for, particular evidence unintelligible? If we are to take up these excusing conditions, then we are unable to account for such individuals, and these seem to be the very individuals whom we hope to be able to blame for believing badly. But McCormick argues that we are more likely to blame them less if “the belief isn’t really their own.”

One could say that such ignorance is actively willful, which would not fall under McCormick’s exceptions, and that it involves a hermeneutical lack; however, it is not a conceptual gap in the sense Miranda Fricker describes, where the conceptual tools for individuals to accurately understand particular wrongdoings are lacking within society at large (1). Gaile Pohlhaus Jr. offers an account of willful hermeneutical ignorance where “marginally situated knowers actively resist epistemic domination through interaction with other resistant knowers, which dominantly situated knowers nonetheless continue to misunderstand and misinterpret the world” (716). Resources are made available by the marginalized groups; however, dominantly situated knowers still actively possess and maintain hermeneutical gaps. However, I am not so sure that individuals are actively being ignorant within their meta frameworks. I believe resources such as testimonial experiences, scientific data, and academic theories are at the individual’s disposal and the individual has the capacity to distinguish between varying degrees of evidential justification; however, these resources are not seen as resources. They are either unintelligible or seen as emotive nonsense. Nonetheless, the individual still remains ignorant about a particular epistemic object. José Medina describes this phenomenon as meta-level ignorance, which is “produced by meta-attitudes that limit our abilities to identify and correct our ignorance about others” (149). He situates such ignorance as a passive first-order ignorance, which is the realm McCormick is concerned with regarding doxastic blame.

McCormick contends that if an individual, such as a racist Trump supporter, should consistently find themselves in a position where their beliefs are being shown to be incorrect, then they should correct the reasoning that is causing them to believe badly. “If I find this mechanism is regularly leading me astray, something is wrong with me; it is not appropriate for me to insist ‘but these beliefs result from perception over which I lack control and so it is not my fault that I keep forming false beliefs’” (115). But this approach requires that the racist Trump supporter first recognize that they are believing badly, as opposed to everyone else just spewing liberal propaganda. As Benjamin Sherman notes, it is not necessarily “obvious that we are equipped to directly identify correct credibility judgments, if our pre-reflective faculties of judgment are biased” (14). McCormick stresses that there is a presumption that we check to make sure our reasoning mechanisms are not going awry; however, this is on the assumption that individuals such as the racist Trump supporter view particular players within their epistemic sphere as reliable knowers. If they are prone to view assertions from Black Lives Matter activists as mere emoting, then they will be unable to see their mechanism as failing.

I am very sympathetic to the overall project within this book. The twisting and fabrication of “facts” within this most recent presidential election cycle should raise several alarms for those of us who work in epistemology. However, I do not believe that a pragmatically driven methodology for working out doxastic responsibility will yield results we find satisfying. With a pragmatic approach, the community becomes the barometer of the practices. And within this epistemic climate, I find that troubling. Issues arise when assertions are being made but, for prejudicial reasons, lack of intelligibility, or lack of hermeneutical resources, the hearer/attributor does not see the speaker/asserter as making assertions; the speaker is instead taken merely to be expressing, which bears no epistemic capital. Moreover, if we rely on a more second-personal standpoint, or Strawsonian conception of responsibility, then I highly doubt those we hope to hold responsible will in the end see themselves as responsible.

References

Berenstein, Erica, Nick Corasaniti, and Ashley Parker. 2016. “Unfiltered Voices from Donald Trump’s Crowds.” New York Times, August 3, 2016. Video, 3:11. https://www.nytimes.com/video/us/politics/100000004533191/unfiltered-voices-from-donald-trump-crowds.html. Accessed December 21, 2016.

Brandom, Robert. 1994. Making It Explicit: Reasoning, Representing, and Discursive Commitment. Cambridge: Harvard University Press.

Fricker, Miranda. 2009. Epistemic Injustice: Power & the Ethics of Knowing. Oxford: Oxford University Press.

McCormick, Miriam Schleifer. 2015. Believing Against the Evidence: Agency and the Ethics of Belief. New York: Routledge.

Medina, José. 2013. The Epistemology of Resistance: Gender and Racial Oppression, Epistemic Injustice, and Resistant Imaginations. Oxford: Oxford University Press.

Pohlhaus Jr., Gaile. 2014. “Discerning the Primary Epistemic Harm in Cases of Testimonial Injustice.” Social Epistemology: A Journal of Knowledge, Culture and Policy 28, no. 2: 99–114.

Sherman, Benjamin R. 2015. “There’s No (Testimonial) Justice: Why Pursuit of a Virtue Is Not the Solution to Epistemic Injustice.” Social Epistemology: A Journal of Knowledge, Culture and Policy, online, 1–22.

Strawson, Peter. 1974. “Freedom and Resentment.” In Freedom and Resentment and Other Essays. London: Methuen.

[1] The New York Times featured a video containing unfiltered commentary from Trump supporters at his political rallies. Some of the comments included “The safest place in the world to be is at a Trump rally” and “Muslim is not a religion partner, it is an ideology” (Berenstein, Corasaniti, and Parker, 2016).

[2] McCormick states, “It is not the religious beliefs that I find disturbing or bizarre; these are familiar enough when living in one of the most deeply religious countries in the world. Rather, it is when highly contentious views are presented as facts that I am amazed. Listeners are told there is no evidence at all for human-caused climate change, that there are no similarities between the civil rights movement and the gay rights movement, and that the theory of evolution contradicts basic laws of nature” (xi, emphasis in original).

  • Miriam Schleifer McCormick

    Reply to Tempest Henning

    I am very grateful to Tempest Henning for homing in on one of the most difficult challenges for any account of doxastic responsibility that includes some kind of control condition: who gets excused, and why? Who is included in the practice of taking and holding responsibility, and how does one get excluded from the community of responsible believers?

    Henning is right that the kinds of concerns she raises about fundamental disagreements, where parties do not even seem to agree on what counts as evidence, are the ones that motivate my project. In the preface to my book, I refer to the bewilderment I felt when listening to talk radio stations where highly contentious views are presented as facts, and to how I was at a loss as to how to respond to a student who didn’t believe in evolution.

    The climate we are now in has exacerbated these concerns enormously. When one can look at two photographs with differing crowd sizes but deny that the one with the bigger crowd is actually bigger, that is, when it seems even perceptual evidence does not affect beliefs, we may feel that any chance to engage, discuss, and hold each other accountable has been lost, that there really are distinct conceptual schemes in operation, and that to even talk about “our” doxastic practices that transcend local community practices is either idealistic or hegemonic. But, of course, when faced with examples of this kind, we want resources to show that reproach is warranted, that to ignore evidence or to rely on “alternative facts” in forming beliefs is to believe badly.

    Henning takes as her example the “racist Trump supporter,” and she is concerned that even if there are “resources such as testimonial experiences, scientific data, and academic theories . . . at the individual’s disposal and the individual has the capacity to distinguish between varying degrees of evidential justification . . . these resources are not seen as resources. They are either unintelligible or seen as emotive nonsense.” Within their framework, such supporters will never come to see their beliefs as false or their reasoning as faulty, and so my view that we have a responsibility to monitor our reasoning mechanisms, and are responsible for the outputs of faulty mechanisms, will not apply, because such an individual will not see the mechanism as faulty: “If they are prone to view assertions from Black Lives Matter activists as mere emoting, then they will be unable to see their mechanism as failing.”

    The first thing to note about Henning’s concern is that the “community” I take to be the barometer of epistemic practices is the community of believers, not any subset of believers. Taking responsibility is not a single act that one chooses to do or fails to choose to do. The price of failing to take responsibility is high and not one that many people would be willing to incur. In viewing oneself as an agent and as an appropriate participant in the family of reactive attitudes, one thereby takes responsibility. If one does not see oneself in such a way, one would be cut off from most meaningful human relationships; it requires one to relinquish autonomy and to remain a fragmented self.

    What we need to ask, when thinking about whether the racist Trump supporter is responsible, is whether such a racist could or should believe differently. One of the main reasons I have given for keeping a control condition for doxastic responsibility, and why it is preferable to what I have called “character-based” accounts, which eschew the control condition, is that those accounts cannot explain when we excuse or mitigate blame. If responsibility is based on character, moral personality, or one’s evaluative commitments, these all seem contingent on factors wholly beyond one’s control, like upbringing and environment, and, at a certain point, are extremely difficult to alter. On such accounts, history may matter for the assessment of the kind of person you are but does nothing to mitigate responsibility for the belief you now hold. For the ownership account of responsibility that I endorse, one’s history matters in terms of assessing responsibility.

    To see how and why reproach for the racist Trump supporter is likely justified, consider four people who have false, racist beliefs. To make the discussion more focused, imagine each of these four people believes the same racist proposition, one which was espoused by many people, even many seemingly enlightened intellectuals, in the eighteenth-century United States: that “black people, as a race, are rationally inferior, with more limited cognitive capacities, than other races.”

    The first person, Min, is living in a totalitarian regime where information is carefully crafted to indoctrinate the inhabitants with false beliefs, starting at birth. Unfortunately, we do not need science fiction to find such a scenario. Inhabitants of North Korea are systematically indoctrinated with false beliefs, such as that their recently deceased leader Kim Jong Il was born on Mt. Paektu, which is considered a holy mountain, and was indeed divine, rather than in the former Soviet Union, where he was actually born. Besides such elaborate tales, the history taught is full of fabrications, such as the claim that the South invaded the North in 1950. Their access to evidence that would reveal such beliefs to be false is also carefully limited and monitored. Finally, as in Orwell’s 1984, deviation from the accepted beliefs can result in severe punishment. While it is hard to gauge how many of the falsehoods are actually believed, it seems that the leadership has been more successful than other regimes, largely due to isolation. While the racist proposition we are considering would not likely arise for the actual North Korean population, we can imagine a not terribly far-off possible world where it would. Much of its own population (those imprisoned) is already used as slave labor. Imagine that increased policies of isolation did damage to the economy so that more slave labor was needed to keep the royal palaces from crumbling. Then imagine (this is the far-fetched bit) that a nomadic tribe of black people was captured and enslaved. If the experts and authorities then told the population that these people, who would look very different from Koreans, were actually not quite people, but creatures of lesser intelligence, as we tend to perceive non-human animals, we can imagine that many people, including Min, would believe it.

    Our second person to consider is not a hypothetical person. It is Thomas Jefferson. He wrote about the races: “Comparing them by their faculties of memory, reason, and imagination, it appears to me, that in memory they are equal to the whites; in reason much inferior, as I think one [black] could scarcely be found capable of tracing and comprehending the investigations of Euclid; and that in imagination they are dull, tasteless, and anomalous.” He provides many reasons for thinking that these differences are inherent in the race, not due to circumstances; he believed that the cognitive capacities of the “black race” were not the same as those of other races.1

    For our final pair I will use the names Abigail and Bert, echoing Angela Smith’s discussion of a similar pair of cases:

    Abigail is someone living in the present day in the United States (or any other relatively liberal democratic state), but she grew up in an isolated racist community. Smith describes her like this: Abigail developed “evaluative tendencies and corresponding attitudes in line with those she sees operative in her family or surrounding community. As an adult, her attitudes may continue to reflect the vicious evaluative judgments thus formed in her childhood.” Finally, Bert lives not far from Abigail, but he grew up in “a loving and tolerant home and community, but later in life reflectively comes to adopt racist and intolerant values.”

    Any account of doxastic responsibility should be able to distinguish between these cases, and the degree of mitigation of blameworthiness should decrease from case 1 to case 4, with perhaps a complete excuse in case 1 and full responsibility in case 4. I think part of our tendency to mitigate blame in a case like Abigail’s has to do with our questioning whether her mind is as self-constituted as Bert’s. And once we get to Min, a full-blown excuse seems appropriate for the same reason. Part of what matters is that Min has not constituted her own mind the way that the others have. It is not clear that her attitudes, including her beliefs, are as clearly hers.

    Jefferson presents rational grounds and evidence for his belief, appealing to widely accepted scientific theories. Still, those living in Jefferson’s time who were able to see these theories as fallacious and these beliefs as unfounded are given some praise. To see why such a person is responsible (and perhaps praiseworthy), and why some degree of mitigation of blame is appropriate in Jefferson’s case (though not as much as in Min’s), we need some notion of freedom and control. The “freethinker” of the eighteenth century is exactly that: able to exercise a kind of control over what she believes independent of the dominant forces exercising control upon her.

    One of the reasons we would blame Jefferson, Abigail, and Bert (and Henning’s racist Trump supporter) for having this belief is that they are not believing the way they ought to believe. There is a sense in which this is also true of Min, if we take it that there is an objective, normative reason to believe the proposition false. But even if we adopt some kind of subjective understanding of “ought” here, each of the other three agents could exercise more control over their doxastic lives such that they came to see the truth. Min is excused because her powers of doxastic control have been taken away. For each of the other cases, there is some degree of blame for their failures in control, but Jefferson’s and Abigail’s powers have been partly hijacked, while Bert seems to have no excuse for not believing how he ought to.

    One final point of clarification: Henning claims that I argue for evidentialism on a pragmatic foundation, but I see myself as arguing against evidentialism because I think not all reasons for believing are evidential.



    1. Thomas Jefferson, Notes on the State of Virginia, Queries 14 and 18, 137–43, 162–63, chapter 15, 1784, http://press-pubs.uchicago.edu/founders/documents/v1ch15s28.html.
