Symposium Introduction

The aim of Patrick Bondy’s Epistemic Rationality and Epistemic Normativity is to explain the nature and the normative force of epistemic reasons and rationality, from the perspective of access-internalist evidentialism about epistemic rationality. The two central questions the book addresses are: why are epistemic reasons evidential? And, what explains the normative force of epistemic reasons and rationality?

Bondy’s initial strategy for answering the two central questions of the book proceeds on the basis of an evaluation of what he calls the Guidance and Transparency principles. Guidance is the principle that normative reasons for belief or action must be able to guide subjects in their deliberations about whether to have the belief or perform the action in question; Transparency is the principle that we can only consider what we take to be evidence for p in deliberating about whether to believe p. The conjunction of these principles seems a good path to answers about why epistemic reasons are evidential. However, Bondy accepts Guidance but rejects Transparency, and so he argues that this strategy fails. So another path must be cut for the explanation.

Bondy argues in defense of a direct form of acceptance-voluntarism. This stands in contrast to belief-voluntarism. Belief involves a kind of feeling or a disposition to feel that a proposition is true, or probably true; so, since feelings are not directly subject to the will, belief is not directly subject to the will. Acceptance, however, involves taking up or being willing to take up a proposition as true in one’s deliberations, and that is subject to the will. Belief and acceptance normally go hand-in-hand, but they can come apart, and when they do, acceptance is what we should be interested in, from an epistemic point of view.

With the acceptance-voluntarism in place, Bondy considers another path to the evidential nature of epistemic reasons. He sets out the instrumental conception of the nature of epistemic rationality, according to which beliefs are epistemically rational just in case holding them is an appropriate means to take for achieving some epistemic goal, such as the goal of achieving true beliefs and avoiding false beliefs. Although this program seems very appealing, Bondy argues against the instrumental approach to the nature of epistemic rationality: it fails to get the extension of epistemically rational and irrational beliefs right, and it generates vicious regresses of instrumentally rational beliefs in every case in which a subject has a rational belief or performs a rational action. The question remains: how can evidentialism be true?

To close, Bondy proposes a deflationary explanation of why epistemic reasons are evidential. Briefly, the explanation is that to every category of reasons, there corresponds a kind of rationality; evidential reasons are one kind of reason; so there is an evidential kind of rationality, which we call “epistemic rationality.” In this regard, Bondy’s argument is a modified form of the instrumental model of the normativity of epistemic rationality. The epistemic rationality of our beliefs does not depend on the goals that anyone has or ought to have, but whether we ought to have the beliefs that our epistemic reasons support does depend on whether we have normative reason to get to the truth with respect to the propositions in question. Normally we have such a normative reason, but sometimes we don’t.


Response

On Bondy’s Conception of Reasons

In Epistemic Rationality and Epistemic Normativity (Routledge, 2018) Patrick Bondy seeks to explain two points: “why epistemic reasons consist of evidence,” and “the normative force of epistemic reasons” (2). For Bondy, while the nature of epistemic rationality is explained non-instrumentally, the normativity of epistemic reasons is explained instrumentally insofar as they track some valued property (e.g., truth-conduciveness). Bondy’s explanation of the first point is deflationary (136ff.): “to each category of reasons, there is a corresponding kind of rationality” and “the category of epistemic reasons . . . just is the category of reasons which are evidence” (137, emphasis added). Evidence for something is that which counts in favor of its being true: “An evidential reason, R, for believing that p is a reason which indicates that p is true” (37).

Bondy conceives of epistemic reasons within a framework of practical reasoning, as a kind of reason for doing something—namely, believing that p.1 The question then arises as to what sorts of things reasons are. Candidates include: (i) introspectable mental states (e.g., beliefs / propositional attitudes); (ii) (true) propositions (truth-bearers; the contents of beliefs); and (iii) (obtaining) states of affairs (truth-makers; the objects of beliefs). Premising that their motivating aspect requires reasons to be accessible to deliberating subjects (thereby satisfying the guidance requirement [42ff.]), Bondy concludes that reasons themselves must be mental states. “For it to be possible that a normative reason also be a motivating reason, the very same thing must be able to play both the normative and the motivating roles” (26).

These views give rise to two apparent problems.

Problem I: The Incongruity of Relata in the Reason and Evidence Relations

According to Bondy, epistemic reasons are evidence (137): “the sorts of reasons required for knowledge . . . must bear on the truth of one’s beliefs—and that means that these reasons are just what we typically call ‘evidence’” (138, emphasis added).2

Yet, consider the reason relation, R(r, j) (read: “r is a reason for j”). According to Bondy r is a mental state and j is an action—specifically a believing. (Thus, “R(r, j)” is more correctly read as “r is a reason to j” or “r is a reason for j-ing.”) When true in a motivating context, “R(r, j)” means that the intentional explanation of S’s j-ing includes r. When true in a normative context, “R(r, j)” means that r’s being so creates a prima facie obligation for S to j.3

Contrast this with the evidence relation, E(e, h) (read “e is evidence for h”), where evidence is something that indicates, or counts in favor of, the truth of something (else) (37). Prima facie, actions are not truth-apt; hence values for j cannot be values for h.
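To make the type mismatch explicit, the two relations can be rendered schematically as follows (the sortal predicates are labels of my own, not notation drawn from Bondy):

\[
\forall r\,\forall j\ \big[\,R(r,j)\ \rightarrow\ \mathrm{MentalState}(r)\ \wedge\ \mathrm{Act}(j)\,\big]
\qquad
\forall e\,\forall h\ \big[\,E(e,h)\ \rightarrow\ \mathrm{TruthApt}(h)\,\big]
\]

Since acts are not truth-apt, \(\forall j\,[\mathrm{Act}(j) \rightarrow \neg\mathrm{TruthApt}(j)]\), nothing that occupies the j-position of the reason relation can occupy the h-position of the evidence relation.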

Suppose we give up Bondy’s practical reasoning framework, and take belief-states (beliefs), rather than acts of belief (believings), to be the values of j.4 Beliefs are truth-apt, albeit derivatively. Yet, this repair will not suffice; there remains a problem with the values of r being values of e. Belief-states are takings-to-be-true. And, except in special cases, takings-to-be-true do not count in favor of the truth of other things, even other takings-to-be-true. Rather, it is the truth of what we take to be true, not our taking them to be true, that counts in favor of other things being true. Thus, if the relata in the reason relation are belief states, they do not satisfy the evidence relation.

If they take relata of different kinds, is-a-reason-for and is-evidence-for are not relations of the same kind. Thus, neither can be an instance of the other, and even a deflationary identification of epistemic reasons with evidence fails.

Problem II: Reasons as Common Currency

Bondy wants “a theory of reasons according to which, at least in normal cases, normative and motivating reasons can coincide: they can be one and the same thing” (26). Yet, there are other apparent desiderata for a theory of reasons.

Reasons can be given, shared, transacted, and held in common. When we argue, we seek to make our reasons available to others. When we are persuaded by an argument, we accept its conclusion on the basis of the reasons appearing in it. Thus, we can believe the same things for the same reasons (e.g., when we are each persuaded by the same argument). Seemingly, the motivating reasons for which we believe something can be the same.

Yet, if reasons are analysed as particular belief states, they cannot be shared on an ordinary understanding of belief. Granted, I can express my beliefs by stating what I believe. But, in expressing my beliefs, I do not make them available to you in such a way that you may use them as your own. (It is not my belief state that I make available to you to use as your own, but rather what I believe—the contents of that state.) Yet, making my reasons available for your use is precisely what I seek to do when I offer you my reasons. While reasons are inherently “public property,” beliefs are, so to speak, inherently “personal property.”

Not only are reasons public property, they are common currency. Our practices of transacting reasons presuppose the universality of their rational (e.g., epistemic) value. Yet, statist accounts leave the universality of normative properties of epistemic reasons (i.e., evidential relations) unexplained. If evidential relata are particular belief states, then why, ceteris paribus, does your belief that p enter into all and only the same evidential relations as my belief that p? Why is your belief that p a (good epistemic) reason to believe that q in all and only those (cognitive and worldly) circumstances where my belief that p is?5

A Possible Solution

A convenient answer to these different problems is that the common elements of our different beliefs that p—their contents and objects—not only supply the proper kind of relata to enter into evidential relations, but also account for the public accessibility and transactability of reasons together with the stability of their evidential value across persons. Specifically, a belief’s truth conditions are transparent to, and dependent on, the truth conditions of its contents and the obtaining of its object.

There is a difference between having the same beliefs and believing the same things. The sameness of what we believe, rather than our beliefs, explains the common possession and value of reasons. And, it is the things that we believe, not our beliefs in them, that properly satisfy the evidence relation. When I offer you my reasons, I offer you my take on things in the hope that you will be persuaded to take things in the same way as I do. It is the way I take things to be, not that I take them to be so, that constitutes my evidence and which, thereby, ought to have persuasive epistemic force for you.

While it appears differently from the third-person and first-person-corrective perspectives, the situation is the same for the motivating aspect of reasons. I settle upon a belief or course of action by taking into consideration apparent truths; I account for my beliefs and behavior by pointing to the truth of certain things. Of course, in order that I consult or cite those things they must be apparent or available to me. Yet, that reasons must be believed to play a motivating role needn’t make them beliefs. Even though reasoning involves transitions among beliefs, neither in deliberation nor in justification do I consult or cite my beliefs per se; rather, what I take to bear on the matter is what I believe, not that I believe it. (Offering my reasons to you and offering them to myself are analogous in this respect.) Of course, sometimes I’m mistaken in what I believe; sometimes things seem true to me but aren’t. Yet, even in those cases, my beliefs and doings were not unmotivated; rather, we now judge them to be mistakenly motivated, though perhaps excusably so. When using the language of motivating reasons from a third-person or first-person-corrective perspective, we require a way of giving a reason without endorsing that reason. One way we employ the language of belief is to do just this: we express that some subject, S, has a commitment to p without taking on that same commitment ourselves. It is only in derivative cases like these that we cite beliefs themselves, rather than what is believed, to do the explanatory or (subjective) justificatory work we require of reasons.

Thus, an account of reasons as the public elements of beliefs—their contents and/or objects—seems to better answer the needs of a theory of reasons where epistemic reasons are analysed as evidence.6


  1. This approach seems to be a consequence of Bondy’s endorsement of a strong doxastic voluntarism over the active elements of believing, which he takes to be necessary for our properly taking reactive attitudes of praise and blame towards doxastic (and hence epistemic) states of believers (ch. 4).

  2. I read Bondy here to endorse the conditional “If you have an epistemic reason, then you have evidence,” but not its converse “If you have evidence, then you have an epistemic reason.” If only items satisfying the guidance requirement are reasons, someone can have indicator evidence without thereby having an epistemic reason, if the evidential value of the evidence is not apparent to them, e.g., because they lack other reasons upon which the epistemic value of the evidence depends. Thus, the same piece of evidence may give you an epistemic reason without thereby giving me one.

  3. Bondy’s expression of the normative force of reasons in obligatory, rather than permissive, terms may derive from his evidentialism, according to which one ought to accord the strength of one’s belief to the strength of one’s evidence.

  4. Repairing this while retaining the practical reasoning framework would involve reconceiving of the nature of evidence; perhaps evidence is whatever permits (or obliges) certain believings.

  5. An option for Bondy here might be to claim that reasons are multiply-instantiable types of belief states, such that many states of belief can token the same type of belief state. Nevertheless, if the normative aspects of reasons are explained solely by their type-specific properties, while their motivating aspects are explained by their token-specific properties, we do not yet have the desired identification.

  6. I offer my thanks to Matt McKeon for his insightful and constructive comments on a draft version of these remarks.


    Patrick Bondy

    Reply

    Mental States as Reasons and as Evidence

    I have argued (i) that the doxastic attitude of acceptance is in principle voluntary, and (ii) that reasons in general, and epistemic reasons in particular, are constituted by mental states. I have also proposed (iii) that epistemic reasons consist of evidence. In his commentary, Godden provides two main lines of argument against these claims. The first one targets (i)–(iii) together, while the second targets (ii).

    1. The Relata of Evidential Support Relations

    In his first line of argument, Godden makes two key points:

    (i) Because I locate doxastic attitudes in a broader practical reasoning framework, and I hold that doxastic attitudes (acceptances, or beliefs loosely speaking) are under our voluntary control, the mental phenomena in which I am interested are more like dynamic actions than continuing states. But actions are not truth-apt, and only what is truth-apt can be evidentially supported. So we need to switch to an epistemology of (static) belief rather than (dynamic) believing, since beliefs are at least derivatively truth-apt.

    (ii) Even if mental states can be evidentially supported (derivatively), they cannot themselves be evidence. As Godden writes: “Belief-states are takings-to-be-true. And, except in special cases, takings-to-be-true do not count in favor of the truth of other things, even other takings-to-be-true. Rather, it is the truth of what we take to be true, not our taking them to be true, that counts in favor of other things being true” (Godden’s emphasis).

    In response to (i), I do conceive of doxastic attitudes as existing in a broader practical reasoning framework, given that I take acceptances to be in principle responsive to non-epistemic reasons, and under our direct voluntary control. But I don’t think of acceptances as actions; I think of them as attitudes, albeit voluntarily held ones. There is sometimes a mental action involved in S’s coming to accept that p (but usually there is not: normally, acceptance is automatic); but once S has accepted p, that remains a doxastic attitude of S’s, one that S need not actively think of, but which remains in principle subject to S’s direct voluntary control.

    In response to (ii), I don’t think that the truth of what we take to be true is what really counts in favour of other things being true. For I think that the relation of counting-in-favour is always counting-in-favour for some subject. And the truth of propositions that S doesn’t take to be true cannot count in favour of the truth of other beliefs for S, nor can they be taken into account in S’s deliberations. S’s assignment of truth, or at least of a probability, to propositions can count in favour of the truth of other propositions for S, and can enable S to take propositions up in deliberation. In my view, the actual truth of a proposition by itself doesn’t count in favour of or against anything.

    Further, it sometimes happens that S takes up a false proposition p in S’s deliberations, which S is justified in taking to be true. In such cases, the truth of p cannot be what counts in favour of the truth of the target of S’s deliberations, q—for p is not after all true. Rather, in such cases, it is p’s presumed truth, i.e., it is S’s taking p to be true—or perhaps better, it is S’s justifiedly taking p to be true—that counts in favour of the truth of q for S.

    To be sure, the logical and probabilistic relations of the contents of our mental states are also essential parts of what constitutes our evidence. A mental state M is only evidence for a doxastic attitude D if the content of M is appropriately related to the content of D. Granted, we normally do talk as though M’s propositional content, m, is by itself evidence for D, or for D’s propositional content, d. But as I see it, that is only an approximation of the truth. A better picture would be that M is the evidence for D, in virtue of (a) M’s positive epistemic standing for S, (b) M’s truth-value- or probability-assignment to m, and (c) the logical or probabilistic relations between m and d.

    2. Sharing Reasons

    Godden’s second line of argument targets my defense of a mental-state ontology of reasons. I have argued that a desideratum for a theory of reasons for φ-ing in general is that it yield the result that the very same things can both motivate and normatively support S’s φ-ing. That desideratum pushes us toward a monistic ontology of reasons. And I have also argued that the role that reasons play in reasoning should push us toward accepting a mental-state ontology of reasons, for it is truth-value- or probability-assignments to propositions which we use in reasoning, and these are kinds of doxastic attitudes.

    Godden argues that there are other desiderata for a theory of reasons. In particular, a crucial desideratum is that the theory enables us to explain how it is that reasons can be shared, or how they can be held in common. For example, suppose that Wilma tells Fred that they are out of milk, and so Fred should get some at the store. It is natural to think that Wilma and Fred now possess the very same reason, that they are out of milk. That reason, which they possess in common, is a reason for both Wilma and Fred to believe that they need to buy more milk.

    But if reasons really consist of mental states, then sharing reasons must be impossible. All of our reasons would be radically private—not in the sense that we cannot know about each other’s reasons, but in the sense that no one can ever possess the reasons that another person possesses. Wilma cannot share her reasons with Fred, because she cannot literally give her visual experience, and her consequent justified belief that they are out of milk, to him. Wilma has a reason for belief and action, which consists of her mental states; then when she tells Fred that they are out of milk, a whole new reason pops into existence in Fred’s mind, in the form of his new mental states. Isn’t this view of reasons just too awkward to accept?

    Godden suggests that a solution to this worry would be to hold that reasons are propositional contents: they are what we believe, not our beliefs or other mental states themselves. If reasons are propositional contents, then they can be held in common, for we can each have different token beliefs with the same contents. Further, Godden argues that propositional contents can be motivating: they can explain the mental transitions we make in reasoning. So, taking reasons to be propositions, we can still satisfy the desideratum that the same thing can both motivate and normatively support beliefs.

    I am not convinced that propositions themselves do motivate beliefs, though. For one thing, given that propositions are abstract objects, they aren’t the kinds of things that can cause concrete events, such as the event of motivating a person to form a belief. But mental states are concrete, and can give rise to further mental states. So mental states are well suited to being motivating reasons.

    But it’s not just the abstract nature of propositions that makes them unsuitable to motivate belief. Even if propositions can enter into causal relations, still they are not the kind of thing that a person normally bases her beliefs on. For example, suppose that I look in the cupboard, see very little coffee remaining, and form the belief that I need to buy more coffee. I now have two beliefs: B1, that I am nearly out of coffee, and B2, that I need to buy more coffee. And note that I do not base B2 on the proposition that I am almost out of coffee. Rather, I base B2 on the presumed truth, the acceptance, of the proposition that I am nearly out of coffee. (Not on the actual truth of that proposition itself, for I might be mistaken!) I hold B2 on the basis of B1, not on the basis of B1’s propositional content or the truth of its propositional content alone.

    So I think we should take mental states to be both motivating and normative reasons, and I am therefore saddled with the view that different people can never literally hold the same reasons in common. But I don’t think that this view is awkward or implausible. We can still tell each other about the reasons we possess, thereby generating new mental states in each other. These newly generated mental states are private to each individual, but we can still know about each other’s reasons, and so we are able to reason and to act together, even though reasons are private.

    So, when Wilma tells Fred that they are out of milk, a new reason really does pop into existence in Fred’s mind. But there’s nothing strange about that. After all, something pops into existence in Fred’s mind when Wilma tells him that they are out of milk, viz. his new belief that they are out of milk. In my view, that new justified belief itself, rather than its propositional content, is the reason Fred has come to possess.


Response

Swamping and Our Epistemic Goals

The overarching project of Epistemic Rationality and Epistemic Normativity is to answer two questions concerning epistemic normativity: (1) why are epistemic reasons evidential, and (2) what exactly makes epistemic reasons and rationality normative? In answering the first question, Bondy offers a deflationary account of the ways in which particular kinds of reasons are demarcated from one another. They assert that “reasons are divided in arbitrary, and arbitrarily many, ways” (39). Still, the category of evidential reasons allows us to accurately assess clear-cut cases of rational and irrational beliefs, because some evidence can count either for or against our beliefs. Since particular reasons can count as evidence or counter-evidence, the category of evidential reasons exists. Bondy argues that we are indeed guided by evidential reasons. What distinguishes Bondy’s account is the strong conclusion that there are both active and passive aspects of belief and, moreover, that the active aspect of our beliefs is subject to our control. Not only does Bondy provide a case for strong doxastic voluntarism, but they also claim that if the strong version of their doxastic voluntarism is correct, then the transparency principle misleads us.1

What can help guide us is the instrumental conception of the nature of epistemic rationality, and it is through this examination that Bondy aims to provide an answer to both questions 1 and 2: since “instrumentalism about epistemic rationality makes epistemic rationality essentially related to the achievement of a goal that is [in] some sense valuable, it comes with an account of epistemic normativity as a built-in feature” (99). Ultimately, Bondy jettisons instrumentalism with regard to the nature of epistemic rationality, but retains instrumentalism pertaining to epistemic normativity. In answering question 2, Bondy’s stance is that epistemic reasons and rationality are only normative when we have normative reasons to obtain the truth. Bondy argues that “even though epistemic rationality is not instrumental in character, it is nevertheless typically instrumentally valuable,” and that epistemic reasons and rationality are instrumentally normative (135).2

While I find Bondy’s arguments interesting and plausible, much of my commentary regarding the previously outlined aims of the book hinges on how Bondy intends to handle the swamping problem. So, the focus of this response is on Bondy’s assertion that their view is not vulnerable to the swamping problem, which they outline within the final chapter of the book. My primary issue is the way in which the swamping problem is articulated, especially the example used to illustrate what the swamping problem entails.

Roughly put, the swamping problem is the notion that knowledge, or the value of epistemic rationality or justification, has no additional value over true belief. The swamping problem asks “what is the purpose of J, or epistemic rationality, if all we are instrumentally after is true belief?” If, as Bondy posits, epistemic rationality possesses instrumental value, then it does indeed become difficult to locate precisely why a justified false belief is better than a merely false belief. Bondy takes two lines as to why their stance does not fall prey to the swamping problem: (1) the swamping problem really isn’t even a problem, and (2) “we should simply give up the claim that there is always instrumental value in taking the appropriate means to achieve a goal” (150). In other words, we should reject the swamping problem as a problem, and acknowledge that the normativity of instrumental reasons is not rooted in the instrumental value of achieving a particular goal.

Focusing on 1, I aim to show that Bondy too quickly brushes the swamping problem aside. The only example that Bondy utilizes in engaging with the swamping problem is the case of the lottery ticket that wins a boat. Within the case, S purchases a lottery ticket where only one ticket will win. Prior to the drawing on Friday, the ticket has positive instrumental value, because S could be the sole winner. Come Friday, the ticket purchased by S is indeed the winning ticket, so after the winning boat is collected, the ticket no longer holds instrumental value. That is to say, the winning ticket in conjunction with the won boat adds no more value than the won boat itself. Bondy rejects the swamping problem on the basis that, articulated this way, it rests on a bad analogy: our beliefs do not operate akin to winning a boat in a lottery. Bondy states, “Beliefs are not things we can simply stockpile locked away in a storage shed, and hold forever at the ready to be take[n] out as needed in the same condition as when they were put into storage. Maintaining our beliefs takes work” (148). I agree that beliefs do take work and that the boat-lottery case does not adequately illustrate the maintenance of our beliefs or the ways in which we revisit our epistemic rationality to repeatedly achieve a goal. But it does not seem as though Bondy can reject the swamping problem based on the analogy of the boat lottery, especially as they, along with Carter and Jarvis (2012), whom Bondy cites for the articulation of the case, acknowledge that it is a bad analogy.

I want to press Bondy to defend their view with an example that does require repeatedly revisiting a particular mechanism, rather than an example in which the mechanism accomplishes its task once and is then done. Often, having true beliefs is an accomplishment that needs to be reexamined and fine-tuned as new evidence and conditions come into play. The often-cited coffee maker example (Zagzebski 2003, Pritchard 2011) would seem to me a much more apt case for exemplifying the swamping problem. And given Bondy’s numerous references to coffee and their subtle self-professed love for the drink, it is one I believe they can appreciate.

We philosophers tend to value a great cup of coffee (some of us multiple times a day), and it seemingly follows that we would value a reliable coffee machine from which we can get our cup of coffee. However, it’s not the machine that we value; it is the great cup of coffee that was produced by the coffee machine. In the end, the reliable coffee machine doesn’t necessarily add any value to the end goal of obtaining a good cup of coffee. One can see that such is the case when we consider the state of poor graduate students, who do not necessarily buy the best coffee machine—perhaps it was found at Goodwill or left over in their apartment from a previous tenant. Poor graduate students are willing to spend the time repeatedly attempting to make a great cup of coffee, so long as they will eventually get a satisfactory cup. Why? Because the value of a reliable coffee machine is swamped by the value of the cup of coffee in virtue of its being a great cup of coffee. We can apply such a case to our epistemic rationality. Once we have a true belief (TB), it doesn’t really matter to us, epistemically, whether it was reliably formed or not. We only value reliable TB-producing mechanisms as a means to TB.

If I could wave a magic wand and suddenly afford a top-of-the-line coffee maker, I would. I would save the time spent repeatedly remaking my cup of coffee until I have a great one. Similarly, if I could wave a magic wand and suddenly possess a reliable epistemic mechanism that produces only TBs, I would. But this seems to be for reasons related to non-epistemic values such as time management—not for the TB itself, especially since, if enough attempts are made, I’ll eventually have an apt one. But Bondy states that “believing what the evidence supports is the best way for us to get true beliefs. So, believing what the evidence supports is the best way to try to do something that is often important to do, and so we are generally normatively required to believe what the evidence supports” (139). While it may be the best way in terms of reliability, it really shouldn’t matter to us (epistemically) how the true beliefs were obtained, as long as they were obtained.

Bondy claims to have shown that “it is not always the case that instrumental value is swamped by the presence of the final value that it is instrumental for bringing about,” but since they cite only the one boat-lottery example, I hesitate to agree that they have shown this (149). And given their stance on instrumental value, this seems to pose a potentially major problem.


  1. The transparency principle is articulated as “when we want to determine whether we ought to believe that p, we always find that our inquiry immediately gives way to the question of whether p is true” (Bondy, 42).

  2. Bondy remains neutral, though, concerning what the source of instrumental normativity is.


    Patrick Bondy

    Reply

    Instrumental Value and Normativity

    I have argued in defense of the instrumental normativity of epistemic reasons and rationality: when we have normative reason to get a true belief with respect to some proposition, we have instrumental normative reason to believe what the evidence supports, because that is the appropriate means to take for the purpose of achieving true beliefs.

    The swamping problem poses an important challenge to accounts that ground epistemic normativity in the instrumental value of having epistemically rational beliefs. The worry is that having justification doesn’t add anything to the truth-related value of either true beliefs or false beliefs. If a belief is true, then it already has all of the truth-related value that it can have; if a belief is false, then it must fail to have any truth-related value. Just as a good cup of coffee isn’t made any better by having been made by a good coffee maker, nor is it made any worse by having been made by a bad coffee maker, so too beliefs aren’t made any truer or any less true by the evidential quality of the basis on which they are held. The value of truth just swamps the instrumental value of justification.

    So there are two main aspects to the problem: (1) to explain how justified true beliefs are epistemically better than unjustified true beliefs; and (2) to explain how justified false beliefs are epistemically better than unjustified false beliefs.

    Henning’s commentary challenges my response to the swamping problem. My response has two parts. First, I argued that having justification for one’s beliefs at a time t often does have some instrumental value, for having justification helps us keep our mental house in order: it helps us retain true beliefs over time, and adjust them so that our true beliefs do not become false as the facts change. It also helps us replace our false beliefs with true beliefs, when we acquire new evidence bearing on them. Sometimes justification is not able to do any of these things, in which case justification does not add any instrumental value to belief; but often it does.

    Second, even when justification does not end up helping one to retain true beliefs, or appropriately alter one’s beliefs over time, one still has instrumental normative reason to take the appropriate means to achieve true belief and avoid false belief. I argued that instrumental normativity doesn’t depend on the presence of instrumental value, which means that the swamping problem doesn’t pose a problem for my account of instrumental normativity. The existence of instrumental normative reason to take the appropriate means to achieve a goal is grounded in the value of the goal itself, even if the means do not have any instrumental value because they cannot in the end contribute to the achievement of the goal.

    One way to illustrate the swamping problem, and my response to it, is with the case of the boat-lottery, which I discussed in chapter 7. I think the case illustrates the apparent problem clearly, but Henning is right to point out that a boat-lottery is done once to get one boat, whereas belief-formation is done repeatedly using the same methods to get many new beliefs. So it’s worth thinking about how my response to the swamping problem handles problems like the instrumental value of good coffee makers (Zagzebski 2003), which are repeatedly used to achieve a desired goal. The swamping problem, as it applies to coffee makers, is that a good cup of coffee—or a bad cup of coffee, for that matter—isn’t made any better by the fact that it has been made by a good, reliable coffee maker. Similarly, a good cup of coffee isn’t made any worse by the fact that it’s been made by a bad, unreliable coffee maker, and again the same goes for bad cups of coffee made by bad coffee makers.

    The use of coffee makers does seem to be a better analogy to belief-formation than participation in boat lotteries. We use coffee makers every day, and rely on them to produce something that we care about. So, of course, there is instrumental value in having a good coffee maker, because the machine will produce the valuable result (good cups of coffee) more often than a bad one would. Similarly, we use belief-forming processes very frequently to get true beliefs, which are normally good to have, and there’s instrumental value in having reliable or responsible belief-forming methods at one’s disposal. But in the case of the coffee maker, the instrumental value in having a good coffee maker doesn’t transfer to, and contribute to the value of, the resulting cup of coffee once it’s been produced. Similarly, the argument goes, the instrumental value of reliable or responsible belief-forming processes and policies does not transfer to the beliefs themselves that are produced by these processes and policies. So even though there’s instrumental value in having these processes or policies, the beliefs they produce don’t thereby have any added value. So it just can’t be instrumental value that grounds the instrumental normativity of having justified beliefs.

    In response to this argument, there are two points I want to make. The first is that coffee makers are more like belief-forming processes than boat-lotteries are, because they are mundane and repeatedly used to achieve a desired result, but the uses of coffee makers and belief-forming processes are still disanalogous in that the property of having been produced by a reliable coffee maker does not contribute in any way to the continued quality of the cup of coffee once it’s been produced. On the other hand, the property of being rationally held on the basis of good evidence does contribute to the maintenance of true beliefs, and it contributes to a subject’s ability to replace false beliefs with true ones over time, as better evidence comes in. That is the first prong in my two-part response to the swamping problem: justified beliefs are better than unjustified ones, for the purpose of achieving and maintaining true beliefs and avoiding false ones over time. So being justified does contribute instrumental value in many cases, whereas having been produced by a reliable coffee maker does not contribute instrumental value to the cup of coffee once produced.

    The second point to note is that, in some cases, being held on the basis of good evidence is not able to contribute to a subject’s achieving true beliefs and avoiding false ones. So there is no instrumental value in having rational beliefs in such cases. Similarly, we can think of a hopelessly broken coffee maker, which is doomed forever to make bad cups of coffee. Such a coffee maker has no instrumental value. Nevertheless, there is still instrumental normative reason to take the appropriate means to achieve the valuable goal of having a good cup of coffee; and the appropriate means, until it becomes apparent that the coffee maker is broken, involve using the coffee maker. So too, in cases where justification is utterly unable to get us true beliefs, there is no instrumental value in having justified belief. Nevertheless, there is instrumental normative reason to take the appropriate means to achieve true beliefs and avoid false ones. That is the second prong in my response to the swamping problem: instrumental normative reasons do not depend on the existence of instrumental value, only on the final value in question, which we ought to do what we can to achieve.

     

    Works Cited

    Zagzebski, Linda. “The Search for the Source of Epistemic Good.” Metaphilosophy 34.1–2 (2003) 12–28.


Response

Being Able to Recognise a Reason as Normative

I wish to explore the notion of a normative reason that issues from the relationship among at least three basic constituents of Bondy’s view. The first one appears at the beginning of the book. Bondy assumes that accessibilist internalist evidentialism is an important framework for understanding epistemic normativity and rationality (2). This means that to evaluate the epistemic status of a belief that p, one must consider the evidence that S has for believing that p, where pieces of evidence are epistemic reasons and reasons are generally taken to be mental states (23). Also, this evidence must be internal, that is, it has to be part of S’s cognitive perspective. Finally, Bondy (4) explains that a mental state is part of S’s cognitive perspective whenever it is available to be accessed by reflection.

The second element also appears at the beginning of the book, but becomes fully explicit only in the first paragraphs of chapter 3. There Bondy presents Guidance as an alleged necessary condition for a reason to be a normative reason (42):

Guidance: for all subjects S, potential reasons R, and beliefs or actions φ: for R to count as a normative reason for S to φ, it must be possible for S to take R into account as relevant to the determination of whether S ought to φ.
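Rendered schematically, following the quantifier structure of the prose statement (the predicate labels are my own abbreviations, not Bondy’s notation):

\[
\forall S\,\forall R\,\forall \varphi\ \big[\,\mathrm{NormativeReason}(R,S,\varphi)\ \rightarrow\ \Diamond\,\mathrm{TakesIntoAccount}(S,R,\varphi)\,\big]
\]

where the possibility operator marks that it must be possible for S to take R into account as relevant to whether S ought to φ.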

His interpretation is that “S must be able . . . to take R into account” if R is to be a normative reason for S to believe that p. In addition, Bondy thinks that no motivational set needs to lurk in the background of S for her to φ (45).

Finally, Bondy maintains that a reason is normative only when there is value in S’s φ-ing. This is a necessary condition (143). The sufficient condition emerges when S is able to account for this value.

If I understand correctly, these three elements amount to the following picture of epistemic rationality and normative reasons, according to Bondy: whenever S deliberates whether to accept p, she must be able to access the evidence which supports p. If she accepts p based on this evidence, then her belief, or acceptance, is considered epistemically rational. Moreover, if the reason is normative, S is able to account not only for it but also for the value of getting to the truth of whether p, i.e., for the value of having evidence for the proposition accepted. Put differently, the reason becomes normative once S sees that it is relevant for deciding whether p is true.

Now, my concern is with how these conditions concur to generate a mental state which is a normative reason. In particular, my attention is drawn to the requirement that one be “able to take R into account.” I argue that the assumption of this ability is an unexplored issue in Bondy’s theory.

For a better understanding, I work with an example adapted from Turri (2010, 322). Turri presents a case in which Ron supposedly knows that invading Iran would be stupid, and that if invading Iran would be stupid, the United States ought not to invade. Intuitively, it is epistemically rational for Ron to accept the proposition “the United States ought not to invade Iran,” which means that there is evidence available for him to accept it, or that there is a normative reason to do so. The point, though, is that Ron is psychologically incapable of coming to accept it, due to his long-standing exposure to right-wing radio and Fox News. He has within his cognitive perspective that “invading Iran would be stupid” and that “if invading Iran would be stupid, the United States ought not to perpetrate the invasion.” But he can’t access these propositions as evidence to accept that “the United States ought not to invade Iran.”

Ron’s case describes someone who is able to account for propositions within his cognitive perspective that are relevant to the truth of a further one, that is, propositions which are graspable through reflection. I suppose, based on the characteristics of the example, that Ron values getting to the truth of whether the United States ought to invade Iran. It appears, then, that most of the prerequisites for Ron to have normative reasons to deliberate are satisfied. The missing detail is Ron’s accounting for the propositions as relevant to the acceptance of a further one. He apprehends these contents as they appear, but he is unable to perceive them as evidential reasons. Intuitively, though, I think it is difficult to deny these propositions the status of normative reasons. This is because we tend not to see Ron as a regular case of an ideal reasoner. I believe the same can be said of Bondy’s conception of normative reasons: it is conceived on the presupposition of an ideal reasoner. But such a conception is not so simple that it can merely be presupposed. By discussing an ideal reasoner, we enter a terrain where the abilities, situation, and circumstances of deliberation must be considered. If I am not misjudging the scope of Bondy’s theory, these aspects still need to be addressed for a more satisfying framework.1

Works Cited

Turri, John. “On the Relationship Between Propositional and Doxastic Justification.” Philosophy and Phenomenological Research 80.2 (2010) 312–26.


  1. This point is very close to the one Turri offers in his paper when discussing propositional and doxastic justification.


    Patrick Bondy

    Reply

    Voluntary and Involuntary Aspects of Cognition

    In his commentary, Rocha challenges my combination of doxastic voluntarism, accessibilist internalism about epistemic rationality, and a deliberative guidance constraint on normative reasons. These three elements combined, Rocha writes, yield the result that “whenever S deliberates whether to accept p, she must be able to access the evidence which supports p. If she accepts p based on this evidence, then her belief, or acceptance, is considered epistemically rational.”

    Rocha’s main target is to show that our doxastic attitudes—more precisely, our acceptances—are not as subject to direct voluntary control as I have suggested. Briefly, I hold that S’s acceptance or non-acceptance of a given proposition p is in general subject to S’s will. S’s beliefs are not in general subject to S’s will, because belief crucially involves a feeling or a disposition to feel that p is true, and feelings are not in general subject to the will. But acceptance is not limited in that way. To accept that p is to take p up or to be disposed to take p up as a premise in theoretical and practical reasoning.1 Normally, S needn’t make any voluntary decision about whether to accept that p; acceptance normally just goes hand in hand with belief. But acceptance and belief can come apart. When they do come apart, the important thing is to be justified in what we accept, for our acceptances are what we take up in our deliberations. So acceptances are really what we should be concerned with in epistemology—but it is mostly harmless to continue using the terminology of beliefs to refer to the mental attitudes that are epistemically and normatively interesting.

    Rocha argues that “a few involuntary aspects of our cognition may influence how we evaluate the epistemic rationality of beliefs.” He offers two ways in which our acceptances, or the rationality of our acceptances, can be affected by factors outside of our control. The first is that our acceptances themselves are not as directly subject to our voluntary control as I suggest. The second is that the evidence that bears on our epistemic statuses need not be available to the subject in the way that Guidance requires.

    Rocha’s first line of argument appeals to propositions that are manifestly absurd. If acceptances are directly under our voluntary control, then we should be able to accept even absurd propositions. But, he argues, it’s not clear that we can do so. For example, consider the proposition that the Earth is hotter than the Sun. As I suggested in chapter 4, it is not so easy to accept that proposition, because one’s resulting picture of the world might be too incoherent, and one would then be at risk of great harm. So it would take a very significant incentive to bring a person to accept that proposition. Rocha suggests that I have committed myself (94) to saying that I would not be able to bring myself to accept that the Earth is hotter than the Sun. But in fact I think that I could bring myself to accept even such a manifestly absurd proposition, if I had a good enough incentive to do so. Cash rewards are probably not enough by themselves, but perhaps if there were more significant stakes, they would be enough. “Suppose, for example, that my nemesis has a string of nuclear bombs planted across the globe, which he will set off unless I accept that the earth is hotter than the sun” (94). I suggest that this incentive would be enough to bring me to go ahead and accept that proposition. So I think that acceptances are after all under our direct voluntary control. But just as our (basic) actions are under our direct voluntary control, even though we would not normally perform very dangerous or manifestly absurd actions without a significant incentive for doing so, so too we require significant incentives to bring us to accept manifestly absurd propositions.

    Rocha’s second line of argument appeals to evidence that a subject S cannot properly take into account in deliberation. As I understand the objection, the point is that such evidence can bear on the epistemic rationality or irrationality of S’s beliefs, even though S cannot form the belief that the evidence supports, and hence even though it violates Guidance.

    The example of Ron illustrates the objection. Ron knows the following:

    (i) It would be stupid for the United States to invade Iran.

    (ii) If it would be stupid for the United States to invade Iran, then they should not do so.

    (i) and (ii) appear to support

    (iii) The United States should not invade Iran.

    But Ron is too steeped in right-wing talk radio and television, and is incapable of believing (iii). So, Rocha writes, Ron possesses evidence which bears on the rationality of his beliefs, but which he cannot access and properly take into account in deliberating about whether to believe (iii).

    The case of Ron is indeed a puzzle. There seems to be a serious cognitive malfunction at hand. If Ron is able to occurrently think of the propositions (i) and (ii), and he wonders whether he should believe (iii), and he retains his knowledge of (i) and (ii), and yet he is unable to form a belief in (iii), then Ron appears to be like Drugged Dean (15), who is so far under the influence of some drug that he is utterly incapable of controlling how he responds to evidence. In that case, I argued, Dean ceases being a genuine epistemic agent subject to genuine normative epistemic permissions and prohibitions. Similarly, if Ron is not just biased against belief in (iii), but is genuinely incapable of believing (iii) even when he occurrently considers (i) and (ii), then Guidance entails that Ron does not after all have a normative epistemic reason for believing (iii). That seems to me to be the right result. Still, Ron is able to consider (i) and (ii) in his deliberation about (iii) (even though he can’t form the belief they support), and so they are evidence he possesses. Consequently, according to the internalist accessibilism about epistemic rationality I’ve adopted in the book, (i) and (ii) render Ron’s disbelief in (iii) epistemically irrational. His epistemic irrationality here has no normative force, but it is epistemic irrationality all the same.

     

    Works Cited

    Bondy, Patrick. Epistemic Rationality and Epistemic Normativity. New York: Routledge, 2018.

    Cohen, L. Jonathan. An Essay on Belief and Acceptance. Oxford: Oxford University Press, 1992.


    1. This version of the belief-acceptance distinction comes from Cohen (1992).


Response

A Note on Instrumentalism of the Normative Force of Epistemic Rationality

Bondy argues against the instrumental notion of the nature of epistemic rationality, and he defends the following instrumental notion of the normative force of epistemic rationality (142):

(N) Necessarily, if a person S has an epistemic reason to believe a proposition p, then [S has a normative reason to believe that p if and only if S has a normative reason to get to the truth with respect to p.]

So, Bondy thinks that the normative force of epistemic reason is instrumental: one’s epistemic reason is a real reason when and only when one has reason to care about the relevant truth.
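For reference, N can be put schematically as follows, where E(S, p) abbreviates “S has an epistemic reason to believe p,” Nb(S, p) abbreviates “S has a normative reason to believe p,” and Nt(S, p) abbreviates “S has a normative reason to get to the truth with respect to p” (the abbreviations are mine, not Bondy’s):

\[
(\mathrm{N})\qquad \Box\,\forall S\,\forall p\ \Big[\,E(S,p)\ \rightarrow\ \big(\,N_b(S,p)\ \leftrightarrow\ N_t(S,p)\,\big)\Big]
\]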

I will raise some criticism of N in this note. My criticism focuses on those cases where one’s evidence is misleading. I argue that, in those cases, even given that one cares about the truth, it’s still unclear how one has a normative reason to follow the misleading evidence.

Let’s imagine the following case. Adam has a reason to form a true belief about whether it’s going to rain tomorrow—he is considering whether to host a barbecue tomorrow. He checks the weather forecast, which says that it’s going to rain. But this evidence is misleading: in fact it’s not going to rain. In this case, Adam has an epistemic reason to believe that it’s going to rain. But does he have a normative reason to believe so? N predicts that he does. I argue that this prediction is mistaken.

First, Adam clearly doesn’t have a final reason to believe that it’s going to rain. If he had any reason, the reason could exist only because he cares about the truth of the issue. So, the reason predicted by N can only be an instrumental one. But does Adam have an instrumental reason (note that both final reason and instrumental reason are real normative reasons)?

Normally, we think that one has an instrumental reason to do F when doing F is an appropriate means for one to do G, where G is something one has normative reason to do. In this sense of “instrumental reason,” Adam doesn’t have an instrumental reason to believe it’s going to rain, because such a belief is not in fact an appropriate means of getting the truth.

Naturally, Bondy would dispute this notion of instrumental reason. On p. 125, Bondy discusses the following principle about instrumental rationality:

Restriction on Instrumental Rationality (RIR)

It’s instrumentally rational for a subject, S, to adopt means M for achieving goal G only if the evidence available to S makes it epistemically rational for S to think that M is an appropriate means for achieving G.

As Bondy has explained, RIR joined with an instrumental understanding of the nature of epistemic rationality would lead to a regress. But presumably, RIR itself is still acceptable to Bondy. And although RIR is about instrumental rationality, if we assume that rationality is responding to reasons, then we can reformulate RIR into a principle about instrumental reasons for beliefs:

Restriction on Instrumental Reason (RIR*):

When S has reason to form a true belief about p, S has an instrumental reason to adopt the attitude of believing p as a means for achieving this goal just in case: S has epistemic reason to think that believing p is an appropriate means for achieving the goal.

RIR* implies that instrumental reason is not about appropriate means as a matter of fact, but about appropriateness according to one’s evidence. RIR* would predict that Adam’s epistemic reason to believe it’s going to rain is indeed an instrumental reason. This is because, presumably, if one’s evidence supports p (“it’s going to rain”), it would also support

(M1) Believing p is an appropriate means to get a true belief about p

since p and M1 are equivalent. Therefore, given that Adam’s evidence supports “it’s going to rain,” it would support M1, and thus Adam would have an epistemic reason to believe M1. So, if Bondy could accept RIR*, then the prediction of N would be correct: Adam has a real (instrumental) reason to believe that it’s going to rain, even if the belief is false.

But the problem is that Bondy couldn’t accept RIR*, given his acceptance of N. N implies that an epistemic reason is not a normative reason unless one has reason to care about the relevant truth. If one doesn’t have reason to get the relevant truth, then the epistemic reason is not normative. Now, let’s imagine that Adam has no normative reason to form a true belief about M1. Then although Adam has an epistemic reason to believe M1, N predicts that Adam doesn’t have a normative reason to believe M1. But if Adam doesn’t have normative reason to believe M1, then Adam’s epistemic reason to believe p couldn’t be a (normative) instrumental reason to believe p as RIR* predicts. Let me explain.

We have supposed that Adam doesn’t have normative reason to form a true belief about M1. Now, being consistent with this supposition, we can continue to develop Adam’s case by imagining that Adam actually has a normative reason to believe not-M1. So, the following claims hold:

a. Adam has normative reason to form a true belief about p,

b. Adam has normative reason to believe (not-M1): “believing p is not an appropriate means to get a true belief about p.”

So, it seems that Adam has normative reason not to believe p. Therefore, Adam’s epistemic reason to believe p is not a normative reason to believe p. This implies that Adam’s epistemic reason to believe p is not an instrumental reason, because a real instrumental reason must be a normative reason. But RIR* predicts that Adam does have an instrumental reason to believe p. Hence, Bondy couldn’t accept RIR*, given his acceptance of N. But if Bondy doesn’t accept RIR* as the correct principle about instrumental reason, it’s unclear how else to explain why Adam has an instrumental reason to believe it’s going to rain.

Now, perhaps Bondy would block the above argument this way: the supposition above (that Adam has no normative reason to form a true belief about M1) is illegitimate in Adam’s case. We shouldn’t suppose that Adam has no normative reason to form a true belief about M1, because we have supposed that Adam has normative reason to form a true belief about p. That is, if Adam has a normative reason to form a true belief about whether p, then Adam must have a normative reason to form a true belief about M1.

But there are two problems with this move. First, it’s implausible. It seems possible for one to care about getting the truth about a proposition while not caring about getting the truth about another proposition whose content concerns appropriate means to getting at the original truth. Second, it would lead to a vicious regress. For the same reasoning behind this move would lead to the result that, for Adam to have a reason to form a true belief about M1, he must have a reason to form a true belief about M2:

M2: Believing M1 is an appropriate means to get a true belief about M1.
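
Spelled out, the regress continues in the same pattern (the indexing is mine; each Mn+1 is defined from Mn just as M2 is defined from M1):

\[
\begin{array}{ll}
M_1: & \text{believing } p \text{ is an appropriate means to get a true belief about } p\\
M_2: & \text{believing } M_1 \text{ is an appropriate means to get a true belief about } M_1\\
\vdots & \\
M_{n+1}: & \text{believing } M_n \text{ is an appropriate means to get a true belief about } M_n
\end{array}
\]

On the move under consideration, a normative reason to get the truth about p requires one for M1, a normative reason to get the truth about M1 requires one for M2, and so on without end.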

The regress is exactly the same kind of regress generated by combining RIR with the instrumental conception of the nature of epistemic rationality. So, if the regress is vicious there, it is also vicious here.

Here is my conclusion. N predicts that, when one cares about the truth, one’s epistemic reason is a normative reason even if one’s evidence is misleading. But it’s hard to explain why that is so, given that following misleading evidence is not an appropriate means of getting at the truth. We would have an explanation if RIR* were correct, but Bondy cannot accept RIR* on pain of a vicious regress. (I suspect that a plausible explanation must still be the alternative explanation of the normative force of epistemic reasons discussed by Bondy, the one that relies on Guidance and Transparency.)

 

Works Cited

Bondy, Patrick. Epistemic Rationality and Epistemic Normativity. New York: Routledge, 2018.

    Reply by Patrick Bondy

    Misleading Evidence, Instrumental Reasons, and Higher-Order Beliefs

    I have argued that epistemic reasons and rationality are non-instrumental in nature—our beliefs can be epistemically rational or irrational, even if they do not promote the achievement of any of our goals—but that they are instrumentally normative: whether epistemic reasons are good reasons for belief depends on the goals a person has, or perhaps ought to have. So, Ye suggests that I am committed to principle N:

    (N) Necessarily, if a person S has an epistemic reason to believe a proposition p, then [S has a normative reason to believe that p if and only if S has a normative reason to get to the truth with respect to p.]

    In fact I am not committed to N, but I am committed to A*norm,[1] which N implies:

    (A*norm) Necessarily, if S has normative reason to get to the truth with respect to a proposition p, then [if S has epistemic reason to believe that p, S has normative reason to believe that p]. (142)

    Whether p is in fact true is irrelevant to whether S has a normative epistemic reason to believe that p, according to N and A*norm. When S’s evidence indicates that p is true, and S has normative reason to get to the truth with respect to p, S has normative reason to believe that p—even if p is in fact false.

    I think that this is the right result. In general, I think that S can have normative instrumental reason to do things that turn out not to achieve the goals in the service of which they are instrumental reasons. That’s because we can have very good but misleading evidence indicating that some means are appropriate ones to take for achieving important goals.

    Ye argues that this is the wrong result. If p is false, Ye argues, then the misleading evidence supporting p cannot provide a normative reason for believing p. Ye develops the case of Adam as a counterexample to A*norm and N. The key facts in the case are as follows:

    (i)         Adam has a normative reason to get a true belief with respect to p, the proposition that it will rain tomorrow. (Adam is planning a barbeque.)

    (ii)       Adam has good evidence which indicates that p is true, or very likely to be true. (The weather forecast indicated a high probability of rain.)

    (iii)      p is false. (The forecast is misleading.)

    (iv)      Adam’s evidence also supports M1, the meta-belief that believing p is an appropriate means to get a true belief about p. (After all, evidence for p is also evidence for the higher-order proposition that believing p is an appropriate means to take for getting to the truth with respect to p.)

    (v)       Adam lacks normative reason to get a true belief about M1.[2] (Perhaps, although he cares about p, he doesn’t care about having a higher-order belief about p’s effectiveness as a means to getting to the truth.)

    (vi)      Adam has normative reason to believe not-M1. That is, he has normative reason to believe that believing p is not an appropriate means to get to the truth with respect to p. (This normative reason must be non-epistemic, since (iv) stipulates that Adam’s evidence supports M1, and so he has epistemic reason to believe M1. But, as I’ve argued, there can be normative non-epistemic reasons for belief, so stipulation (vi) is fine.)

    So, is the case of Adam a counterexample to A*norm? Adam has a normative reason to get to the truth with respect to p, so the antecedent of A*norm is satisfied. And its consequent is false, Ye argues, because Adam has an epistemic reason to believe p, but he does not have a normative reason to believe that p. It turns out that his epistemic reason is not a normative reason.

    Why does Adam not have a normative reason to believe that p? Well, Adam has a (non-epistemic but) normative reason to believe that believing p will not be a good means to take for getting to the truth with respect to p (stipulation (vi) above), and Adam has normative reason to get to the truth with respect to p (stipulation (i)). So, Ye notes, “it seems that Adam has normative reason not to believe p. Therefore, Adam’s epistemic reason to believe p is not a normative reason to believe p.”

    I agree that (i) and (vi) above make a case that Adam has a normative reason not to believe p. But I don’t think it follows that Adam’s epistemic reason for believing p is not a normative reason. In my view, the case of Adam is a case of conflicting normative reasons: Adam has a normative epistemic reason to believe p, and he has a normative non-epistemic reason to not believe p. In general, normative reasons can come into conflict. (As it is sometimes said, normative reasons are generally pro tanto.) For example, I might have normative reason to do a workout tonight, and normative reason to finish this reply tonight. Perhaps I cannot do both, but both would be good to do. Such is life: sometimes we cannot do all of the things that would be good to do. In such cases, we have all-things-considered reason to act on the weightier normative reason, but it is not as though the less weighty normative reason disappears; it is still there as something we are missing out on. And that is what’s going on in the case of Adam: his evidence, together with his normative reason to get to the truth with respect to p, provides a normative reason for Adam to believe that p. This normative epistemic reason conflicts with the normative non-epistemic reason Adam has to not believe that p.

    So I think that A*norm withstands the case of Adam, and consequently I still think that misleading evidence can provide normative reason for belief.


    1. N implies but is not implied by A*norm. A*norm only says that, necessarily, possessing both epistemic reason to believe that p and a normative reason to get to the truth with respect to p is sufficient for possessing a normative reason to believe that p. The consequent of N is a biconditional, while the consequent of A*norm is only a one-way conditional. So A*norm allows, while N rules out, cases where S possesses epistemic reason for believing p, and S possesses a normative reason to believe that p, but S lacks a normative reason to get to the truth with respect to p. I don’t accept thesis N because I think that such cases are possible. For example, it might be that S possesses good evidence indicating that p is true, and S doesn’t care one way or the other about whether p is true, and S possesses a normative, non-epistemic reason to believe that p (irrespective of p’s actual truth-value).
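
    To make the logical relationship in this note explicit, the two principles can be glossed schematically, for any subject S and proposition p (the shorthand is introduced here only for illustration: E(p) = S has epistemic reason to believe p; R_b(p) = S has normative reason to believe p; R_t(p) = S has normative reason to get to the truth with respect to p):

    \[
    \begin{array}{ll}
    \text{N:} & \Box\big[\, E(p) \rightarrow \big( R_b(p) \leftrightarrow R_t(p) \big) \,\big]\\[4pt]
    \text{A*norm:} & \Box\big[\, R_t(p) \rightarrow \big( E(p) \rightarrow R_b(p) \big) \,\big]
    \end{array}
    \]

    Given E(p) and R_t(p), the biconditional in N delivers R_b(p), so N entails A*norm; A*norm, by contrast, says nothing about cases in which R_t(p) fails, which is why it does not entail N.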

    2. Note that I don’t hold that Adam must have a normative epistemic reason to believe M1, in order for Adam to possess a normative instrumental reason to believe that p. I claim that Adam must have an epistemic reason which supports belief in M1 in order for the first-order epistemic reason to believe p to be instrumentally normative, but I don’t mean to claim that the higher-order epistemic reason must itself be normative.
