Symposium Introduction

It might seem odd to suggest that morality has a moral problem, but according to social psychologist C. Daniel Batson (2016), that seems to be the case. In other words, despite the important work in moral theory from Aristotle to contemporary care ethicists, why is it that people so often fail to be moral? There seems to be something of a translation (or transmission) problem: even though we have numerous accounts of what goodness requires, those accounts rarely succeed in bringing about good lives. Or, as Batson asks, why does morality so rarely get what it demands? In presenting things this way, Batson constructively, and non-reductively, positions moral philosophy as inevitably flawed unless it attends to the emerging data in moral psychology.

Although there are other philosophical takes on moral failure (see Tessman 2015), Batson’s approach is distinctive not only for its dependence on empirical data, but also for the way in which it positions moral life as a question of the relation between social values, emotions, motivations, and moral principles. Far from being a straightforward critique of moral philosophy as such, Batson’s text (along with his earlier work on empathy) helps us see the importance of philosophy in the first place as providing carefully articulated principles for moral social life. In this way, Batson offers what I would consider a psychological supplement that contributes both to the pragmatic question, “What is likely to make people more ethical?” and also to the phenomenological question, “What is the nature of morality such that it presents such difficulties for human social behavior?” In the present symposium, the contributors collectively address all of these components and implications of Batson’s work.

In the first essay, Christopher D. Merwin offers a social phenomenological consideration of Batson’s book by suggesting that from the outset Batson’s notion of “values” and “principles” requires a richer description of how such ideas get formed in the daily lives of social beings. Subsequently, Merwin contends that we should understand morality as always already plural: “moralities.” As such, the dynamics involved in moral life are, themselves, socially implicated.

The second contribution comes from Christy Flanagan-Feddon. In this essay, Flanagan-Feddon attempts to think at the intersection of the disciplines of philosophy and psychology in order to demonstrate that Batson’s account might benefit from a more expansive engagement with moral philosophers who have prioritized the relational dynamics of moral selfhood (e.g., Emmanuel Levinas, Ludwig Feuerbach, Hannah Arendt, and Judith Butler, among others). Flanagan-Feddon challenges what she takes to be Batson’s overemphasis on “intentional action-states” as missing the way in which morality might be a deeper aspect of social existence itself.

In the third essay, Lidewij Niezink extends the phenomenological approaches of Merwin and Flanagan-Feddon in a decidedly psychological, and applied, direction. Drawing on her own work in moral psychology, Niezink develops something of an account of phenomenological best practices, as it were, for doing objective research into moral phenomena. Arguing that more qualitative empirical research would be a helpful supplement to Batson’s quantitative data, Niezink offers what she takes to be a better account of the moral lifeworld in which we could begin to get a better grasp on morality, and what might be wrong with it.

Finally, Mark Fagiano shifts from the phenomenological and psychological to the pragmatic in order to push back on what he locates as Batson’s ultimate desire to preserve “principlism.” Drawing on John Dewey and William James, Fagiano contends that moral experience is at least as important as, and maybe more important than, moral principles. In this way, Fagiano moves in directions similar to Flanagan-Feddon’s regarding the need to rethink Batson’s own conception of moral philosophy as too restrictive.

Ultimately, in this exciting set of exchanges with his critics, Batson is able to clarify and extend his position in ways that are provocative and promising for the future of moral philosophy, moral psychology, and moral life.

 

Works Cited

Batson, C. Daniel. 2016. What’s Wrong with Morality? A Social-Psychological Perspective. New York, NY and Oxford: Oxford University Press.

Tessman, Lisa. 2015. Moral Failure: On the Impossible Demands of Morality. Oxford: Oxford University Press.

Response

Phenomenology, Our Shared Worlds, and Morality

One of the particularly challenging aspects of a philosophical analysis of morality is that morality is often interpreted and understood as a universal category, a univocal principle for normatively guiding our actions. Yet, paradoxically, the source of our moral values, that is, the place where we obtain this seemingly univocal principle, is at once individually, interpersonally, historically, and socially determined. This is something of a conundrum. If morality is univocal in its scope and determination, then it should be universally similar across all cultures and peoples throughout history. Similarly, we should be able to easily assess the moral successes or failures of individuals in their behavior. Yet, this is clearly not the case. In recent years psychologists have begun to take up the analysis of the moral behaviors of individuals and to ask what social and psychological factors make up our moral motivation, and how it is that we know the right thing to do but still choose to do the wrong thing.

In his recent book, What’s Wrong with Morality? A Social-Psychological Perspective, social psychologist C. Daniel Batson lucidly explores the ways in which our moral actions and behaviors often fall short of our moral principles and seeks a descriptive, rather than normative, explanation of how this happens. Batson seeks to describe the moral behaviors of real individuals through numerous case studies and to see how they succeed or fail to accord with moral principles. Batson’s conclusion, unsurprisingly, is that our moral failures, or our “moral maladies” as he calls them, are often due less to our own individual character or the circumstances and pressures which we face in our day-to-day moral activities, and more to “rationalization, self-deception, and moral hypocrisy” (228). Batson’s analysis is particularly important to philosophical discussions of morality because it considers actual descriptions and accounts of moral actors and the struggles they face, rather than armchair philosophizing. In this way, we may interpret Batson’s work in a phenomenological vein, in that it attempts to understand morality through the lens of actual lived experience.

In what follows I would like to briefly critique Batson’s account from a phenomenological standpoint. My motivation for doing so is twofold. First, I take it as given that Batson’s analysis is a compellingly argued standard for a social-psychological understanding of morality. His text stands as an exemplar of how we can observe the interplay between moral behavior and moral principles in individuals. By drawing from the philosophical subdiscipline of phenomenology, my aim is to accept Batson’s argument and ask how we might take it further by making use of the rich history and practice of phenomenology. As such, my critique of Batson’s text is not a criticism, but an attempt to enlarge the scope of Batson’s examination. My second motivation is to ask whether or not phenomenological questions about morality can withstand the pressures of social scientific evidence. The first of these motivations I will address here, while the second will remain to be seen and is for social scientists like Batson to determine.

1. What’s Wrong with Asking What’s Wrong with Morality?

Batson’s text, particularly when applied to the United States, could not come at a more poignant time. Batson says at the outset that his task is to “consider morality not only as a solution but also as a problem” (Batson 2016, 3). Batson wants to understand just how it is that morality affects our behavior and to examine the range of motives and emotions, many of which, perhaps most, are themselves not intrinsically moral. The interplay between motives and emotions and their morality (or lack thereof) is especially pertinent in contemporary American culture, and it is precisely this interplay which Batson seeks to examine in the text. In examining moral failures, that is, our oft-experienced inability to live up to our own moral standards, Batson sees, rightly so, that it is rarely the case that we are simply good people caught in a bad world (Batson 2016, 227). Instead, our motivations, emotions, and behavior are often vulnerable to “rationalization, self-deception, and moral hypocrisy,” with nonmoral motives, emotions, and values playing much more significant roles in our behavior than our relatively weak moral ones (Batson 2016, 228).

The central question of Batson’s inquiry, which seems right, is why morality doesn’t get what it demands. As a social scientist, Batson wants to test a series of hypotheses and discover what symptoms lie at the heart of his diagnosis of what is wrong with morality. Aside from Batson’s own fourfold theoretical model, if there is a consistent metaphor guiding Batson’s reflection, it is that of symptom, diagnosis, disease, and possible cure. Batson’s four-part doctor’s bag includes a methodology, apparently used across psychological and neurological research, of value → emotion → motivation → behavior in order to “consider the range of motives that might lead a person to act in accordance with principles of right and wrong conduct” (29). Batson proceeds skillfully, and with a wide range of excellent case study examples, to trace out the malady of morality through these four characteristics.

It is, however, here that Batson has already lost me as a philosopher and phenomenologist. Morality may indeed be principles of right and wrong, but I want to ask where these principles come from: what is their origin for the agent who has moral or nonmoral motivations? Moreover, is morality in fact a cross-culturally codified univocal principle of right and wrong behavior, emotion, and motivation? What I mean here is not whether morality is hard-coded into the human experience, but how it is that we come to know principles of right and wrong in the first place. The easy answer is our upbringing and environment. But these easy responses do not account for the complexity of human moral history, of the very complex ways that moral statements are shared and disseminated, reinforced, modified, ossified, and challenged. Batson, for his part, spends very little time on where our principles come from and instead focuses on the fact that we internalize principles as such (45–46). If there is anything that recent social justice movements, particularly in the United States, have taught us, it is that the principles of right and wrong themselves can, do, and maybe sometimes must, change based upon social, historical, and interpersonal factors. This is where I think that the conversation with social phenomenology may be fruitful for Batson’s project.

2. The Phenomenology of Shared Worlds

Perhaps the best-known, although by no means the only, social phenomenologist is Alfred Schutz. The work of the phenomenologist Max Scheler, which Schutz in large part preserved and expanded upon, also deserves special attention. Important for social phenomenologists is the role that time, intersubjectivity, and co-performance play not only in our social behaviors and actions, but also in the ways that our inner experiences are transformed through time and relation with others.

Part of the problem of defining morality as univocal is that it tends to cast moral principles in the light of unchanging laws about how we ought to act, without recognizing that those principles of right and wrong themselves have a history and undergo transformations. Max Scheler, for example, provides a sophisticated and complicated phenomenological analysis of shame, including its relation to moral emotions, that is nevertheless and necessarily deeply rooted in the first decade of the 1900s, in the German cosmopolitan culture of Berlin. Many of Scheler’s observations would be considered by us today as downright sexist or chauvinist, yet they are, as Scheler is at pains to point out, deeply temporally and bodily codified expressions of social moral behavior (Zahavi 2014, 114–18). Scheler, like any good phenomenologist, not only acknowledges his social and temporal circumstances but uses them to ask about the difference between the structure, affect, and dynamics of shame and their social and intrapersonal expressions.

Schutz, following Scheler and writing two decades later in The Phenomenology of the Social World, understands that many of our deepest values, motivations, and moral principles come from our consociates, that is, our shared community of space and community of time. But importantly for Scheler and Schutz, and social phenomenology more generally, these communities of space and time also include communities of ideas, of historical receptions and interpretations, and even the ways in which understandings of these principles can be consciously (or unconsciously) changed. As John Drummond has rightly pointed out in his seminal collection Phenomenological Approaches to Moral Philosophy, one of the primary benefits of a phenomenological approach to morality is that it “makes possible a critical reflection both on the actions themselves and on the moral judgments we make about them and their agents. We can reflect on the rightness or wrongness of actions and on the correctness or incorrectness of our appraisals of them and of their agents” (Drummond 2002, 3). But this assessment of rightness or wrongness, rather than offering a view of morality from nowhere, takes into account what the agent, and intersubjective agents, of that lifeworld understand by morality to begin with at a given time.

Because of phenomenology’s insistence on the first-person embodied standpoint, both singular and plural, it may be more fruitful to ask how it is that we fail to live up to moralities, and not simply a single univocal morality, and what the conditions are for a morality’s success or failure. A plurivocal approach, or a hermeneutic one, gives a rich methodology like Batson’s a broader conceptual framework. If we modify Batson’s fourfold model by adding a first term, (engagement with) morality → value → emotion → motivation → behavior, we may see a more multifaceted view of morality and its interplay with moral agents emerge. As moral actors, particularly in large multicultural and cosmopolitan societies, we are influenced not by a single morality, but by many competing moral claims. Where a phenomenological approach may be more helpful to Batson’s project is in acknowledging, as I believe Batson wants to, that there is within us as human beings a striving toward what we term morality (for an example of how this might be possible, see Kriegel 2008). The social, historical, and interpersonal expression of what moral success or failure looks like, or what a morality looks like, is highly dependent on the first-person experience (in its many singulars and plurals) of the moral agents involved. Batson’s descriptive project sits comfortably alongside phenomenology’s dictum that to be able to understand something, you need to understand what it looks like to others.

In contemporary America, social justice movements like #metoo have importantly come to the foreground of social consciousness and initiated discussions not only of what is considered right or wrong conduct, but also of how and why the moral standards of the past may no longer be sufficient. It is not so much that we have all universally failed morality, or that we falsely believe we are “good people in a bad world,” but that morality is itself not univocal and static. Rather, what different persons at different times in different places, in all of their subjectivity and intersubjectivity, understand morality to be must also be taken into account if we want to know why we fail to act as moral agents.

 

Works Cited

Batson, C. Daniel. 2016. What’s Wrong with Morality? A Social-Psychological Perspective. Oxford and New York: Oxford University Press.

Drummond, J. J. 2002. “The Phenomenological Tradition and Moral Philosophy.” Introduction to Phenomenological Approaches to Moral Philosophy, edited by J. J. Drummond and L. Embree. Dordrecht: Kluwer.

Kriegel, U. 2008. “Moral Phenomenology: Foundational Issues.” Phenomenology and the Cognitive Sciences 7.1: 1–19.

Zahavi, D. 2014. Self and Other: Exploring Subjectivity, Empathy, and Shame. Oxford: Oxford University Press.

  • C. Daniel Batson

    Reply

    Reply to Comment by Christopher Merwin

    I welcome not only Chris Merwin’s generous acceptance of my argument (something I wouldn’t do) but also his desire to take the argument further “by making use of the rich history and practice of phenomenology.” As the subtitle of What’s Wrong with Morality? says, mine is a social-psychological perspective. I hope this perspective adds to our understanding of what’s wrong with morality, but I’m not so delusional as to think that it offers all the insight we need. A more comprehensive view of our moral failures would need to take advantage of a whole host of other perspectives, including phenomenological ones.

    Chris specifically wants to enlarge the scope of my examination by asking both where our moral principles, standards, and ideals come from, and how they change. His focus is on social, historical, and interpersonal factors operating in “our shared community of space and community of time,” including “communities of ideas”—“historical receptions and interpretations”—that present us not with a “single univocal morality” but with the challenge of dealing with a multiplicity of moralities.

    Chris is right that my focus is on the psychological processes, especially the motivational and emotional ones, that all too often enable us to fail to live up to our personally held moral principles, standards, and ideals, whatever they may be. Being a social psychologist, I consider these psychological processes in the social context as perceived by the individual. Specifically, chapters 2 and 5 give attention to the socialization processes through which we acquire our moral standards, including their multiplicity and some of the psychological factors that work for and against moral change. Chapter 3 considers some key social and cultural pressures on our morality. My own research, however, has focused on how it is possible for us to fail to live up to our standards even when both we and our communities speak with a clear voice about what is right for us to do in a given situation—and when we aren’t under strong external pressure to do otherwise. Thus, my focus is downstream from Chris’s. As he says, our different foci are not in conflict but complementary.

    At the risk of introducing a discordant note into all this harmony, let me raise a concern about Chris’s suggestion near the end of his comment that where a phenomenological approach may be especially helpful is by acknowledging that “there is within us as human beings a striving toward what we term morality.” The blanket nature of this statement gives me pause. The striving he describes may be true for a given human being at a given point in time, but I doubt it’s always true for all of us—and for some of us, it may never be.

    How would we know if it’s true or not? This question shifts attention from the aims of a phenomenological approach to its method, which is often described as a focus on first-person experience and is often operationalized as careful, sensitive attention to how the experiencer describes the experience. To use this method to address such a question seems problematic. The problem is that when describing complex, multifaceted, multiply determined, and value-laden experiences—as our morality-relevant experiences frequently are—we often don’t know or aren’t willing to admit even to ourselves what’s really going on. So how can we trust first-person reports?

    To give an example that I used frequently in the book, think of John Dashwood’s experience in the first few pages of Jane Austen’s Sense and Sensibility as he decides how to fulfill the promise to his dying father to “do everything in his power” to care for his stepmother and half-sisters. From John’s first-person perspective, it seems that he honestly believes that the decision he reaches about what to do is moral—that it conforms to both his personal and his culture’s standards for right conduct in the situation. From our third-person perspective, we (and Austen) know otherwise. This example suggests to me that although a first-person perspective is valuable, and at times essential, internal states such as motives and values—including moral ones—often need to be understood in a larger context that includes not only antecedents but also actions. And sometimes, actions speak louder than antecedents, especially antecedents seen from a first-person perspective.

    Finally, let me express a regret. In addition to extending my analysis by use of phenomenology, Chris says that he also wants “to ask whether or not phenomenological questions about morality can withstand the pressures of social scientific evidence.” He then says he’ll defer to social scientists like myself to answer this question. For two reasons, I wish he hadn’t deferred. First, I’d like to know what phenomenological questions he has in mind. If I knew the questions, I might be able to suggest relevant evidence. Second, I think that either he or someone else more knowledgeable than I about phenomenology must judge whether the evidence provides pressure. I hope he’ll return to this issue at some point. I’d like to hear the verdict.

Response

What’s Wrong with [Normative Claims about the Self and] Morality?

or, How Can Moral Psychologists and Philosophers Get Along?

I read Batson’s work with great interest, yet crafting this response has been quite challenging because of certain methodological differences between philosophy and psychology, and I’ve done my best to be cognizant of those differences. Batson writes that his study is focused on the moral lives of ordinary people, and not the ideals of saints, psychopaths, or philosophers (19). For the sake both of myself and of those reading this essay, I hope the first and fourth categories have more in common than he thinks.

Batson has given us an exhaustive account of the “wrongs” of failed moral behavior from a psychological standpoint, or how self-deception, rationalization, and even social warfare regularly erode our attempts at maintaining consistent moral standards in our behavior. Yet I think these points also demonstrate the inadequacy of viewing morality as a solely rational process, at least as our rational faculties are defined in this project. Moral reflection goes “wrong” when we treat it as a solitary, singular endeavor, or, bluntly, when we are allowed to stay in our own heads and the powerful mechanisms of rationalization and/or self-deception take over. The mind is a powerful thing, and without some type of barometer that provides grounding and compels critical self-reflection, our moral reflections are rootless. When we ask questions about morality or what is wrong with it, inevitably we are asking questions about the self’s engagement with its world and the impact of the self’s actions on others. Moral failings occur when the self has reneged on its responsibilities to others in society. This barometer is a test not only of our morality, but also of our humanity.

Certainly such broad philosophical questions go beyond the stated scope of Batson’s analysis, but interestingly, as the book continues, we start to creep back into this territory (6). He tries to demarcate some of these methodological issues early in the introduction, explaining that his project relates to the dictionary definition of morality as conduct, or how “our morals affect our behavior,” particularly with reference to the different sources of moral motivation (3). In its reliance on empirical data, he argues that his project is descriptive and not normative (6), yet the very framing of his question that something is “wrong” with morality—in terms of either motivation or execution, or both—is in fact a normative statement. Surely, in this kind of discussion of morality we do not need to go down the proverbial rabbit hole of why utilitarians do not agree with deontologists and vice versa, and their differences in how they define the good. In the spirit of Rawlsian pluralism, that conversation may no longer even be necessary.

Yet if we are to talk about a morality that has failed, that is inevitably a normative conversation of a different kind, as it requires certain claims about human selfhood, about what we should reasonably expect that we can and should achieve, and about how we view the parameters of moral reflection going forward. Theorists in philosophy would associate these kinds of questions with meaning-determining normative statements. The term relates to Wittgenstein’s “language-game” theory, which holds that the meaning of a term depends on implicit rules for how the term is to be used. Again, this causes me to circle back to my initial claim that the very nature of this project (i.e., “what is wrong with morality?”) demonstrates its normative character. The very asking of the question requires us to expand the scope beyond what we are already doing to what we need to do and ultimately, who we are and the tools available to us to figure this out. The scope of moral reflection must be expanded beyond the self/ego’s understanding of moral rules. In fact, Batson’s many studies have shown us that this is precisely where our moral decisions go awry, as we use cognitive mechanisms to explain away the liberties we are taking in our execution of moral principles. As guides for action, moral rules can only be evaluated in the larger context of their practice, and this inevitably involves a deliberation about the self in the larger context of its world—and more specifically, what we ought to expect of the self.

Even Batson seems to recognize this on some level, as the suggestions offered in chapter 8 on how to treat our moral maladies include prescriptions, which again relate to embedded normative claims about moral expectation (200). In spite of himself, his project veers into normativity and, even further, into larger expectations about the relationship between the self and its world. This is not to point out a shortcoming of his project but instead to praise it for its depth, although I might need to convince him that this is a good thing. Given that he has his own misgivings about his method and analysis, perhaps I can nudge him a little more to my side, or perhaps we can think further about where our disciplines come together (7).

Batson’s rejection of “normative” assertions seems to be associated with his lack of confidence in conclusions that use reasoning other than empirical data. Again this is a matter of methodological differences between disciplines, yet I think it is interesting to see how his analysis still opens the door for additional consideration of philosophical claims and methods, including phenomenological reflections of self and world and certain normative expectations that are already embedded in language.

The example of the high school student in Georgia at the time of desegregation is important not only for this text, but also as a means of demonstrating how some of these lines merge. In spite of his culture and background, the white student in the Coles study recognized that his friends’ racial abuse of the black student was wrong and told them to stop. He explained that he just saw a “kid,” presumably much like himself, that “something in me began to change,” and that he was compelled to defend the boy (221). As Batson explains, our goal for moral integrity is to internalize moral standards to the level of integration, along the lines of Aristotle’s virtuous person or Colby and Damon’s moral exemplar (202). Yet Aristotle would explain the development of moral character as a process of habituation and action, and this student’s outspoken defense of the other was spontaneous and surprised even himself. Further, it came in the face of the situational pressure of the racist culture of his peers. I think the key to this story is the boy’s recognition that the black student was just a boy like himself.

The acknowledgment of his own moral obligation was made possible by seeing himself in the other person and vice versa. This is a very significant concept that relates to other points Batson explains in his research, namely his suggestion of perspective taking as a stimulus to moral integrity (117), and several discussions throughout the text regarding the use of a mirror in experiments (e.g., 108, 214). Batson argues that both of these strategies seemingly increased subjects’ self-awareness and therefore made them more likely to engage in morally sound behavior. Further, this recognition of self and other, and similarly of the moral obligations indicated by the other (and the other-ing of the self), seems to touch on many of the prescriptions Batson suggests as treatments for moral maladies, including recognizing moral relevance, increasing perception, strengthening moral motivation, and broadening one’s outlook, among others.

The mirror concept is not only an efficient strategy with reference to these empirically justified postulates in psychology. There is a long history of figures in philosophy who explain similar concepts of the existential/ontological recognition of self through the Other and the resulting moral obligations. Batson’s analysis above leaves us with two great insights: one, morality as perceived through the self/ego’s cognition of rules is inadequate; and two, a key strategy in achieving moral breakthroughs is to undergo a reflective process of self-consciousness as facilitated by the relationship between self and Other. This opens the door not only for moral insight, but also for insight into one’s own identity. Hegel wrote that “consciousness of an ‘other,’ of an object in general, is itself necessarily self-consciousness, a reflectedness-into-self, consciousness of itself in its otherness” (Hegel 1807/1977, 102).

Emmanuel Levinas furthers this basic model of self-other recognition to identify moral consciousness as a function of self-consciousness, in that the very notion of “subjectivity is the other in the same” (Levinas 1998, 25). This constitutes not only how we view ourselves in relationship to others, but even how we feel within our own skin: there is an aspect of our being that is fundamentally beyond being itself in terms of cognition, mastery, or definition. Levinas describes this faculty of otherness not as a stable give-and-take or “reciprocity,” but as “restlessness” (Levinas 1998, 25), and it is ultimately the structure that signifies my “responsibility for the other” (Levinas 1998, 26). Think again about the example of the boy in Georgia: his defense of the other boy was not the result of a careful deliberation about rules and principles; it came in the instant of recognizing himself in his Other, in a way that was immediate and pressing, but also uncertain and destabilizing, as he described it as “the strangest moment of his life” (221). This quality of restlessness resists the “moral myopia” Batson describes, where we are all too quick to allow self-interest to cloud moral principles (120). When moral action is defined in clearly delineated moments of self-rule-action, the possibility of misfiring increases. But when it is an obligation that calls us, perhaps even disorients us, it is not so easily manipulated. Yet if we do not learn of our moral obligations through cognition and rules, then we also have to identify the tools we have available within experience that provide this ground for ethical awareness.

Ludwig Feuerbach also explained the dual nature of human self-consciousness, tying it both to our opportunity to realize our human potential and to our ethical obligations to other human beings. Following the aforementioned Hegelian model, he explained that what defines us as human beings is our ability to think in the “inner” and the “outer,” or the “I” and “Thou”: “man is himself at once an I and Thou; he can put himself in the place of another” (Feuerbach 1841/1957, 2). Feuerbach explains how certain emotions and experiences act as phenomenological clues that make us aware of this self-othering aspect of our consciousness. These clues demonstrate the unusual ways in which intimate and personal experiences also distance the “inner” from the “outer” life and literally objectify the self to ourselves.

Similar to Levinas, Feuerbach explains selfhood as constituted in the fact that we are fundamentally vulnerable and capable of being affected, even within our own structure of self-consciousness. He also explains our ability to learn of this through sense-perceptibility (Sinnlichkeit) and feeling. Our self-consciousness is made possible through this exercise of self-reflection and reception, if not even passivity. We understand our humanity in the context of our characteristics and attributes, but when these characteristics are in effect, they demonstrate an autonomy beyond our intention or control: these “constituent elements of our nature” are also those “to which he can oppose no resistance” (Feuerbach 1841/1957, 2). In his description of falling in love, he writes: “which is the stronger—love or the individual man?” He also mentions similar experiences like being captured by a musical piece that moves us, or the notion of being lost in thought in a daydream. These aspects of human nature exceed our abilities of control and mastery, but nonetheless comprise the very source of our possibility for growth and self-transformation. For Feuerbach, man is “at once an I and Thou; he can put himself in the place of another . . . his species, his essential nature, and not merely his individuality, is an object of thought” (Feuerbach 1841/1957, 2). This reflective process of self-awareness that makes me aware of my own identity is fundamentally related to my relationship with other human beings and how I ought to act as a result. Feuerbach explained actions like violence or fanaticism as the result of devaluing this common kinship among human beings. We are made aware of this process and our resulting responsibilities through reflection and relation, not clear cognition, self-mastery, and classification.

This is the aspect of Batson’s study with which I struggled the most. These figures in philosophy would describe moral responsibility as an inseparable aspect of our human identity. While we inevitably fall short in the execution of our moral obligations, the appeal to our being or nature is what makes us keep striving. Moral expectation is inexorably bound to ontological claims about who we are as persons, and we also have certain phenomenological clues, as given through intuition, emotion, the face of the Other, and the like, that help to demonstrate our moral obligations. These clues demonstrate a similar process of othering that takes place within the self, but as in the self-other discussion of the boy in Georgia, awareness only takes place as part of a reflective process between self and other, presence and absence. Further, on a “rational” level, the boy referred to the mores he knew as a product of the racist culture: the normalcy of the n-word, the expectations of his friends, and so on. He could profess to be a moral person broadly speaking, as could his peers, but the cognitive mechanisms of rationalization and self-deception could exempt him from treating the black student appropriately. However, the realization brought about by the reflective self-consciousness of seeing the other student as himself—even in spite of himself—impressed his moral obligation upon him, and he defended the black boy. He identified no clear moral rules or obligations as the reason for his actions, saying only that “something in me began to change” in that moment of recognition, and he reacted accordingly. Clearly this is an important anecdote for the book for the reasons mentioned above, yet it does not seem to meet Batson’s criteria for “moral” action as exclusively intentional and goal-directed behavior (19).

On some level, this seems to coincide with Hoffman’s notion of empathy-based morality, where empathy has the potential to transform abstract moral principles into “prosocial hot cognitions . . . thus giving them motive force” (219). Yet Batson also argues that altruism and morality have no necessary connection and might even compel us to immoral behavior (219). I do understand his concerns regarding altruism and partiality, in that if I feel a particular affection or affinity for a person or group it might cause me to bend certain standards to help them (33). However, I think that even within the examples given in this text, his rigid definitions of these categories obscure how other factors, such as emotions like empathy and concern, serve as key animating forces in moral action.

As I read this book, I frequently found myself returning to the same concern: one of the main issues that is “wrong” with morality is how it is understood—and Batson’s own definition relates directly to the problem. His strict definition of actions as being moral only when they are intentional action-states that correspond with clearly delineated cognitive principles or rules is precisely what allows them to be so easily manipulated; one might say that rules are made to be broken. But when we look at moral actions and responsibility more broadly, in relationship to self-identity as well as the relationship between the self, world, and others, morality is not an option, but an aspect of both being and be-ing. It is part of my existential orientation in the world; to deny my obligation to the other, or to ignore the phenomenological clues within experience that indicate the other, is also a denial of the most basic aspects of self. This concept of moral identity situates the very notion of self in the context of its passivity and dependence on other things, both within oneself and in the Other. When morality is defined in the context of clearly defined rules of action—or only as something we do—it sets up a structure of intention or option that I do not think is conducive to true moral thinking, or to the internalization Batson discusses of the Aristotelian moral virtuoso. Also recall that Aristotle’s understanding of eudaimonia suggests that we find true happiness when we engage in the kind of actions that are conducive to our nature and the natural ends to which we are inclined.

What we learn from the examples of Eichmann, My Lai, and others is that we need to identify mechanisms that prevent our moral codes from going haywire and justifying terrible actions. The example of Eichmann is a prime case of morality’s failings. In interpersonal relations, he was perfectly pleasant and acceptable, a reasonable family man who worked hard and did his job, even if it was difficult. He was the “‘good man in a bad world’” that Saroyan discusses (11). Eichmann was not a sadistic madman on the fringe of society, but a quiet family man who, in his own words, “did his job.” In so doing, one might even say that he employed an ethical system that appeals to the moral values of principlism: in this case, tradition, order, hard work, and respect for law and authority (35). In other contexts, these moral principles might be described as reasonable and necessary components of a happy and stable society. But here, they were employed as the justification for participation in one of the greatest atrocities of modern history. What was missing from Eichmann’s moral reflection was the aforementioned “barometer,” which does not allow the self/ego’s complex mechanisms to explain away bad actions.

In Hannah Arendt’s (1963) study of the Eichmann trial, she famously described evil as “banal” not to undermine the severity of the events of the Holocaust, but to highlight how it had to be subsumed within the “ordinary” aspects of society in order to operate. Behind the orders of the high-ranking officials who made the decisions were the thousands of workers like Eichmann, who actually had to facilitate the basic operations, the seemingly ordinary and everyday aspects of their work as train conductors, office bureaucrats, medical professionals, engineers, etc. Arendt was particularly interested in the way Eichmann forced himself to think in a way that blocked out these terrible events in spite of his obvious participation and complicity.

One of the examples Arendt provided was the “language rules” that regulated the terms used in all of the formal correspondence (Arendt 1963, 80). Rarely were the explicit terms of “killing” or “extermination” used to describe the activities of the Holocaust; instead the approved terms were such phrases as “evacuation,” “special treatment,” and even “change of residence.” In these events, language and thinking itself were hijacked so Eichmann and others could put on the blinders and exist in isolation, convincing themselves that they were doing their jobs in a vague “final solution,” rather than participating in acts of genocide and murder. When their actions were given alternate names, this obscured the true nature of what they were really doing, removing their interaction with the victims and finding ways to make the situation itself bureaucratic and ordinary. And the only way to make such an extraordinary situation ordinary is to hijack language so that it stays within a very specific and immediate trajectory and does not veer over to where it should—and where it would, if left on its own—which would illuminate the greater impact these actions had on others. It was not “killing” [innocent people], but implementing the “final solution”; at the trial Eichmann could not say that he was “not guilty” and leave it at that, because he knew it was not true; he had to add the qualifier that he was “not guilty in the sense of the indictment.”

Again, this is where the significance of the above language rules is so telling: had the original terms describing the actions been used rather than their sanitized versions, thinking would have forced the players involved to face what they were doing. It also furthers the “banality” idea: this was not an unapologetic “evil” whose agents could be honest and open about what was going on; for fear of a confrontation with their moral conscience, they deliberately changed the terms of language, and therefore how they thought about the situation and particularly the others involved, in order to establish an alternative narrative for what was actually taking place.

In Judith Butler’s (2011) analysis of Arendt’s book, she explained that Arendt had observed “a new kind of historical subject had become possible with national socialism” in which it was not necessary to “think reflectively about one’s own action as a political being, whose own life and thinking is bound up with the life and thinking of others.” Butler explained that while Eichmann knew what he was doing and engaged in “conscious activity,” it was not actual thinking in the fullest understanding of the term. She wrote that [Arendt] “insisted that the term ‘thinking’ had to be reserved for a more reflective form of rationality” (Butler 2011). This echoes Batson’s description of the accounts of the German Reserve Police Battalion 101, who said that “at the time we didn’t reflect about it at all” and “tri[ed] not to think, period” (12).

Arendt had significant criticism for Eichmann’s claim “that he had lived his whole life according to Kant’s moral precepts,” particularly because Kant’s moral philosophy and understanding of practical reason “rules out [the] blind obedience” that would be necessary to willfully participate in the system and ignore the larger framework of one’s actions (Arendt 1963, 120). This also reminds us of Kant’s first formulation of the categorical imperative, where he argues that we may act only according to maxims that we could will to become universal laws. The judgment of an action is based not only on my own assertion, but also on the reflective deliberation about whether it is proper and just for everyone else.

But in this deliberation, what is the assurance that someone will think beyond their personal frame of reference? As Batson noted, these perpetrators likely saw “themselves as highly moral people responding to the dictates of conscience” or orders, circumstances, etc. (14). But interpreting our moral dictates is not only a solitary cognitive process; it also involves a process of reflection. As described above, Kant’s explanation of moral duty is grounded in the fact that we are all rational, and therefore deserving of the same treatment and the respect of our dignity and autonomy. But the way in which we test a potential maxim for action is through the principle of universalizability, where we essentially have to imagine whether a reason for action would be valid in other circumstances. All of these concepts are directly related to the “reflective form of rationality” Butler describes. Our moral rules—and perhaps more significantly, the understanding of how they are to be executed—do not exist in a vacuum. The morality of our actions can only be evaluated in the larger context of the world in which we find ourselves.

Butler argued that the type of thinking Arendt had in mind in this work is both an “exercise in judgment” and “implicated in a normative practice.” In addition to Kant, this idea also relates to a number of other thinkers. In the Philosophical Investigations, for example, Wittgenstein explained the meaning of language as part of a larger discursive process: “the speaking of language is part of an activity, or of a form of life” (Wittgenstein 1949/1997, 11, #23). More recently, Robert Brandom argued in Making It Explicit that language is the effort to make explicit certain moral norms that are already implicit in our society. Brandom explained that “one cannot address the question of what implicit norms are, independently of the question of what it is to acknowledge them in practice” (Brandom 1994, 25). Discursive practice is engaged in a process of “deontic scorekeeping” that is inherently social and “understood in practices of giving and asking of reasons” (Brandom 1994, 141). The Nazi documents demonstrated how they obscured their normative commitments through the manipulation of language. The words that described what they were actually doing—forcibly and violently removing people from their homes, executing women, children, and the elderly on the spot, etc.—were changed to innocuous ones like “relocation,” “inspection,” and so on, in order to sidestep the obvious moral obligations and repercussions that result from violating these implicit commitments. The use of the term “killing” immediately invites one to tabulate the score based on context: was it murder, self-defense, and so on? By changing the terms, the Nazis avoided playing the game and could employ their own moral principles of purity, prosperity, and the like at will, and without reproach. Ultimately, this social scorekeeping process that is part of our social-linguistic identity is what prevents us from evading moral responsibility under the cloak of epistemic context, self-deception, and so on.

And this is why any discussion of morality is implicitly normative, and inevitably part of a larger deliberation about the relationship between self and world. To identify it exclusively as rule-following and action-states treats it as a solely individual endeavor and removes us from the larger mechanisms in which we are already embedded—self, world, and society—that provide the mirror that forces us to look at our moral decisions and compels us to act responsibly. Batson provides an excellent study regarding the ways we act immorally. Yet even within its strictly defined parameters, the study does not end there, veering into these larger normative and philosophical issues when it considers how we might resolve these problems. Morality comprises not only rules and principles, but also their practice; it is defined not by abstract principles separate from the self, but as a way of being. The animus of moral life relates to existential commitments that are already underway and part of the social and linguistic world in which we find ourselves and view ourselves. I wonder if there are ways to incorporate these concepts more explicitly into his study, or perhaps more of the psychological methodology into mine. Or, following the model of Hegelian recognition-through-the-other, perhaps it is in the very studying of these other methods that we also learn more about our own.

 

Works Cited

Arendt, Hannah. 1963. Eichmann in Jerusalem: A Report on the Banality of Evil. New York: Viking.

Batson, C. Daniel. 2016. What’s Wrong with Morality? A Social-Psychological Perspective. New York: Oxford University Press.

Brandom, Robert B. 1994. Making It Explicit: Reasoning, Representing & Discursive Commitment. Cambridge: Harvard University Press.

Butler, Judith. 2011. “Hannah Arendt’s Challenge to Adolf Eichmann.” Guardian, August 29.

Feuerbach, Ludwig. 1841/1957. The Essence of Christianity. Translated by George Eliot. New York: Harper.

Hegel, G. W. F. 1807/1977. Phenomenology of Spirit. Translated by A. V. Miller. New York: Oxford University Press.

Levinas, Emmanuel. 1998. Otherwise than Being or Beyond Essence. Translated by A. Lingis. Pittsburgh: Duquesne University Press.

Wittgenstein, Ludwig. 1949/1997. Philosophical Investigations. Translated by G. E. M. Anscombe. Malden, MA: Blackwell.

  • C. Daniel Batson

    Reply

    Reply to Comment by Christy Flanagan-Feddon

    Reading this comment by Christy Flanagan-Feddon, I felt like a guest at a cocktail party who is approached by another guest and given a rather stern chiding, only to have to say, “Sorry, but I think you’ve confused me with somebody else.” Christy takes me to task for defining “actions as being moral only when they are intentional action-states that correspond with clearly delineated cognitive principles or rules.” I, too, would take myself to task for such a definition, and can only apologize for my apparent lack of clarity. This definition is far more restrictive than mine.

    When defining morality in chapter 1 of What’s Wrong with Morality? I deferred to the dictionaries, which typically define morality as “moral quality or conduct” (not very helpful) and define moral as “1. Of or concerned with principles of right or wrong conduct. 2. Being in accordance with such principles.” I went on to explain that at least as I was interpreting the dictionary, “conduct” can refer to a specific act or to general character, and “principles” is an umbrella term that refers not only to “clearly delineated cognitive principles or rules” but also to principles, standards, norms, ideals, and virtues that need not be explicit, conscious, rational, or specific (see pp. 19–24). I was trying to cast a very large definitional net in order to include whatever guides to right or wrong conduct various people employ. Thus, although I did not explicitly describe as an example of morality the “reflective process of self-awareness that makes me aware of my own identity [and] is fundamentally related to my relationship with other human beings and how I ought to act as a result” that Christy describes, I see it as falling well within the purview of my definition.

    Perhaps her objection is that my definition also allows for many other approaches to morality, including the one she rejects. The dictionary sets no bounds on the nature or content of moral principles. It’s in this sense that I say my definition is descriptive not normative—I want to include whatever guides to right conduct a given person endorses and employs, not only those with which I agree. My interest is in why we so often fail to pursue our own moral standards and ideals, whatever they may be.

    After this awkward first exchange, do Christy and I have anything more to say to one another? I think and hope we do. She rightly notes that although I endorse the standard explanations for why we fail to act as we feel we ought—personal deficiencies (chapter 2) and situational pressures (chapter 3)—I am particularly interested in going beyond these to look at motivational and emotional processes. I suggest that these processes, accompanied by rationalization and self-deception, contribute to our moral failures because we often value our morality extrinsically rather than intrinsically. Further, I at least imply that these processes are a potential problem for any form of morality. So my ears prick when Christy speaks of a “quality of restlessness [that] resists the ‘moral myopia’ that Batson describes.”

    She suggests that “when moral action is an obligation that calls us, perhaps even disorients us, this is not so easily manipulated,” and then goes on to identify “the tools we have available within experience that provide this ground for ethical awareness”—specifically, our ability to put ourselves in the place of another. Hearing this, the scientist within me wants to know whether this approach to morality really is less vulnerable to the motivational and emotional processes that contribute to our moral failures than are other approaches. Does it, for example, discourage the moral hypocrisy—motivation to appear moral while, if possible, avoiding the cost of actually being moral—I discussed in chapter 4?

    To know the answer to this question, it seems necessary to either identify or create people who approach a given moral situation in the manner she describes, then look at the effect on their action of providing them “wiggle room” (the chance to appear moral without having to actually be moral). The research I can think of that comes closest to doing this is presented near the end of chapter 4, where two experiments are described (117–20).

    The first of these experiments used a task-assignment procedure that colleagues and I have employed when testing for the existence of moral hypocrisy. In this procedure, each research participant is given the chance to assign him- or herself and another same-sex participant to tasks. One of the tasks is clearly preferable to the other (i.e., one has positive consequences—raffle tickets for a $30 gift certificate; the other has neutral consequences—just information—and is described as rather “dull and boring”). The other participant thinks the assignment is being made by chance. To provide wiggle room, participants (who are alone when they make their assignment decision) are told that most people think the fairest way to assign the tasks is to give each person an equal chance at the more desirable one by, for example, flipping a coin—and participants are provided a coin to flip if they wish. Evidence for moral hypocrisy appears when, among those who choose to flip the coin (appearing fair), significantly more than 50 percent assign themselves to the positive-consequences task (indicating that some avoided the cost of actually being fair).

    The tool that Christy identifies as providing the ground for ethical awareness is what social psychologists like myself call an imagine-self perspective—imagine yourself in the other’s situation. To test whether this perspective reduces moral hypocrisy, the first experiment had some research participants perform a brief imagination exercise prior to making their task-assignment decision. They were asked to “imagine yourself in the place of the other participant” for one minute, then write down what they had imagined. Other participants didn’t perform this imagination exercise.

    Results of this experiment revealed that the imagine-self perspective had only a limited (and not statistically significant) effect on the fairness of the task assignment. This perspective somewhat reduced the percentage assigning themselves the positive consequences after flipping the coin (67 percent compared to 85 percent for those not asked to do an imagination exercise), and it somewhat increased this percentage among those who didn’t flip (89 percent compared to 64 percent).

    Why did the imagine-self perspective have so little effect? Are the scholars and practitioners who have extolled the virtues of this form of perspective taking simply wrong? Perhaps not, at least not entirely. The task-assignment procedure used in this first experiment poses a moral dilemma in which the pre-assignment plight of both participants is exactly the same. Each faces the prospect of being assigned to either the more desirable or the less desirable task. In such a dilemma, to imagine myself in the place of the other participant, which is the same place I’m in, may not lead me to focus on the other’s interests, stimulating moral integrity. Instead, it may lead me to focus even more on my own interests. This may be why seeing myself in the other did little to reduce moral hypocrisy.

    If the limited ability of an imagine-self perspective to reduce moral hypocrisy was due to both participants having the same pre-assignment plight, then an imagine-self perspective should be more effective when the other’s initial situation is worse than my own. When the other is clearly disadvantaged compared to me, imagining myself in his or her place may provide new insight into what it’s like to be in such a position, and an imagine-self perspective may stimulate moral integrity. For example, when considering whether to vote for an increase in my own taxes in order to fund a job-training program for the unemployed, to imagine myself in the place of someone without work may stimulate moral action.

    Pursuing this logic, participants in the second experiment learned at the outset that they had been randomly assigned to an “asymmetrical condition” in which they would receive two raffle tickets for each correct response on their task and the other participant would receive nothing for a correct response. The participants were then told that if they wished, they could switch to a “symmetrical condition” in which each participant would receive one raffle ticket for a correct response.

    Participants who, before making the decision about whether to switch to the symmetrical (1-1) condition, imagined themselves in the place of the other participant were far more likely to make the switch (83 percent) than were participants who made the decision without the imagination exercise (38 percent). So, in this situation, an imagine-self perspective may indeed have reduced moral hypocrisy and stimulated a desire to be truly fair—moral integrity.

    But there is a second possibility. Perhaps imagining oneself in the other’s situation made the fairness of the symmetrical condition so obvious that it eliminated wiggle room, rendering it necessary to opt for the switch in order to appear moral. If so, participants who imagined might still be motivated by moral hypocrisy. We need further research to know which possibility is correct.

    These two experiments only scratch the surface of what we need to know about the effects on moral motivation of putting yourself in the other’s shoes. Still, I hope they’re a start down a research path that philosophers and psychologists can—even should—walk together.


Response

A Phenomenological Research Approach into the Psychological Essences of Morality

It is not easy to summarize and develop the vast theory of morality, and yet Dan Batson has done exactly that with ease and eloquence, as if for him it's all child's play. Yet he cautions us time and again that he lacks the research and data to drive all his arguments home, insisting that the framework and theory are a first sketch, not a finished canvas, and spurring us forward to gain more insight into the fascinating paradoxes of the human moral mind. And so we must. But before we do, we might consider whether the quantitative psychological research approach, especially in the controlled experimental settings of the laboratory, will give us all the insights we are looking for.

Batson paints a rich picture of human morality and claims that we are not merely good people in a bad world. His question is not whether we are moral (most of us are, most of the time) but how and why (18). And in addressing it, he is not after a prescriptive approach to morality (determining which moral standards are "right") but sticks to a descriptive approach.

As in his work on the empathy-altruism hypothesis, Batson's main focus is on goal-directed motives rather than behavioural outcomes. What he is after is an understanding of the underlying psychological process that leads to moral behaviour. According to him, morality depends on the interplay of value, emotion, motivation, and behaviour.

Two types of goals and four classes of motivation play an important role in the picture that is being painted. Ultimate goals—the value states we seek to obtain or maintain—define motives, and all motives have their own ultimate goals. Instrumental goals serve as stepping-stones to ultimate goals. Unlike ultimate goals, instrumental goals are induced by extrinsic values, which are valued as means to other ends. Only the ultimate goal of promoting a moral principle (e.g., being fair or doing good) is a truly moral motive according to Batson, and it is induced by (violations of) intrinsic values.

At the same time, both types of goals can have unintended consequences and acting morally can be one of them. It is for this reason that Batson stresses the importance of focusing on goal-directed motives instead of behaviour or consequences. The classes of motives Batson distinguishes are egoism/self-interest (the desire to increase one’s own welfare), altruism (the desire to increase another person’s welfare), collectivism (the desire to increase the welfare of a group), and moral integrity or principlism (the motivation to uphold a moral ideal). Only this last motive is considered a moral motive by Batson. But principlism does not seem to show its face frequently in the human motivational repertoire.

Batson also adds emotion to the model. He describes the differences between end-state emotions and need-state emotions. End-state emotions are triggered when one obtains or loses a valued state. Need-state emotions heat up the process and express our care. They appear when there is a discrepancy between a current and a future state. Batson defines moral emotions more precisely than others have done so far: moral emotions should arise in response to the violation of, or a threat to, some personally valued moral standard, principle, or ideal. They are need-state emotions which produce the truly moral motivation to uphold the threatened standard, principle, or ideal. End-state moral emotions, such as happiness at seeing a standard upheld, exist too but don't directly evoke motivation to produce moral behaviour.

A lot of what looks like moral behaviour at first glance turns out to be moral hypocrisy. And hypocrisy is fed by our capacity not only to deceive others but also to deceive ourselves. What becomes clear is that we want to appear moral, not only to others but also to ourselves.

Apparently, we do value morality. The question is: why do we fail to act in accord with the moral principles or ideals we embrace, even when we know we should? This question is vast, as is the theory developed in this book, and it merits deep investigation.

I am not the first to notice the possible limits of the studies conducted so far that underpin some of Batson's major points and questions. Most of the studies which Batson describes in detail are experimental studies of narrow populations (mostly undergraduate students) in controlled laboratory settings. And as far as Batson is concerned, many more of these controlled experiments need to be done to, for example, disentangle moral emotion from other types of emotion, which are in fact not moral.

An example of a possibly, but not necessarily, moral emotion is anger. Moral anger, according to Batson, is "anger provoked by the perception that a moral standard (principle, ideal) has been or will be violated" (156). This anger is different from personal anger (anger due to not being able to secure our own interests) or empathic anger (anger when a cared-for other's interests are being harmed). And it is exactly here, in the mentioned studies of anger, that one can see the limits of laboratory experiments. Although the studies report statistically significant differences in self-reported anger across different fairness conditions, the anger itself is low to neutral in two of the three reported studies, with means ranging from 1.31 to 3.23 on a 1 (not at all) to 7 (extremely) scale. The one study in which reported anger at least passed the middle of that scale (means ranging from 2.68 to 4.28) concerned anger at the torture of a cared-for or non-cared-for other. An explanation might be that in this last study the stakes were a bit more morally taxing, and thus the situation came a little closer to "lifeworld" moral dilemmas than the distribution of raffle tickets or the assignment of virtual reality vacations. And it is in these day-to-day dilemmas, where stakes are real, and possibly high, albeit complicated, that we want to know how and why we can uphold our moral standards.

I recently encountered such a complicated situation myself, studying empathy among service delivery personnel in South Africa. One of the people with whom I spoke described a critical incident of being the main professional caretaker for a patient with an advanced form of lung cancer. Her patient, a grandmother with several adult children and grandchildren, living together with her extended family, was at a stage where she had little physical strength left. Unfortunately, her family did not seem to recognize the severity of this situation and insisted that the patient keep carrying out her household chores. Specifically, a daughter and a pregnant granddaughter, who was—according to my interviewee—in perfect physical condition herself, insisted that the patient keep preparing their meals. The daughter was working during the day, and the granddaughter felt incapable because she was pregnant and needed rest. My interviewee expressed strong anger when describing this situation, and she could clearly observe how this anger came to be: she was angry at the daughter and granddaughter for two separate reasons. The first reason she expressed had to do with them harming her cared-for patient (empathic anger), and the second reason came out of her conviction that no children should ever treat their (grand)mother that way (moral anger). I expect that our behaviour in morally challenging circumstances is often fuelled by multiple motives. And if it is true, as Batson expects, that a combination of factors is to blame for our lack of moral behaviour, then we need to study those factors together. In the remainder of this essay I would like to suggest a way in which we might be able to do so.

One method of inquiry which seems particularly valuable to me in the attempt to gain an in-depth understanding of the psychological processes underlying moral behaviour is the descriptive phenomenological method for qualitative research as introduced by Husserl (1962) and developed for psychological research by Giorgi (2009). This method focuses on the structure of experience, the organizing principles that give form and meaning to the lifeworld. It attempts to describe the essence of experience through the process of eidetic reduction, that is, reduction to the eidos or essence. Essences concern the a priori, essential structures of subjective experiences or "that without which an object of a particular kind cannot be thought, i.e., without which the object cannot be intuitively imagined as such" (Husserl 1973, 341).

Moral motivation is a potential phenomenon, in the sense that it can be accessed as a lived conscious experience (Moran 2000). And although Batson shows us that we can question that very fact, pointing out the often unconscious or semiconscious way we operate on moral grounds, I have no doubt that when one deliberately turns one's attention to moral incidents, a conscious, and often even embodied, experience occurs. In approaching research with the descriptive phenomenological method, we would not observe moral motivation from an objectivistic analysis, but from the direct experience of the agent facing his or her moral lifeworld. In my humble understanding, moral motivation is in essence a deeply personal and subjective reality, which varies not only with character and over time but also depends largely on the right social circumstances.

Upon a closer look at Batson's exploration into moral motives, it becomes clear that he asks questions of a qualitative, rather than a quantitative, nature. Batson is not asking "How much morality occurs?" or "How many people are guided by moral principles?" Instead, he asks "What is it like to experience such a phenomenon?" and "How does it happen that moral ideals do not translate themselves into moral behaviour?" If one asks a qualitative question, wouldn't a qualitative method of research be most appropriate?

We can measure the extent to which someone is intensely angry because his or her personal interests have been violated, or angry because his or her values or principles are not respected. Yet that intensity dimension tells us very little about the difference between these two types of anger. By following concrete descriptions of the experiencer within his or her moral lifeworld, a researcher has the possibility to observe the inner dimensions of that anger. One deals with human beings as humans, not reducing them to the level of "things" by measuring determined reactions from the perspective of an "independent" observer. Instead, the researcher opens up the meaning of intentional responses from the perspective of a participant observer, following the experiential flow of the person being observed. Husserl (1983) believed that, by collecting a great number of variations of these "flows," descriptions can be raised to an eidetic level, with a series of ideas, essences, or invariant meanings capturing most of the variations of these flows (Giorgi 2009). Just as in experimental research settings, specific conclusions can then be drawn about the phenomenon in question.

It would take too long to explain the full method here, but let me give a short overview of it and point out where I see the possibilities that make it particularly well suited to the questions Batson asks. First, in designing the research, a researcher needs to take into consideration how a phenomenon is spontaneously lived in the lifeworld. This determines the circumstances in which one wants to question the phenomenon. The research situation needs to be as close as possible to this lifeworld situation. In other words, a researcher wants to be looking at moral motives in situations as close as possible to where those motives naturally occur, or, perhaps, could occur.

Without a doubt, phenomena are complicated in natural settings, which often makes it difficult to know the role of separate variables. On the other hand, laboratories can be controlled to the extent that a study participant can find him- or herself in a situation where events take place that do not happen in daily life. What helps is to look for situations where the phenomenon stands out in a pronounced way to the participant. Actually reliving the experience is the most essential point of departure, and finding such an experience to recount is not a difficult task. We can simply ask a participant to remember one of the last times that he or she experienced the phenomenon of interest. The retrospective description (not explanation or interpretation!) becomes the raw data for the research.

It is the role of the researcher to keep the participant on track in describing the experience of the phenomenon in all its possible aspects. It is important not to lead the participant into saying things a researcher is looking for in the data. Yet a researcher has to interrogate the phenomenal experience in order to tease out all aspects under study. Directing the participant helps him or her return to detailed descriptions of the experience itself, rather than generalizations or opinions and attitudes. The aim of the research is not to deliver N=1 case studies but to analyse meaning units to a level of abstraction where they have applicability to situations different from the one in which they were obtained. Therefore, the experience needs to be described as concretely and in as much detail as the participant can manage.

The basic question can simply take the form "please describe for me a situation in which you . . ." Depending on what is being researched, this question needs to be specified. One can for instance ask about moral failures such as lying, cheating, or standing by while somebody else gets harmed. One can also ask about moral successes, such as times when the participant stood up for somebody else, had a chance to break the law and chose not to, or acted fairly when distributing something between him- or herself and others. The interviewer guides people to reexperience the inner processes leading to possible emotions, attitudes, intentions, and behavioural outcomes. On the way, many of the questions Batson needs answered can be clarified: Am I aware of my values or principles in this specific situation? Did I experience emotions and, if so, of what form? Do I have a physical memory of those emotions? Where in the body are they taking shape? What motives did I have to behave the way I did? What was the reason to engage or not? In this descriptive journey, there is no need to suggest expected results to the observer or to communicate about them. The aim of the research is to follow the interviewee's own line of thought along the lines of the phenomenon in question. A researcher is not to observe and infer, but to listen, describe, and condense. The description sought is one that is as faithful as possible to the lived-through event. The basic questions to be asked of consciousness are "What happens?" and "How does it do so?"

By keeping participants naïve to phenomenological concepts and theories, and letting them describe their experiences from their lifeworld, from within the natural attitude, the researcher reduces the chance of bias. The participant does not know what the researcher is seeking and thus does not know how to "please" the researcher. This helps the participant relate the experience in a straightforward way. In order to claim that the interview is phenomenological, the researcher has to assume the attitude of phenomenological reduction (Giorgi 2009). This means taking everything in the raw data as experienced by the participant and making no claims as to whether events really happened as described. The past personal experience and knowledge of the researcher need to be bracketed, both in the process of interviewing and in the analysis of the data afterwards.

The data we can obtain through this means of investigation are vast. Yet one of course has to be careful not to try to research the entire moral theory within a single study. As with lab experiments: the better we articulate the limits, the stronger the research. At the same time, certain questions which would have to be drawn apart in experimental studies will occur naturally together in this type of qualitative study. Take, for example, "wiggle room." We do not have to experimentally offer or withhold "wiggle room" to see its effects on hypocrisy. We can explore whether, where, and how the participant experienced any wiggle room and of what nature. We can then hear how that wiggle room was perceived and "used" in combination with self- and other-deception.

Of course, the account can be forgetful and/or distorted in other ways, but every research approach has its vulnerabilities. I think this approach is less limiting for investigating the claims about the interplay of value, emotion, motivation, and behaviour within the moral lifeworld than would be possible in controlled lab settings. Also, Giorgi (2009) points out that a subjective distortion might not reveal much about the objective event as it has taken place, but, provided that the account is honest, it reveals even more about the psychological reality. And it is this reality that is of interest in testing the theory Batson proposes. As is often the case, the best option would probably be to have both research approaches inform and reinforce one another.

Analysing the data means looking for higher-level eidetic invariant meanings in the structure of the concrete experiences. One is looking for a "central tendency," a common meaning in the variation of meanings given in the raw data. The claim being made is that the structures obtained are general in the sense that the findings transcend the situations in which they were obtained. This phenomenological insistence on eidetic intuition into essences leads to more substantially generalized findings and thus to qualitative analysis with stronger intersubjective claims (Giorgi 2009). And just to be clear: using this methodology, we are looking for psychological essences, not philosophical ones.

A first step for the researcher is to let go of the natural attitude of daily life and assume the phenomenological attitude. This means that one starts to look at the world from the perspective of how objects are experienced by consciousness, no matter whether or not the objects are the way they are experienced. The facts within this experience get transformed to generalized meanings that show the situation as it is for the experiencer. In this specific case, the meanings a researcher is looking for concern motives and emotions within the psychological process underlying morality.

Second, the researcher determines what is essential about the object. What makes this object a specific example or instance of the type of phenomenon it is? For phenomenology, the essential characteristic has to be intuited (seen by being present to) and described. This seeing occurs through the method of free, imaginative variation.

The data are first read to obtain a sense of the whole and then divided into meaning units by marking the raw data every time the researcher notices a significant shift in meaning. Meaning units are arbitrary in the sense that they serve the purpose of making the description manageable. These natural-attitude expressions are then transformed into phenomenologically and psychologically sensitive expressions. The psychological dimension of the experience has to be highlighted. On the basis of the transformed meaning units, the general structure of the experience appears. And because one reports not theories or hypotheses but only descriptions of findings, the results imply strong knowledge claims.

Importantly, within the descriptive approach, the essence is not interpreted but precisely described. A researcher does not add to, nor subtract from, what is expressed, but instead stays strictly with the given. If there are gaps in the results, more data are needed. Also, past knowledge and expectations need to be bracketed to make sure that the researcher is not evaluating the present in terms of his or her past experiences. Therefore, it is important to shift attitude in such a way that one can be fully, attentively present to an ongoing experience, rather than habitually present to it. When the essence is discovered, it is carefully described, including its relationship with other phenomena (when possible). Of course, the method of analysis itself also needs to be described in great detail, so that other researchers can replicate the study or reanalyse the findings. It is therefore helpful to lay out a detailed coding structure along the lines of the interview and the research questions.

The resulting picture is intricate, detailed, and, most importantly, based solely on the psychological reality of reliving our moral lifeworld. I believe that it has the capacity to clarify many of the processes underlying Batson's questions, such as the influence of internalization to the level of introjection versus integration, or whether principles of interpersonal morality are valued extrinsically, used instrumentally, and, as a result, are a source of moral hypocrisy, while propriety principles are more likely to be valued intrinsically, endorsed automatically and uncritically, and are thus a source of moral integrity. There are, to my knowledge, very few social psychological studies conducted this way. Yet with so many open-ended questions, and in laying out such an elaborate research agenda, Batson, and hopefully other researchers in the field of morality, might consider approaching the qualitative part of the research in this rigorous, and no less scientific, way.

 

Works Cited

Giorgi, A. 2009. The Descriptive Phenomenological Method in Psychology: A Modified Husserlian Approach. Pittsburgh: Duquesne University Press.

Husserl, E. 1962. Ideas: General Introduction to Pure Phenomenology. Book 1. Translated by W. R. B. Gibson. New York: Collier. Originally published in German, 1913.

———. 1973. Experience and Judgment: Investigations in a Genealogy of Logic. Translated by J. S. Churchill and K. Ameriks. Evanston, IL: Northwestern University Press.

———. 1983. Ideas Pertaining to a Pure Phenomenology and to a Phenomenological Philosophy. Book 1. Translated by F. Kersten. The Hague: Martinus Nijhoff. Originally published in German, 1913.

Moran, Dermot. 2000. Introduction to Phenomenology. London: Routledge.

  • C. Daniel Batson

    Reply

    Reply to Comment by Lidewij Niezink

    Lidewij Niezink expresses doubt that controlled laboratory experiments such as those I report in What’s Wrong with Morality? will give us the insights into our moral failures that we seek. As an alternative, she proposes we turn to the descriptive phenomenological method developed by humanistic psychologist Amedeo Giorgi. Specifically, she suggests that we “simply ask a participant to remember one of the last times that he or she experienced the phenomenon of interest”—“one can for instance ask about moral failures such as lying, cheating, or standing by while somebody else gets harmed” and get participants to report whether in these experiences they were aware of values and principles, whether they experienced emotions and if so of what form, what motives they experienced, and so on. She adds that while doing this, researchers should bracket their own interpretations, hunches, and hypotheses about the psychological processes involved in order not to give intentional or unintentional cues as to what participants should report.

    I find myself reacting to these suggestions with doubts of my own. First, I doubt that her phenomenological method will achieve its goal because—echoing what I said in my reply to Chris Merwin—even if we can instill in participants a sincere desire to report accurately and honestly, at least some participants may not be aware of or willing to admit (even to themselves) what their relevant values, emotions, and motives were when making a given moral decision. Second, despite a researcher’s best intentions, he or she may not be able to fully bracket expectations and interpretations.

    Further, even if this method were to produce a deep, rich, and accurate description of the experience of moral failure, I doubt that the description will, by itself, shed clear light on why we often fail to live up to our principles, standards, and ideals. My guess is that it will instead suggest a number of possible answers to this question. That is, it will provide a set of hypotheses. How do we know which one or ones are correct? Is, for example, my reported anger at a person who acts unfairly moral outrage (anger that the standard of fairness was violated), personal anger (anger that I was hurt), or empathic anger (anger that someone I care about was hurt)? Or, when I act fairly, is my ultimate goal to be fair (to uphold the standard as an end in itself) or is it to appear fair while if possible, avoiding the cost of being fair (to uphold the standard in order to gain the personal benefits of seeing myself and being seen by others as fair)? Unless I misunderstand, the phenomenological method is not designed to answer such questions. Indeed, it carefully avoids them.

    The best method I know for testing hypotheses like these is to draw inferences about research participants’ mental states from the pattern of behavior across a carefully selected set of experimental conditions. I say this despite agreeing with Lidewij that experiments, especially laboratory experiments conducted on a single population such as university undergraduates, are artificial constructions that fail to capture the richness and variety of real-world moral encounters. Where she and I disagree is that she considers this artificiality to be a major limitation of experiments, whereas I consider it to be their major virtue.

    Experiments can be thought of as caricatures. A caricature isn’t a mirror of reality but an intentional distortion designed to emphasize essential features. Yet, if done well, a caricature reveals reality better than a mirror because the essential features stand out. We create experimental caricatures with a specific purpose in mind—to test one or more hypotheses about why some natural phenomenon occurs (in the present case, our moral failures). The hypotheses dictate the essential features that must be included in the caricature (e.g., the different conditions that would allow us to infer the nature of the anger at unfairness, or to infer whether the ultimate goal of acting fairly is to be fair or only appear fair).

    In experiments with human participants, nonessential features are excluded by one of two methods. Personal (dispositional) nonessentials are neutralized within the limits of chance by random assignment of participants to the different experimental conditions. Environmental (situational) nonessentials are neutralized by holding them constant (e.g., presenting all participants with the same unfair act, or with the same opportunity to act fairly). These experimental controls allow us to focus on what’s essential (e.g., how reported anger changes, if it does, when the person treated unfairly is oneself, a cared-for other, or a stranger; how acting fairly changes when one is given sufficient “wiggle room” so that it’s possible to appear fair without having to actually be fair).
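    A toy illustration of the first of these controls may help. This is a generic sketch, not the procedure used in the book: it simply shows how random assignment leaves dispositional differences roughly balanced across conditions, so they cannot masquerade as an effect of the manipulation.

```python
# Generic sketch: random assignment balances a dispositional trait across conditions.
import random

random.seed(1)
participants = [random.gauss(0.0, 1.0) for _ in range(200)]  # hypothetical trait scores

random.shuffle(participants)                 # random assignment to conditions
condition_a = participants[:100]             # e.g., a "wiggle room" condition
condition_b = participants[100:]             # e.g., a "no wiggle room" condition

mean_a = sum(condition_a) / len(condition_a)
mean_b = sum(condition_b) / len(condition_b)
print(f"mean trait score: condition A = {mean_a:+.3f}, condition B = {mean_b:+.3f}")
# The two means differ only within the limits of chance, so any behavioral
# difference between conditions can be attributed to the manipulated feature.
```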

    To say that experiments are caricatures means that they aren’t part of the naturally occurring “real world.” The constructed conditions aren’t likely to be part of our lived experience outside the lab. Isn’t this a problem? Only, I think, if the lab caricature fails to confront participants with the essential, hypothesis-relevant features of the real-world experience.

    In the experiments described in What’s Wrong with Morality? participants aren’t asked to imagine some experience and report how they would feel or act, as are participants in so-called scenario studies. Instead, by using deception, the participants actually undergo the experience. They are led to believe that another person really has treated them or another person unfairly—or that they really are assigning themselves and another participant to differentially desirable tasks. A cover story is provided that gives a plausible reason for what they experience yet hides the true purpose of the research. Through use of these deceptions, which are revealed and their purpose explained in a one-on-one debriefing at the end of each experimental session, it’s possible to avoid the problems noted earlier of participants either not knowing or not being willing to report their true reactions. Participants don’t report; they react to what they think are real events. And to prevent our experimenters from intentionally or unintentionally cueing participants how to respond, the experimenters are kept unaware of each participant’s condition until after all measures are taken.

    I think that as long as the essential conditions needed to make a valid inference are present and our research participants experience the conditions as real, the lab’s artificiality is fine, even desirable. Its controlled artificiality allows us to create a caricature that shows the nature of the relevant internal state more clearly than does the real world.

    But, you may ask, what about the generalizability of our experimental findings to other populations? To create the necessary features of the caricature, experimental procedures are almost always constructed with the specific research population in mind, and these procedures may be inappropriate for other populations. For example, we used raffle tickets for a $30 gift certificate as our positive consequences. We knew from pretesting that these tickets were desired by our undergraduates. However, these tickets almost certainly wouldn’t be of interest to middle-aged executives. Given this, can conclusions drawn with the particular procedure and population be applied to other populations?

    Without running an appropriately adapted test on the other population, we can’t be sure. Still, tentative generalization seems justified as long as there’s no good reason to think that the psychological processes in question differ across the populations—that is, no reason to think, for example, that members of the different populations feel differently when they see another person unfairly favor him or herself, or that members’ motivations for treating a stranger fairly differ. There are, of course, conspicuous cases in which generalization later proves premature—the initial use of blood transfusions without attention to blood type being one. Clearly, generalizations should be made with care and caution.

    Is there no role for the phenomenological method Lidewij advocates in the research process I’ve described? I think there is, but not at the hypothesis-testing stage. Rather, her phenomenological method may be valuable at the hypothesis-generation stage. To have rich, deep insight into the phenomenon in question seems essential if we are to generate plausible explanations to test. Let me add, however, that phenomenology isn’t the only promising source of such insight. Literature, drama, history, the news, and so on—as well as observation of and reflection on our own lives and the lives of those around us—may be less formally structured sources of insight, but still highly valuable ones. And, whatever the source, our explanations—hypotheses—need to be tested. For that, I don’t think you can beat a well-designed experiment.

Response

What’s Wrong with Principlism?

Daniel Batson demonstrates once again why he is one of the greatest social psychologists of our time. Over the last forty years, Batson has employed various empirical methods and has drawn from a variety of different academic disciplines to craft his works. This pluralistic approach is also evident in Batson's consistent habit of noticing that scholars operationalize the terms they use in different ways. This is recognized, for instance, in Batson's writings on empathy. Unlike other theorists, Batson recognizes and embraces the fact that there are a number of "things called empathy" (Batson 2009, 3–15), and he defines empathy primarily as empathic concern, namely "an other-oriented emotional response elicited by and congruent with the perceived welfare of a person in need" (Batson 2014, 41). This current work on what Batson calls our moral maladies is similarly careful. In chapter 1, Batson begins by stating precisely what he means by a number of terms that are often employed to signify different things. Perhaps most importantly, he stipulates what he means by "morality" by adopting two dictionary definitions of the word "moral," that is: (1) of or concerned with principles of right or wrong conduct, and (2) being in accordance with such principles. He then offers nine points of clarification and elaboration about these definitions.

I find such caution to be somewhat rare among social scientists, who often assume uniformity concerning the meaning of words. Clarifying what one means by the terms he/she uses and noting the methods or theories one intends to employ are often called starting points for investigation. By "starting points," I mean an accepted (sometimes unchallenged) belief, inclination, or understanding that shapes and guides the structure of an investigation or theory. As Lidewij Niezink points out in this symposium, two of Batson's most important starting points in this work are a focus on goal-directed motives rather than on the behavior or consequences of our actions and his division of our motives to act morally into four distinct types: egoism—motivation with the ultimate goal of increasing our own welfare; altruism—motivation with the ultimate goal of increasing another's welfare; collectivism—motivation with the ultimate goal of increasing a group's welfare; and principlism (or what he calls moral integrity)—motivation with the ultimate goal of promoting some moral standard, principle, or ideal (Batson 2016, 29). The greatest difference between Batson's classifications of motives is that the fourth type, what he calls the "truly moral" type of motivation or principlism, has the ultimate goal of being in accordance with principles of right or wrong conduct, while the others—all of which are nonmoral—do not (Batson 2016, 29). Batson's interdisciplinary analysis of our moral failures reveals that though we seem to value certain moral principles, ideals, and standards, our actions often belie and contradict these abstract mental projections of ours. This moral hypocrisy occurs for a number of reasons, e.g., our personal inability to acquire and apply moral standards, situational pressures, emotional and motivational limitations, etc. We also often use morality to control others' behavior, to avoid punishment, or to gain a reward.

Batson states that one of his aims is to determine what moral principles are and how they function, rather than determining "which moral standards are right" (Batson 2016, 24). This is, as he mentions, a descriptive rather than a prescriptive (or normative) account of morality. This is a crucial starting point for any social scientist studying morality. Accomplishing it, however, is a very difficult task, as Flanagan-Feddon has noted in this symposium. For it seems that if one is to determine what is wrong with morality, he/she must adopt a moral standard, in the first place (i.e., a means of judgment or estimation of right or wrong action), in order to investigate the nature and functionality of moral principles, standards, and ideals, in the second place. Consequently, determining how one ought to think about morality, what one ought to include when analyzing it, and what, if anything, is wrong with it are all inescapably selective and prescriptive accounts of moral action. (Moreover, the entire last chapter of this work is a normative—prescriptive—account of morality.) Despite this unavoidable situation, Batson seems to favor principlism over the three others he mentions (i.e., egoism, altruism, collectivism), for it represents what he interprets to be the most common understanding of morality. The ultimate goal of principlism, he claims, is to be moral, that is, "to promote some moral standard, principle, or ideal (e.g., fairness, justice, greater good, do no harm, honesty)" (Batson 2016, 31); thus, "principlism has a more reliable relation to moral action than do egoism, altruism, and collectivism" (Batson 2016, 217). One of the important threads that weave Batson's understanding of principlism together is his distinction between ultimate goals, instrumental goals, and unintended consequences. Ultimate goals are defined by goal-directed motives and are valued states we seek to obtain or maintain. Instrumental goals act as stepping-stones to engender our ultimate goals. The pursuit of either of these goals sometimes produces effects that are not goals, namely unintended consequences (Batson 2016, 26–27). Elucidating the difference between ultimate and instrumental goals, Batson draws from Gordon Allport's (1961) distinction between intrinsic and extrinsic values: "Intrinsic values (those valued as ends in themselves) induce ultimate goals. Extrinsic values (those valued as a means to other ends) induce instrumental goals" (Batson 2016, 27, cf. 203–4).

These distinctions seem to me to be very clear as distinctions of thought but not as distinctions of experience. Our conceptions of ultimate goals (or ends) are ambiguous without a coherent and complete grasp of the instrumental goals necessary to achieve those ends within a given contextualized experience; moreover, our ruminations upon the means to actualize a given goal or end often result in a reevaluation and/or modification of the end itself. These experiential truths concerning the process of attempting to realize particular ends suggest that principlism, defined as "the motivation with the ultimate goal of promoting some moral standard, principle, or ideal," may not help us to "act morally," because we lack a clear idea of the meaning and significance of the ultimate goal without the relevant and specific instrumental goals necessary to reach the desired goal. It follows from this that intrinsically valued ultimate goals—a concept that is crucial to Batson's starting points—cannot be determined independently of instrumental goals; rather, both the so-called ultimate goals (ends) and instrumental goals (means) can only be, as John Dewey (1939) once noted, reciprocally determined. Recognizing that we only realize the existential significance of means/ends processes within the experiences in which we implement them, I sense that the idea that something can be valued intrinsically or "in and of itself" is unintelligible. What would it mean to value a particular ideal or principle intrinsically outside of a contextualized experience—the act—of valuing something? I don't think drawing from Dewey here hurts Batson's argument; if anything, it contributes to it by confirming that there is something seriously wrong with principlism.

Since means/ends processes are reciprocally determined within the flow of our experiences, I sense that principlism is far too abstract as a blueprint for living life and that it fails us as a cogent ethical system. Batson notes the abstract nature of principlism to be one of its greatest weaknesses, but I wonder if what Batson recognizes as a weakness ought to be perceived as something far worse. This may sound a bit monstrous (even immoral) to ask, but: what if what's wrong with morality is the very idea of moral integrity? I use the term "moral integrity" here in the same way Batson does, i.e., we experience moral integrity when we are motivated to act in such a way that we uphold some moral standard, principle, or ideal and promote a moral value (Batson 2016, 3, 94). Moral integrity can be said to be the same thing as moral honesty: we have it when we strive to have our actions become true representations of our motivation to uphold and promote moral principles. In all honesty, however, I can't find good reasons for promoting behavior motivated by the sole aim of upholding and promoting a moral principle independent of a consideration of the context in which the principle is applied instrumentally. For it does not seem to be the case that we "hold" certain principles outside of a contextual understanding of them within lived experiences or that we abide by certain principles without an understanding of the conceivable consequences that might arise if we apply them as instruments.

Moreover, in experience we are often required to abandon one or more moral principles in order to promote other moral principles, whether we have been motivated to promote the abandoned moral principle(s) or not. Experience, then, seems to dictate the value of promoting one principle over another as its meaning becomes contextualized within human action. Principlism, then, seems to offer a specious account not only of moral action but also of experience itself. For instead of helping us to discern the rightness or wrongness of an action within a given contextualized experience, principlism leads us to view experience as a void—a non-relational space into which future experiences must fit and perhaps (if we are lucky?) match up with our present conceptualizations of moral principles. Batson also notes that principlism is prone to rationalization, but I would go further and state that principlism always involves some process of rationalization in order to "fit" our abstractions into particularly conceived parameters of a contextualized experience. Since principles are interpreted differently, what is the value in being motivated to promote airy and abstract principles independent of the particular manner in which they function or without a consideration of the contexts of experience within which they are applied? For what purpose ought one to have moral integrity, if by "moral integrity" one means acting in a way that promotes moral ideals, principles, and standards without taking into consideration the consequences our actions produce?

Following Merwin's question in this symposium, I think it is important to ask how principles arise in our experience. One legitimate answer is that we adopt our principles for moral action based on our upbringing and the reinforcement of our cultural norms and mores. This is true of principles that are already commonly accepted within a given society. But all principles arise through either recognition of the consequences of action or reflection upon the conceivable consequences of future actions. Whether principlism ought to be desired, then, seems to depend entirely upon how we use principles, ideals, and standards as instruments and what, if any, consequences arise from our instrumental use of them. In light of the above reflections and questions, I wonder why Batson wishes to save principlism (Batson 2016, 200–28), if he does at all, considering that this work is largely an exposé of how dysfunctional principlism is. At the end of this groundbreaking investigation, Batson suggests that if people want to live up to the moral standards, principles, and ideals they espouse, then the "best treatment" would be one that leads them to internalize and intrinsically value their principles (Batson 2016, 224). Having already commented on the relative incoherence of the idea of intrinsic values, I don't understand what it means to internalize principles independent of a given context of experience or irrespective of the felt consequences of actions. What does it mean to internalize a principle, and how could we empirically verify when a principle has been internalized? Even if we could verify the process of internalization, there is no reason to think that the internalization of principles is itself good. Revolutionary moral insights have sometimes arisen through the destruction of commonly held moral ideals or principles rather than their promotion.

Maybe it was for these reasons that William James (1909) held to the belief that the moral life cannot be guided by a fixed ethical system—replete with rigid moral principles—made up in advance of experience. Maybe it is time to think differently about morality. Rather than thinking that we should have moral integrity, i.e., that we should act to uphold and promote moral principles and values—irrespective of the context and conceivable consequences of our actions—perhaps it is best to think about how principles operate as instruments to bring about the cultural conditions necessary for democratically inclined moral discourses in light of different assessments of “right” or “wrong” conduct.

 

Works Cited

Allport, Gordon W. 1961. Pattern and Growth in Personality. New York: Holt, Rinehart & Winston.

Batson, C. Daniel. 2009. “These Things Called Empathy: Eight Related but Distinct Phenomena.” In The Social Neuroscience of Empathy, edited by Jean Decety and William Ickes. Cambridge: MIT Press.

———. 2014. “Empathy-Induced Altruism and Morality: No Necessary Connection.” In Empathy and Morality, edited by Heidi L. Maibom. New York: Oxford University Press.

———. 2016. What’s Wrong With Morality? A Social-Psychological Perspective. New York: Oxford University Press.

Dewey, John. 1939. Theory of Valuation. Chicago: University of Chicago Press.

James, William. 1909. “The Moral Philosopher and the Moral Life.” In The Writings of William James: A Comprehensive Edition. Chicago: University of Chicago Press.

  • C. Daniel Batson

    Reply

    Reply to Comment by Mark Fagiano

    Mark Fagiano, like Christy Flanagan-Feddon, challenges my claim to be pursuing a descriptive rather than a normative analysis. He reasons, as did she, that “if one is to determine what is wrong with morality, he/she must adopt a moral standard in the first place, i.e., a means of judgment or estimation of right or wrong action, in order to investigate the nature and functionality of moral principles, standards, and ideals in the second place.”

    I think this reasoning involves a logical slip—it confuses two different kinds of wrong. The “wrong” to which I refer in the title and throughout What’s Wrong with Morality? isn’t whether a given principle (standard, ideal) is wrong but whether our morality, whatever its content, fails to lead us to act as it directs. Mark and Christy are right that it’s normative to endorse some moral principle, standard, or ideal. But I think it’s descriptive to study the effect of this endorsement on conduct. Imagine a researcher who studies whether people who endorse “thou shalt not commit adultery” avoid extramarital sex. To study this effect, the researcher need not take a stand on the rightness or wrongness of adultery. Regardless of his or her own endorsement, if those who endorse this principle fail to avoid adultery, something’s wrong. But what’s wrong isn’t normative, it’s functional (as in, “What’s wrong with the TV—there’s no picture?”).

    That said, I still face a challenge. To allow each individual to specify his or her principles for right conduct in the situation in question, as I do, creates a potential problem for me as a researcher. In my experiments, I compare the behavior of individuals who confront a moral situation under certain carefully selected conditions with the behavior of individuals who confront the same situation under other carefully selected conditions (see my reply to Lidewij Niezink). But, if different individuals endorse different principles in a situation (think, for example, of the situation in which research participants assign themselves and another person to differentially desirable tasks—chapter 4), how will I know whether what they do is or isn’t consistent with their self-chosen principles?

    The strategy that colleagues and I have used to deal with this problem is to place participants in a situation in which they all (or nearly all) agree on the morally right thing to do. In the task-assignment situation, nearly all of our undergraduate participants said that what’s morally right is to give both people an equal chance to be assigned the more desirable task. Given this, we could make a valid comparison to an equal chance (50 percent on the coin flip) while at the same time allowing for individualized and contextualized standards.

    Mark also has several concerns about what I call principlism, which I defined as “motivation with the ultimate goal of promoting some moral standard, principle, or ideal” (29). I claimed that this motivation is “aroused by threats to or violations of intrinsically valued moral principles” (35) and should be distinguished from instrumental moral motivation—“motivation to act morally as a means to another end,” such as to avoid guilt or gain esteem (30).

    First, Mark thinks that both the distinction between intrinsic and extrinsic values (the former being valued as ends in themselves, the latter as means to other ends) and the distinction between ultimate and instrumental goals are difficult to make without attention to context. I quite agree. That’s why I emphasized that values can change depending on circumstances and that ultimate goals are “the state or states a person is seeking at a given time” (27). (To be more explicit, I probably should have added “. . . and in a given situation.”) I also pointed out that some values are relatively stable across a range of contexts, such as the value of air to breathe, whereas others are more context dependent, such as the value of wearing a warm coat (26). Moreover, a given goal may be ultimate in one context, instrumental in another, and both in a third. So Mark and I agree about his first concern.

    Second, Mark invokes Dewey to claim that ultimate goals and instrumental goals can only be reciprocally determined. Here, I must disagree—specifically, with the “only.” As I sketched in my reply to Lidewij, I think we can and often do infer whether a given goal is ultimate or instrumental in a particular situation by looking at how behavior changes (or doesn’t change) across different conditions selected to be diagnostic.

    Third, turning specifically to moral motivation, let me repeat something I said in my reply to Christy. Moral principles, standards, and ideals, as I use these terms, include guides to conduct that are general as well as those that are context specific, guides that are abstract as well as concrete, unconscious as well as conscious, irrational as well as rational (for examples, see p. 23). Further, most of us have a whole host of moral standards and principles, and this multiplicity can produce moral conflict and the overriding of one principle in order to uphold another (again, for examples, see p. 23). And, as Mark notes, when judging whether our conduct is right or wrong, we typically include an assessment of its consequences—or at least its intended consequences. On all this Mark and I seem to agree. But, in contrast to my broad conception of principles, Mark wants to use the term "principles" to refer only to "rigid moral principles—made up in advance of experience." Yet, even here, our difference seems more semantic than substantive. Unless I misunderstand, we both think that context can affect what principles (my broad definition), if any, are considered relevant and what behavior, if any, accords with relevant principles.

    Finally, Mark asks a two-part empirical question: “What does it mean to internalize a principle, and how could we empirically verify when a principle has been internalized?” I attempted to address the first part of this question in chapter 2 (45–53), focusing on the distinction made by Ed Deci and colleagues between introjection and integration. Principles internalized to the level of introjection are valued extrinsically as instrumental means to egoistic ultimate goals, such as seeing ourselves and being seen by others as good. Principles internalized to the level of integration are valued intrinsically as ends in themselves. I suggested that the link between principle (my definition, not Mark’s) and action, which is what I mean by moral integrity, is more tenuous for the former than the latter.

    The research on moral motivation described in chapter 4 is, I hope, an initial step toward answering the second part of Mark’s empirical question. By comparing the percentage of participants acting in accord with their espoused principle of fairness under conditions in which participants either do or don’t have wiggle room that allows them to appear fair without actually being fair, we can assess whether they are motivated by moral integrity or moral hypocrisy. If motivated by moral integrity (principlism), wiggle room shouldn’t reduce their fair behavior. If motivated by moral hypocrisy, it should. The evidence so far suggests that moral integrity is less common than often assumed. If this is what Mark means when he muses, “What if what’s wrong with morality is the very idea of moral integrity?” the data to date support his musing. Personally, I think that moral integrity exists, even if not as prevalent as often thought. Of course, I could be wrong. We need more data.
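    To make the inference pattern explicit, here is a minimal sketch with hypothetical counts (not the results reported in chapter 4): if the proportion of participants acting fairly drops reliably when wiggle room is available, hypocrisy rather than integrity is the better explanation.

```python
# Hypothetical wiggle-room comparison: does fair behavior drop when one can
# appear fair without actually being fair?
from scipy.stats import fisher_exact

no_wiggle_room = [18, 6]   # hypothetical: [acted fairly, acted unfairly]
wiggle_room = [7, 17]      # hypothetical: [acted fairly, acted unfairly]

_, p_value = fisher_exact([no_wiggle_room, wiggle_room])
rate_no_wr = no_wiggle_room[0] / sum(no_wiggle_room)
rate_wr = wiggle_room[0] / sum(wiggle_room)
print(f"fair behavior: {rate_no_wr:.0%} without wiggle room vs. "
      f"{rate_wr:.0%} with it (p = {p_value:.4f})")
# Moral integrity predicts no drop; a reliable drop like this hypothetical one
# is the signature of moral hypocrisy.
```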

    In the end, Mark takes a different tack. Turning from rigid moral principles adopted in advance of experience, he suggests a principle (my definition, not his) for right conduct when facing moral disagreements: “Bring about the cultural conditions necessary for democratically inclined moral discourse in light of different assessments of ‘right’ and ‘wrong’ conduct.” He seems to believe that this principle lies outside the scope of my analysis of what’s wrong with morality.

    I don’t think it does. As with any other principle, I immediately want to know if holding this democratic-discourse principle leads us to bring about the specified cultural conditions and, if it does, why? If holding this principle doesn’t lead us to bring about these conditions, we have another example of moral failure. If, on the other hand, holding it leads us to bring about the conditions, and to do so for the sake of democratic discourse, we have moral integrity. But if, instead, it leads us to bring about the conditions as a means to some other end (such as to feel good about ourselves, or to gain esteem, or to persuade others to toe our specified line), we have instrumental moral motivation. I can’t claim to know which, if either, type of motivation this particular principle is likely to produce—although I have a hunch. Of course, I could be wrong. More data?
