If you haven't noticed, for the last few days this blog has basically been dominated by a debate about metaethics, the likes of which have never been seen before around these parts. It's been pretty fascinating, and I've learned a whole lot in the process; I definitely recommend checking it out -- I've labeled all the posts as "The Morality Debate." This post is a continuation of that discussion, and probably won't make sense if you haven't been following along. (Also, I should point out that this is my second post today; the first is here.)
An alternative perspective to my fictionalist view was offered in the comments section of the initial post in this discussion by a fellow named Stan. In this post, I will sketch out the idea suggested by Stan's comment, identify the core difference between that system and the one that I have been discussing, and then explain why I think that my view still makes sense even granting Stan's central point.
If I understood him correctly, Stan suggested that rather than fictionalistically adopting a literally false humanistic attitude, we should acknowledge that cooperativeness and trustworthiness are actually more effective strategic attitudes for securing one's own well-being than is an attitude of short-sighted maximization. This would help us to avoid the pitfalls of a vulgar nihilism, so that we didn't end up ruining a whole lot of everything on account of our movement away from morality. In his book, Elements of Justice, David Schmidtz made an observation that seems clearly relevant here (171):
Unconstrained maximizers, by definition, optimally use any resources to which they have access, including their neighbors’ organs. To get good results in the real world, though, we need to be surrounded not by unconstrained maximizers but by people who respect rights, thereby enabling us to have a system of expectations and trust, which allows us together to transform our world into a world with greater potential (a world where delivery companies are willing to serve the hospital [because they don’t fear that their delivery men will be killed in order to harvest their organs for needy patients]). When we cannot count on others to treat us as rights-bearers with separate lives, we are living in a world of lesser potential.
Schmidtz continued (173):
When doctors embrace a prohibition against harvesting organs of healthy patients without consent, doctors give up opportunities to optimize – to hit the ceiling [of possible utility outcomes] – but patients gain opportunities to visit doctors safely. They gain a world with a higher ceiling. Such utility comes from doctors refusing even to ask whether murdering a patient would be optimal.
I think that this touches on something very important, and which Stan was focusing on: we can actually get better results if we adopt strategies that lead us away from approaching every situation with the goal of getting the best possible results. The momentary success of the "rational maximizer," in this view, is a Pyrrhic victory: by maximizing in the single instance, the agent destroys his opportunity for the greatest possible overall outcome. Along these lines, David Gauthier even went so far in his book, Morals by Agreement, as to try to establish a theory of morality based on this sort of thinking about rationality.
And indeed, this approach to game-theoretic thinking seems on its surface to rationalize a lot of our moral norms: we see that the anti-social, predatory behavior forbidden by our morals is not just "depraved" and "vile," but also irrational in a sense. And this makes us feel good about our norms -- we have good reason to uphold them independently of our moral views about the behaviors they prescribe. In seeking to understand why the moral nihilist will not go around stealing, raping, and killing people, one need only appreciate the force of this game-theoretic point: even if those actions might be best in a short-term sense, they would likely be disastrous to one's overall prospects for fulfillment and well-being. The sophisticated nihilist will point out that while morality itself is inherently problematic, its prescriptions are quite often perfectly sensible, and so we shouldn't expect that things would change very drastically if we went along with a sufficiently nuanced nihilistic position instead of a distinctively moral view.
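For readers who like to see the game theory spelled out, the point above can be made concrete with a toy iterated prisoner's dilemma. To be clear, the payoff numbers and strategy names below are my own illustration, not anything from Schmidtz or Gauthier; it's just a minimal sketch of why the "unconstrained maximizer" loses out over repeated encounters:

```python
# A toy iterated prisoner's dilemma. Payoff values are the conventional
# ones (temptation 5, reward 3, punishment 1, sucker's payoff 0); they are
# assumptions for illustration, not drawn from the sources quoted above.

# (my payoff, their payoff) given (my move, their move);
# "C" = cooperate, "D" = defect.
PAYOFF = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # sucker's payoff vs. temptation
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def always_defect(my_history, their_history):
    """The 'unconstrained maximizer': grab the best payoff every time."""
    return "D"

def tit_for_tat(my_history, their_history):
    """The 'trustworthy cooperator': start nice, then mirror the other player."""
    return their_history[-1] if their_history else "C"

def play(strategy_a, strategy_b, rounds=100):
    """Run an iterated game and return each player's total payoff."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300)
print(play(always_defect, always_defect))  # (100, 100)
print(play(always_defect, tit_for_tat))    # (104, 99)
```

The defector "wins" his single encounter with a cooperator (104 to 99), but two cooperators grind out 300 apiece while two unconstrained maximizers limp along at 100 each. That is Schmidtz's "higher ceiling" in miniature: the strategy that refuses to optimize in each instance does better overall.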
What is the difference, then, between the enlightened, egoistic moral nihilist and the moralistic humanist? In his essay, "Deception and Reasons to be Moral," Geoffrey Sayre-McCord notes (114):
People may have dispositions that give rise to moral behavior without being moral people. They might, for instance, be so carefully watched that temptation always gave way to fear of detection and punishment. We could certainly expect such people to behave morally; but they would be behaving morally by default, and not because they are moral. What sets the moral apart from the enlightened egoists is (at least in part) their willingness to act on considerations other than those of self-interest; unlike enlightened egoists, those who are moral constrain their pursuit of personal benefits on moral grounds.
To Sayre-McCord's fear of detection, we could add the value of future interactions, the personal pleasure received from the approval and trust of others, the sense of satisfaction that comes with having a "virtuous" character, and the personal displeasure that one might experience as a result of sensitivity to the harms inflicted on others by one's own actions. Each of these factors would be perfectly accessible to the moral nihilist, and provide reasons in themselves to act in much the same manner that morality would prescribe. But as Sayre-McCord points out, these are not moral reasons. Amartya Sen provides some insight into the shape of this distinction in his essay, "Rational Fools: A Critique of the Behavioral Foundations of Economic Theory" (326):
...we must distinguish between two separate concepts: (i) sympathy and (ii) commitment. The former corresponds to the case in which the concern for others directly affects one's own welfare. If the knowledge of torture of others makes you sick, it is a case of sympathy; if it does not make you feel personally worse off, but you think it is wrong and you are ready to do something to stop it, it is a case of commitment. I do not wish to claim that the words chosen have any great merit, but the distinction is, I think, important. It can be argued that behavior based on sympathy is in an important sense egoistic, for one is oneself pleased at others' pleasure and pained at others' pain, and the pursuit of one's own utility may thus be helped by sympathetic action. It is action based on commitment rather than sympathy which would be non-egoistic in this sense.
It is, I take it, when people act out of commitment that they act in a manner that is distinctly moral. And as should be clear to anyone following this conversation, non-egoistic commitment is only comprehensible through a mindset which projects evaluative attitudes onto objects -- declaring them "intrinsically valuable" instead of merely "personally valued" -- and then disciplines one's own behavior to take account of those objects as a matter of normative integrity.
The coherentist moral nihilist points out that these projections are worrisome and deceiving: there are no "intrinsically valuable" objects, and the idea that you should be non-egoistically "committed" to something that you value for reasons other than its reflection of your own personal desires is accordingly incomprehensible. A properly coherent view would refrain from projecting and seek to understand one's values as being matters of personal desire instead of some kind of mysterious "response to intrinsic value."
And within this mindset, we need to take Stan's point very seriously. It's not that our projections are completely baseless; there are a whole host of reasons we might offer for doing many or all of the same things as a nihilist that we might have done out of commitment while in a moralistic mindset. We could point out that we are drawn to virtue for aesthetic reasons. We could notice that we are better able to achieve our personal goals when we are cooperative, trustworthy, honest, and sensitive. We could recognize that our personal values and desires do not extend only to things that "make us feel good" or "give us a sense of well-being," but also include the range of values we would like to see pursued, advanced, and instantiated in the world. We could "be the change we want to see in the world" not because we ought to, but because it's the change we want to see in the world.
I get it. And no, it's not wrong.
But for me, I don't want to have to go through the hassle of correcting myself every time I project my values onto the world. I don't want to say, "Hey don't do that; that's wro- I mean, that fails to reflect the choice that would accord with my personal values and the values that I think most people share, and I imagine that most people (myself included) would prefer to see it not happen; I suspect that if you came to truly understand the nature of your actions, you would feel the same way." I want to say, "Don't do that; that's wrong," and to have people basically get what I'm saying.
And aside from the simple ease of speaking and thinking this way, I also think it helps to avoid the risk of what I take to be a very serious mistake in reasoning: moving from the idea that something is not intrinsically valuable to the idea that there is no reason to value it. I think it's far too easy to jump from "There's nothing wrong with doing that" to "There's no reason that you shouldn't do that if you feel like it." And the latter view is characteristic of the vulgar, overly simplistic, and inherently flawed nihilistic viewpoint that comes to mind when most lay people think about moral nihilism.
I take it to be a natural feature of the way that my mind works that I project my values onto reality. I also take it to be a natural feature of the way that my mind works that it would require a bunch of extra effort to translate all of my projections into the more accurate subjective-value-statement forms that the coherentist moral nihilist demands. And I also think that if I tried to do this all the time, I could potentially end up confusing myself and doing a much worse job of promoting my own values than I would if I simply continued to project them onto the world while keeping in the back of my mind that my projections were generating a paradigm that is literally false, but still basically in line with the truth. And that's what my fictionalist stance does. Hopefully that makes sense! (Yeah, like I should really expect that now people will agree...ha!)