Thursday, May 14, 2009

Why Fictionalism?: A Reply to Stan

[This post is part of The Morality Debate]

If you haven't noticed, for the last few days this blog has basically been dominated by a debate about metaethics, the likes of which have never been seen before around these parts. It's been pretty fascinating, and I've learned a whole lot in the process; I definitely recommend checking it out -- I've labeled all the posts as "The Morality Debate." This post is a continuation of that discussion, and probably won't make sense if you haven't been following along. (Also, I should point out that this is my second posting today; the first is here)

An alternative perspective to my fictionalist view was offered in the comments section of the initial post in this discussion by a fellow named Stan. In this post, I will sketch out the idea suggested by Stan's comment, identify the core difference between that system and the one that I have been discussing, and then explain why I think that my view still makes sense even granting Stan's central point.

II

If I understood him correctly, Stan suggested that rather than fictionalistically adopting a literally false humanistic attitude, we should acknowledge that cooperativeness and trustworthiness are actually more effective strategic attitudes for securing one's own well-being than is an attitude of short-sighted maximization. This would help us to avoid the pitfalls of a vulgar nihilistic attitude so that we didn't end up ruining a whole lot of everything on account of our movement away from a moral attitude. In his book, Elements of Justice, David Schmidtz made an observation that seems clearly relevant here (171):
Unconstrained maximizers, by definition, optimally use any resources to which they have access, including their neighbors’ organs. To get good results in the real world, though, we need to be surrounded not by unconstrained maximizers but by people who respect rights, thereby enabling us to have a system of expectations and trust, which allows us together to transform our world into a world with greater potential (a world where delivery companies are willing to serve the hospital [because they don’t fear that their delivery men will be killed in order to harvest their organs for needy patients]). When we cannot count on others to treat us as rights-bearers with separate lives, we are living in a world of lesser potential.

Schmidtz continued (173):
When doctors embrace a prohibition against harvesting organs of healthy patients without consent, doctors give up opportunities to optimize – to hit the ceiling [of possible utility outcomes] – but patients gain opportunities to visit doctors safely. They gain a world with a higher ceiling. Such utility comes from doctors refusing even to ask whether murdering a patient would be optimal.

I think that this touches on something very important, and which Stan was focusing on: we can actually get better results if we adopt strategies that lead us away from approaching every situation with the goal of getting the best possible results. The momentary success of the "rational maximizer," in this view, is a Pyrrhic victory: by maximizing in the single instance, the agent destroys his opportunity for the greatest possible overall outcome. Along these lines, David Gauthier even went so far in his book, Morals by Agreement, as to try to establish a theory of morality based on this sort of thinking about rationality.

And indeed, this approach to game-theoretic thinking seems on its surface to rationalize a lot of our moral norms: we see that the anti-social, predatory behavior forbidden by our morals is not just "depraved" and "vile," but also irrational in a sense. And this makes us feel good about our norms -- we have good reason to uphold them independently of our moral views about the behaviors they prescribe. In seeking to understand why the moral nihilist will not go around stealing, raping, and killing people, one need only appreciate the force of this game-theoretic point: even if those actions might be best in a short-term sense, they would likely be disastrous to one's overall prospects for fulfillment and well-being. The sophisticated nihilist will point out that while morality is inherently problematic, its prescriptions are quite often perfectly sensible, and so we shouldn't expect things to change very drastically if we adopted a sufficiently nuanced nihilistic position instead of a distinctively moral view.
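To make that game-theoretic point concrete, here is a minimal sketch of an iterated prisoner's dilemma in Python. The payoff numbers and strategy names are illustrative assumptions of my own choosing, not anything drawn from Schmidtz or Gauthier; the point is just that the unconstrained defector "wins" any single encounter but forfeits the larger cooperative surplus over repeated play with a reciprocating partner:

```python
# Illustrative iterated prisoner's dilemma. Payoffs are assumed for the
# example: 5 for defecting against a cooperator, 3 each for mutual
# cooperation, 1 each for mutual defection, 0 for being exploited.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def always_defect(my_history, their_history):
    """The 'unconstrained maximizer': defects in every round."""
    return "D"

def tit_for_tat(my_history, their_history):
    """A trusting but retaliatory strategy: cooperate first, then mirror."""
    return their_history[-1] if their_history else "C"

def play(strategy_a, strategy_b, rounds=100):
    """Run the repeated game and return each player's total payoff."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

# The defector exploits round one (5 vs. 0) but triggers retaliation
# in every round thereafter; mutual cooperators do far better overall.
print(play(always_defect, tit_for_tat))   # (104, 99)
print(play(tit_for_tat, tit_for_tat))     # (300, 300)
```

The defector narrowly beats its partner within the match (104 to 99), while the pair of cooperators each walk away with 300 -- Schmidtz's "higher ceiling" in miniature.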

III

What is the difference, then, between the enlightened, egoistic moral nihilist and the moralistic humanist? In his essay, "Deception and Reasons to be Moral," Geoffrey Sayre-McCord notes (114):
People may have dispositions that give rise to moral behavior without being moral people. They might, for instance, be so carefully watched that temptation always gave way to fear of detection and punishment. We could certainly expect such people to behave morally; but they would be behaving morally by default, and not because they are moral. What sets the moral apart from the enlightened egoists is (at least in part) their willingness to act on considerations other than those of self-interest; unlike enlightened egoists, those who are moral constrain their pursuit of personal benefits on moral grounds.

To Sayre-McCord's fear of detection, we could add the value of future interactions, the personal pleasure received from the approval and trust of others, the sense of satisfaction that comes with having a "virtuous" character, and the personal displeasure that one might experience as a result of sensitivity to the harms inflicted on others by one's own actions. Each of these factors would be perfectly accessible to the moral nihilist, and each would provide a reason in itself to act in much the same manner that morality would prescribe. But as Sayre-McCord points out, these are not moral reasons. Amartya Sen provides some insight into the shape of this distinction in his essay, "Rational Fools: A Critique of the Behavioral Foundations of Economic Theory" (326):
...we must distinguish between two separate concepts: (i) sympathy and (ii) commitment. The former corresponds to the case in which the concern for others directly affects one's own welfare. If the knowledge of torture of others makes you sick, it is a case of sympathy; if it does not make you feel personally worse off, but you think it is wrong and you are ready to do something to stop it, it is a case of commitment. I do not wish to claim that the words chosen have any great merit, but the distinction is, I think, important. It can be argued that behavior based on sympathy is in an important sense egoistic, for one is oneself pleased at others' pleasure and pained at others' pain, and the pursuit of one's own utility may thus be helped by sympathetic action. It is action based on commitment rather than sympathy which would be non-egoistic in this sense.

It is, I take it, when people act out of commitment that they act in a manner that is distinctly moral. And as should be clear to anyone following this conversation, non-egoistic commitment is only comprehensible through a mindset which projects evaluative attitudes onto objects -- declaring them "intrinsically valuable" instead of merely "personally valued" -- and then disciplines one's own behavior to take account of those objects as a matter of normative integrity.

IV

The coherentist moral nihilist points out that these projections are worrisome and deceiving: there are no "intrinsically valuable" objects, and the idea that you should be non-egoistically "committed" to something that you value for reasons other than its reflection of your own personal desires is accordingly incomprehensible. A properly coherent view would refrain from projecting and seek to understand one's values as being matters of personal desire instead of some kind of mysterious "response to intrinsic value."

And within this mindset, we need to take Stan's point very seriously. It's not that our projections are completely baseless; there are a whole host of reasons we might offer for doing many or all of the same things as a nihilist that we might have done out of commitment while in a moralistic mindset. We could point out that we are drawn to virtue for aesthetic reasons. We could notice that we are better able to achieve our personal goals when we are cooperative, trustworthy, honest, and sensitive. We could recognize that our personal values and desires do not extend only to things that "make us feel good" or "give us a sense of well-being," but also include the range of values we would like to see pursued, advanced, and instantiated in the world. We could "be the change we want to see in the world" not because we ought to, but because it's the change we want to see in the world.

V

I get it. And no, it's not wrong.

But for me, I don't want to have to go through the hassle of correcting myself every time I project my values onto the world. I don't want to say, "Hey, don't do that; that's wro- I mean, that fails to reflect the choice that would accord with my personal values and the values that I think most people share, and I imagine that most people (myself included) would prefer to see it not happen; I suspect that if you came to truly understand the nature of your actions, you would feel the same way." I want to say, "Don't do that; that's wrong," and to have people basically get what I'm saying.

And aside from the simple ease of speaking and thinking this way, I also think it helps to avoid the risk of what I take to be a very serious mistake in reasoning: moving from the idea that something is not intrinsically valuable to the idea that there is no reason to value it. I think it's far too easy to jump from "There's nothing wrong with doing that" to "There's no reason that you shouldn't do that if you feel like it." And the latter view is characteristic of the vulgar, overly simplistic, and inherently flawed nihilistic viewpoint that comes to mind when most lay people think about moral nihilism.

I take it to be a natural feature of the way that my mind works that I project my values onto reality. I also take it to be a natural feature of the way that my mind works that it would require a bunch of extra effort to translate all of my projections into the more accurate subjective-value-statement forms that the coherentist moral nihilist demands. And I also think that if I tried to do this all the time, I could potentially end up confusing myself and doing a much worse job of promoting my own values than I would if I simply continued to project them onto the world while keeping in the back of my mind that my projections were generating a paradigm that is literally false, but still basically in line with the truth. And that's what my fictionalist stance does. Hopefully that makes sense! (Yeah, like I should really expect that now people will agree...ha!)

11 comments:

Stan said...

The argument you make in this post is basically one vital half of the full perspective I thought I was describing; however, there is an important twist that I didn't explain well. You've latched onto it, so let me see if I can better explain my reasoning by going back.

Before, I said humanism for system 1 (perceptions and intuitions), moral skepticism for system 2 (reasoning). Again, it is a simplification, but I think it's "good enough" to frame this short explanation.

To visit your argument briefly:
But for me, I don't want to have to go through the hassle of correcting myself every time I project my values onto the world.

Sure you don't. It's not just that you don't want to. You can't. You couldn't function if you took that approach, nor could anyone else. You need system 1.

As we go about our daily business, most of us can intuitively agree on most basic moral standards, for example, not kicking babies. It's fast, easy, and generally beneficial.

However, there are more complex situations to deal with, ranging from long-reaching policies to hot-button subjects like abortion. These are places where people's "moral system 1" intuitions do not agree, and conflict erupts.

At these spots of contention we really need system 2 for resolutions--it's flexible, neutral, and reasonable. System 2 takes a lot of effort and time, though, and system 1 is always there filling in perceptions and intuitions. Sorting through these contentions isn't easy, but hey, at least we have a system 2 to use.

The system 1 and system 2 situation isn't so much a balancing act as a series of gambles, with limited resources, in real time. Any strategy (planned or not) is necessarily enabled and constrained by that reality.

You brought up some other excellent points, although I have a similar-but-flipped take on some of them, which I don't have time to add to right now. Thanks very much for the discussion. You too, Vichy, who I assume is reading this.

Danny said...

Thanks for the follow-up, Stan. It sounds like we more or less agree, though I have two reservations about the framework you've set up here:

First, if there are basic patterns to our moral values, then I think you might be overly restricting the scope of humanism by limiting it to matters of perception and intuition (your "system 1"). If you look at good humanists' work in normative ethics and political philosophy, they reason within humanism and simply limit themselves to premises that they take to be relatively uncontroversial (e.g., that individuals are intrinsically valuable, and that treating them as mere tools for the achievement of others' ends without their consent fails to account properly for their separateness and moral significance). If someone disagrees with the premises (which, as fictionalistic statements, represent projections of values that might not be shared), then they likely won't follow along to the conclusion. But I think we can get pretty far on the idea that most people have many of the same kinds of moral intuitions. I'm not sure that you disagree, but as stated, your argument would seem to rule out this sort of thing.

Second, I'm not sure why you would want to have a distinction between two "systems" at all. It seems like contemporary pluralistic varieties of liberalism are already built on the idea that people don't always agree on moral issues, and that we need to take this into account when thinking about interpersonal and social relationships. Further, most plausible conceptions of morality and justice (within the humanistic mindset) are perfectly comfortable with the idea that consequences can be morally significant, and that we need to take these into account when attempting to apply our moral reasoning. So I'm not sure what the big hazard is of applying humanism universally, so long as it is properly understood.

Ultimately, I'm just not seeing what the big deal is with adopting a moral fiction to deal with moral issues the way that we normally do. That is, identify what you think matters, attempt to weigh the different considerations while acknowledging the reasonable range of disagreements, and offer a discussion of how people might think about arriving at an answer. It's strictly true that this is an inexact way to talk about these issues, but I really don't see the practical relevance of moving outside the fiction except as necessary to avoid becoming deluded by our own projections.

Stan said...

Ultimately, I'm just not seeing what the big deal is with adopting a moral fiction to deal with moral issues the way that we normally do. That is, identify what you think matters, attempt to weigh the different considerations while acknowledging the reasonable range of disagreements, and offer a discussion of how people might think about arriving at an answer. It's strictly true that this is an inexact way to talk about these issues, but I really don't see the practical relevance of moving outside the fiction except as necessary to avoid becoming deluded by our own projections.

Before I craft a reply to the whole enchilada, please look carefully and critically at your quote, above.

What is your assessment of it, now that a little time has passed?

Danny said...

Hmm...I think I might possibly want to change the phrase "the practical relevance of moving" to "the practical argument in favor of moving," since I don't want to suggest that there would be no practically relevant consequences of such a move. But aside from that, I think I stick to my guns; I'm not sure I see the problem you might be seeing.

Stan said...

The practical relevance is the practical argument, and the root of both is simple: science.

The scientific method exists because to understand the world better we must avoid--at least, limit--becoming deluded by our own projections. That I'm explaining this gives me pause...

What's bad about abandoning--and even stomping--your guns? What's good about sticking to them?

Danny said...

Stan, saying that something is consistent with the scientific method is not a practical argument. Because you haven't actually responded to my argument except to say that it's incompatible with your position, I can't help but point back at the argument I've already offered and ask you what part of it you think is wrong and why.

Stan said...

You've built an "except" as a sort of escape hatch.

Possibility 1 is you see how huge your "except" is, but you don't want to grant it for whatever reason, like having to abandon your guns.

Possibility 2 is that you don't see how huge that "except" is, which leads me to believe you don't see how many socioeconomic issues in real life require resolving conflict in areas where misleading intuitions are a hindrance.

Something like "I don't see what the big deal is with being more realistic about X" only makes me scratch my head a little. Someone who is not familiar with the importance of the scientific method would probably need a career-related example to understand.

However, if someone says they don't see the practical relevance of a more realistic view of the world, except as necessary to improve the quality of our decisions... what is the rational response to that? You've admitted you see the relevance, but you're shrugging it off.

I don't usually argue the practical relevance of science to human progress with humanists. I figured a reminder would do.

Danny said...

I'm not sure what you mean by "an 'except'"...

I've offered a number of reasons why I think it's reasonable to work with the fiction, including: 1) It is natural for us to project our evaluative attitudes, and trying not to do so would likely result in confusion about those attitudes as well as involving considerable effort; 2) Projecting our attitudes allows us to maintain an image of ourselves and others (that is, as intrinsically valuable individuals) that is conducive to our personal wellbeing; 3) Because most people have similar evaluative attitudes towards the sorts of things with which moral questions are concerned, and because most people project those attitudes onto reality, fictionalistic moral language ends up working just fine in the place of more accurate statements about each individual's evaluative attitudes; 4) Because treating people as if their interests are important is often a good strategy from the standpoint of maximizing the fulfillment of one's own interests, it can be a useful shorthand to simply think about things as if other people's interests really are important. So far, the only counterargument I've seen you offer is basically, "But that manner of speaking is linguistically imprecise and runs the risk of generating false conclusions if people fail to comprehend the nature of what they are saying." I, however, do comprehend the nature of what I'm saying, and I don't care if I'm being linguistically imprecise if doing so gets me all of the benefits enumerated above. So, then, I see no good reason to reject my position.

Stan said...

The "except" is as follows:

It's strictly true that this is an inexact way to talk about these issues, but I really don't see the practical relevance of moving outside the fiction except as necessary to avoid becoming deluded by our own projections.

If you read my original response in this comments thread again, you will see that I not only think using intuition is reasonable, but also necessary. Often in these replies, you seem to be changing my argument. Your current description of my counter-argument is completely bogus. I'm not saying that as a refutation, but as a sign of a serious communication problem in this discussion.

I've already either critiqued or agreed with the reasons why you think it's reasonable to work with the fiction.

To go about it a different way, there are many modern and very relevant socioeconomic issues in the world today that people want to resolve, improve, or at the very least, understand. At the root of many of the issues are beliefs about fuzzy* concepts such as free markets or property or government. At the root of many of those beliefs are moral intuitions thick with cognitive biases.

*Fuzzy in part because each term can carry a technical, a political, or an "actually-practiced" definition. This requires reasoning and moral skepticism to sort through.

In other words, to go from descriptive economics to normative economics, we need to see past intuitive shoulds. We should want to see past intuitive shoulds; presumably, we want to know how political economic reality works so we can see how it might work better. You acknowledge this only with a brief "except," as if the practical relevance is trivial or near-trivial. It's not at all. I could show you hundreds of practical examples, but the literature I've cited contains plenty already. It's even OvercomingBias.com's raison d'ĂȘtre.

Mapping the terrain of reality, and how humans really behave, IS real humanism: truth and morality are pursued and defined through science. That pursuit must be science because otherwise only the validity of arguments is tested--not the soundness.

Since you are now reading Rawls, I hope that you will follow the work by reading the critiques of Arrow and Harsanyi, which should lead you to Bayesian games in the context of social dilemmas, which should hopefully lead you to Elinor Ostrom, and possibly even the work on trust and reciprocity which I mentioned.

Stan said...

I thought about this a bit when playing in the dirt today, and thought maybe my position would be clarified by showing how it agrees with, modifies, or otherwise considers the points you helpfully summarized.
1) It is natural for us to project our evaluative attitudes, and trying not to do so would likely result in confusion about those attitudes as well as involving considerable effort;

Yes, but that only means we can't do it all the time, and should be choosy about when we do it. Something taking considerable effort or causing confusion has never stopped science from progressing, and I don't see why we'd be smart to make an exception here.

Yes, withholding judgment and considering cooperative gains can take considerable effort, but it can also resolve socioeconomic conflicts more fairly--and potentially with the biggest gains for all--than any other means. Not a sure thing, but the possible gains of resolutions built on science-based (and thus moral-skepticism-based) cooperative heuristics are practically relevant and non-trivial. Another progressive benefit: the resolution process may reveal, redefine, or create more common ground that can subtly revise the morals of the parties, aligning them just a bit more closely. The alternative is a compromise in which parties feel they are settling for something that does not align with their morals, significantly increasing the likelihood of parties reneging where they can most easily get away with it.

A further complication: evaluative attitudes are not consistent, and can easily and significantly be influenced by situational variables. You can actually create a policy that encourages bad behavior even though both parties assume it will lower bad behavior. Sometimes intuitions and judgments are simply wrong on both ends. In those cases, peeling back the fiction to see what really influences behavior can result in an unintuitive win-win.

The power of rationalization and the narrative of self shouldn't be underestimated here, either, regarding both confusion and effort. Given a strong enough belief, but valuable enough benefits for going against that belief, we can make an exception to a rule, then make the exception the rule, and then weave a narrative for a morally-consistent self. The inconsistency and hypocrisy may be distasteful, but that's the real world. All the more reason to evolve our moral sense to be more reliable and verifiable--we're fooling ourselves less.

2) Projecting our attitudes allows us to maintain an image of ourselves and others (that is, as intrinsically valuable individuals) that is conducive to our personal wellbeing;

This claim has psychological support; however, as an argument it only goes against moral skepticism if we completely ignore cooperative heuristics. A self-image of Godliness is conducive to personal well-being in believers, too. I would submit the same tactic for both: hit the periphery where skepticism does not consciously challenge the belief and self-image, and even better, hit the periphery where it happens to roughly align with the belief and self-image.

Further, the attitudes we project aren't even consistent with a view of intrinsically valuable individuals. I cited psychic numbing as an example. In other words, to even try to be consistent in your projected attitudes, you'll need some moral skepticism. Our valuations of both people and objects are sensitive to the situation. This leaves intuition shrugging its shoulders: why would you value two starving children less than one? That should bug a humanist, who then begins to reason about why that might be, and what can be done about it. At that point, the humanist has already entered the land of moral skepticism.

Could some projections be evolved to increase collective well-being? Yes. How do we evolve these projections? We use moral skepticism-based social science to create better policies, social agreements, and so on. Once established, it will become more of a default position, incorporated into our narrative.

Stan said...

Pardon my length. Also, the italics preview fine, but the actual posting loses a return line following italics.
3) Because most people have similar evaluative attitudes towards the sorts of things with which moral questions are concerned, and because most people project those attitudes onto reality, fictionalistic moral language ends up working just fine in the place of more accurate statements about each individual's evaluative attitudes;

Except when it doesn't work just fine, as shown above, and many times before. This supports a varied approach dependent on time, extent of existing agreement, and potential positive or negative consequences. A varied approach has been my suggestion from the beginning.

For another example, take Enron. We want to understand what happened with Enron so that it won't happen again. We can find a lot of different intuitions about the morality of various people involved, but what is our best bet for really preventing future dishonesty of that scale? We're unlikely to find consensus in everyone's intuitions, and even if we do--say, everyone agrees that the problem was Enron-ers being more dishonest and greedy than an average person--we might be missing something. When Dan Ariely started looking at the aftermath of Enron, he found that the evidence didn't seem to support the "more dishonest and greedy" intuition. In a series of experiments, he found he could use situational variables to strongly influence people's ethical decisions. That's paydirt, not just for stopping future Enrons, but for increasing honesty in everyday transactions across society.

The Enron example displays another advantage of moral skepticism in science: you may find collective gains that cannot be perceived by the intuitive fiction. Is this so different from stepping outside Newtonian physics to look at nanoscale interactions and quantum effects? I don't think so. The technological benefit is obvious. With morality, I'm hoping the socioeconomic benefit is becoming more obvious.
4) Because treating people as if their interests are important is often a good strategy from the standpoint of maximizing the fulfillment of one's own interests, it can be a useful shorthand to simply think about things as if other people's interests really are important.

As is the case with science-based cooperative heuristics. Assuming other people's interests are important is a useful heuristic. As we have the time and interest, though, we might find more hidden collective gains by studying where a person's narrative about their interests deviates from their behavior. Thaler and Sunstein's book Nudge explores this a bit, but there's plenty of other psychological literature that comes to mind.
