Cookies

The bad news is that it’s Tax Day here in the U.S., and the income tax is immoral, and filling out the forms manually is ridiculously painful.

The good news is that Subway is giving away free cookies!

A few years ago, my friend Haym Hirsh quoted me in his .plan file as saying:

“I think making us go through the hell of filing is like making prisoners dig their own graves.”

Since then, I’ve been using TurboTax for the Web and it’s been a lot less painful (but still immoral).

Anyway, enjoy a cookie if you get a chance.

Update: Check out the comments in which Sasha Volokh craftily draws me into a discussion about something other than cookies.

46 thoughts on “Cookies”

  1. Could you go a little more deeply into what you mean when you say the income tax is immoral? I like the general thought, but I’m not sure about the specific thought.

  2. Ok.

    First of all, the income tax, like other taxes, is immoral because it is theft. It’s an involuntary seizure of assets. It is NOT payment for services rendered (because payment and services are independent), and having the opportunity to vote doesn’t make it voluntary.

    The income tax is particularly bad because its enforcement requires extremely intrusive monitoring and disclosure of private economic activity. It’s also a tool to manipulate people’s behavior by giving incentives to favored activities.

    I oppose all taxes. But, if some are temporarily necessary to fund essential, legitimate, government services (defense, courts, etc.) then I’d very much prefer a low consumption tax that didn’t require the government to know details about how citizens earn and save and spend their money.

  3. You’re trying to occupy a middle ground that I don’t think is tenable.

    I could understand just saying “All taxes are immoral” for the reasons you gave, but I don’t think you’re an anarcho-capitalist.

    Now if you’re saying “All taxes are immoral, but some might be necessary in some circumstances” (which you are, above), I think it’s more proper to say that not all taxes are immoral, even though it’s distasteful to us to have to say so. “Necessary evil” is a colorful figure of speech, but it’s not technically true: things that are necessary can’t be evil. Of course “necessary” is also just a metaphor, a shorthand for “less immoral than the available alternatives”; and then, too, it’s not evil. Similarly, “the lesser of two evils” is a sloppy phrase: the best alternative can’t be evil unless you can identify a third alternative which is good. Anyway, you’ve granted that some taxes may be “temporarily necessary to fund essential services,” so I’m not allowing you to say all taxation is immoral (“I oppose all taxes”).

    (I get around this by being a consequentialist: any action which would *otherwise* be immoral, like killing someone or taking property, is justified if it’s necessary to prevent some other, larger immorality. This requires having a moral calculus so you can compare degrees of immorality. Then, you get to say that taxation to fund the military or police can be justified if the police protects “more rights” than the rights violated by the taxation. All taxation to fund non-rights-protecting activity gets knocked out here (including subsidizing favored activities), and so does excessive taxation even for the good stuff, since you’re comparing the current policy against any other feasible policy. So this view mandates the most “efficient” way of achieving a given level of rights protection, and it also mandates the most “efficient” level of rights protection. But in any case, I can’t say under this view that taxation generally is immoral because there are no absolute, unabridgeable rights under this view; though of course I could still make broad generalizations about the morality of what government does using words like “most.”)

    Then there’s still the question of the level of taxes (high or low) and the form of taxes (income, consumption, etc.). Of course we tend to think low taxes are better. (Under the view I sketched above, this isn’t necessarily true — we may be underfunding some rights protection — but I guess it probably ends up being true.) But in any case, we still can’t say that the income tax is inherently immoral, just that a certain amount of excess tax is immoral.

    Which leaves the possibility that maybe the income tax is immoral because it’s always dominated by some other form of tax, like, as you suggested, a consumption tax (I’m skipping the “low” for the reasons in the previous paragraph) because this is less intrusive. So is a consumption tax really less intrusive than an income tax? You’ve got to tax someone under any tax plan, and here you would end up taxing retail establishments and those sorts of people (alternatively, you can implement consumption tax under our current income-tax system by making all saving tax-deductible). Those retail establishments would probably have to keep some additional records if we switched to a consumption tax. Also, to be revenue-neutral, the rate would probably be high enough that there would be a lot of evasion, so the tax inspectors would be going around auditing businesses just as they audit individual taxpayers.

    I suppose you could say that individual 1040 intrusiveness is worse than intrusiveness into business records. That sounds O.K. as a commonsense matter, but it’s a funny position for a libertarian to take: all records are personal, whether it’s my salary and charitable donations or my relationship with my customers and suppliers. Maybe it’s the right position, but I’m not fully certain. Is that what you believe?

  4. Sasha,

    I agree with you that the best alternative possible isn’t immoral. But, I think there are objective moral considerations that should guide what we consider to be acceptable in the long term. If abolishing all taxes tomorrow would lead to more moral problems than a transition would, then I favor a transition. But, I don’t accept a permanent tax as an acceptable position. I think we should be trying to eliminate them all. That’s what I mean when I say that they are all immoral. I think it’s immoral to ignore the moral problems inherent in them, and to refuse to want to eliminate those problems (when that’s possible, better than alternatives, etc.).

    As for whether business auditing is less intrusive than individual auditing…I haven’t thought about it very much. I guess I would agree that it’s less of a violation, but I only propose it as an interim improvement. I still oppose it as an entrenched, permanent, policy. Also, I suspect that it’s possible to have an auditable mechanism (with encryption, checksums, etc.) that confirms totals without revealing customer/supplier information.

    Interesting discussion. I had originally only made this post to be timely (April 15), and to mention the free cookies from Subway. Now you’ve got me thinking deeply about morality and politics.

  5. Ok, so if we agree that a consumption tax is better than the income tax we can cooperate in advocating for that change. I’m sure we can also agree on many spending and regulation cuts, asset sales, etc. that would further reduce the required revenue and raise the wealth of citizens. After we get there, maybe we should see if we can agree on further improvements.

    Out of curiosity, what do you see as the major obstacle(s) to a moral taxless world?

    The spending and regulation cuts we can agree on, not because of any judgment about the morality of the income tax, but because those regulations and spending programs might simply be non-rights-protecting (and therefore immoral even if they cost a nickel), or because they do protect some rights but we might agree that they do so to an extent that’s smaller than the immorality of the underlying taxation. Similarly, we could agree on asset sales that would allow the government to do the same amount of stuff with less; the government could then improve taxpayer rights by cutting taxes without affecting anyone else’s rights, which is a net positive.

    On the bigger question of obstacles to a moral taxless world, first let me recapitulate some basics: I’m not a deontological libertarian (translation: “duty-based”). Libertarianism for me isn’t a set of prescriptions for individual behavior, as in “You should never violate anyone’s rights” (e.g. the “non-aggression axiom”). Moral anarcho-capitalists tend to be deontological: any state is worse than no state at all because a state requires taxation or regulation. But what about a hypothetical crime victim harmed because there are no police? Not a sufficient reason for a state, because his blood isn’t on our hands. Setting up a state, stealing people’s money, wrongly locking people up or killing people, etc., means the blood is on our hands, and the overriding principle is that we not be tainted. (These are moral anarcho-capitalists [maybe Rothbard?, though I haven’t read him], not pragmatic anarcho-capitalists like Friedman or Barnett.)

    So, what’s libertarianism to me? It’s a way of characterizing states of the world as more or less moral or immoral. It’s a rights-based view, so all rights violations bring about some degree of immorality, and any immorality should be describable as a violation of some right. (Of course not all rights-based views are libertarianism; libertarianism requires certain rights like self-ownership and rules out certain other rights like welfare rights.) There’s no overriding duty to protect people’s rights, but there is a duty to act so as not to decrease total morality. So you generally can’t steal people’s money, unless you’re using the money to protect other rights to a greater extent. (I’d go further and say we should maximize total morality, so if you steal people’s money you must protect others’ rights to the maximum possible extent; in other words, protect a given amount of rights in the most efficient way possible.)

    The result is that you get a libertarian justification for government. (If government can do it, why can’t anyone do it? The answer: Yes, everyone could do it. In practice, though, once the government is already doing something, it generally won’t be optimal for someone else to also get into the same business because, provided the government is minimally moral and competent, an extra actor won’t be protecting that much in the way of rights, and the effort, including the extra tax theft itself, will be too expensive. So this will produce a monopoly on the use of force as a moral imperative, again provided the government is “good enough.”) In fact, this mandates government on libertarian grounds if that’s the most efficient way to protect the most rights.

    One can still be a pragmatic anarcho-capitalist on these grounds, a la Friedman or Barnett, if you’re pessimistic enough about the competence or morality of government, or if you think that *some* government is morally good but then you won’t be able to control a ballooning of government to an extent that makes it worse than nothing at all.

    Myself, I’m not so pessimistic about government — I think that the amount of government we have now is morally better than no government at all, though there’s still residual immorality because there’s still that requirement to *maximize* morality, and we can do even better with a smaller government. There is a slippery slope but I don’t think it’s slippery enough to make a likely future government better than anarchy. But I think the U.S. military does a lot of good, and it requires taxation; I don’t think the military would be anywhere near well-enough funded without taxation. Similarly, I think the police and courts do a good job, though in that case there’s at least a plausible case that private organizations could do an O.K. job. Even the regulatory agencies aren’t all bad: some of our environmental laws actually do prevent real harm, and probably the common-law system as it is today couldn’t handle the problem of real harm from diffuse injurers (like car pollution). There’s an extra distributional issue: even if everyone could get decent justice by paying for it, a free (that is, unpriced to the recipient) justice system might be morally better; that is, it might be more moral to (on net) take money from me to pay for your protection than to make everyone pay for their own protection.

    And so, I don’t believe in the immorality of taxation generally. I’m also not sure that a consumption tax is morally better than an income tax, for reasons I sketched above. Generally, I would still be on board with intermediate steps, and get off the train later. But not all conceivable intermediate steps would be good from my perspective. For instance, some anarchists might actually resist making the present system more efficient, because they might want to make the system look even worse, so as to hasten its demise. (This logic often doesn’t work, but I wouldn’t rule it out in all cases. Not that I’m implying that you support that.) I, on the other hand, might favor making it more efficient. (This too isn’t a done deal, but at least it’s a conceivable wedge between gradualists and extremists.) I might favor certain reforms that make raising revenue very visible and very unpopular, but at a certain point I might think taxation has become too unpopular. And on the spending and regulation side, I might favor the retention of some expensive programs.

  7. It’s interesting. I think you have more confidence than I do that the state (with democracy and institutional checks) will limit its power in the long run, so you don’t seem too worried about taxation; and you have less confidence than I do that most people would voluntarily contribute to national defense (which is to you so obviously valuable that it justifies stealing people’s money to fund it).

    I don’t think everybody is a brilliant philosopher, but I think most people (when given the chance) will recognize and adopt good values and make decent choices. You seem to have some of the same elitist position as both the left and the right: that the common folk are nasty and stupid and the only way for us all to survive and prosper is to force them to do what we (their betters) want. Well, it’s been my experience that most of those “betters” who want to impose their schemes on others are not better at all.

    I really think that if good values are dominant, then individual choices (helped by social pressure) will provide adequate funding for proper state functions. Especially in a world where the generation of wealth is not impeded by rights-violating state actions the way it is today.

    Check out this article:

    http://libertariannation.org/a/f21l4.html

    for some suggestions for how public goods could be voluntarily funded.

  8. “You seem to have some of the same elitist position as both the left and the right: that the common folk are nasty and stupid and the only way for us all to survive and prosper is to force them to do what we (their betters) want.”

    I guess I’m not seeing Sasha saying this anywhere.

    All I see is his saying “A legal system should violate some libertarian rights in order to prevent the violation of more libertarian rights.” From Sasha’s posts, I don’t know if the “should” should actually be “must” or perhaps “can.”

    This position would, for example, murder one person to prevent two murders, but not murder one person to save two lives. It isn’t saying “We are doing this for your own good” but “We are violating you because it will prevent more and worse violations.”

    Am I getting this right, Sasha?

    For example:

    A private police force would impose no rights violations on the world. A governmental police force would impose rights violations: all the taxes that go towards the police. This leads to a large “negative” in the consequentialist calculation.

    The government system would have to be at least that much better at preventing rights violations than the private system. So a merely OK private system would be preferred to a superb government system, even if they cost the same: private payments aren’t rights violations, while taxes are.

    Similar arguments show that only when governmental solutions are significantly better at preventing rights violations (not just better consequences or welfare, but fewer and better libertarian rights violations), would you want the governmental solution over the private one.
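    Purely as illustration (the scoring function and all the numbers below are invented, not from this thread), the rights-violation calculus described above can be sketched as:

```python
# Illustrative sketch of the consequentialist comparison above:
# a government system starts in the hole by the amount of
# rights violations its taxation imposes, so it must prevent
# that much MORE than a private system just to break even.
# All values are made-up units of "rights violations."

def net_violations_prevented(prevented, tax_violations=0):
    # Net score = violations prevented minus violations imposed
    return prevented - tax_violations

# A merely OK private system (no taxes) vs. a superb
# government system funded by taxation worth 30 "units":
private = net_violations_prevented(prevented=80)
government = net_violations_prevented(prevented=100, tax_violations=30)
print(private, government)  # 80 70 -- the private system wins
```

The point of the sketch is only that the taxation term enters the comparison as a cost in the same currency as the violations prevented, which is what makes the government option harder to justify.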

    Having said all that…

    I actually don’t agree with the idea that I should murder one person to prevent the murders of two people, or support a system that does. Sasha, do you have any examples that could help me think about it?

    Say I am the sheriff of a town, and a murder has occurred. I know for a fact that the criminal escaped and is long gone, never to return. Three suspects have been cornered by a mob, which is planning to lynch all three. I pick one at random, step up and say “I have incontrovertible proof that this person is the murderer” and shoot him in the head, thus minimizing rights violations (one murder vs. three). Is this consistent with your “maximizing morality” duty?

    Say I along with others are held by a terrorist as hostages. The terrorist tells me that, if I kill that hostage over there, he will let everyone else go; otherwise, he will kill us all. The hostage is saying “Please don’t…” over and over. I kill him, thus minimizing rights violations (one murder vs many). Is this consistent with your “maximizing morality” duty?

  9. Bill,

    I wasn’t trying to characterize Sasha’s entire position. I was just (over-?)reacting to this: “But I think the U.S. military does a lot of good, and it requires taxation; I don’t think the military would be anywhere near well-enough funded without taxation.”

    You bring up a good criticism of utilitarianism in general (and Sasha’s rights-violation brand in particular). These hypotheticals certainly seem to go against our moral intuitions.

    Of course, it could be that our moral intuitions are wrong. I would certainly resist murdering someone, even to prevent the murder of more other people…but if you raised the numbers of those other people then I suspect there’d be a point where I would decide that the one murder would be the right thing to do. It could be that I’d be doing a brilliant inexplicit moral calculation and changing policy at just the right time, or it could be that I was just wrong to be so squeamish when the numbers were smaller.

    I think that the truth is that the right thing to do depends on a lot of information not available in our hypotheticals; and even if we had the information, we don’t know how to figure it out yet. I think that as time goes on and our knowledge improves, we’ll learn more about how to do this.

    It’s hard.

    To me, this is a good reason to limit the power of governments as much as is reasonable, because I’m very skeptical that they’ll make the right decisions in easy cases, let alone hard cases.

    O.K., let me respond to Gil’s point (three posts up) first, then Bill’s point (augmented by Gil’s commentary immediately above) in a separate post.

    Gil characterizes and responds to my point like so: “I don’t think everybody is a brilliant philosopher, but I think most people (when given the chance) will recognize and adopt good values and make decent choices. You seem to have some of the same elitist position as both the left and the right: that the common folk are nasty and stupid and the only way for us all to survive and prosper is to force them to do what we (their betters) want. Well, it’s been my experience that most of those ‘betters’ who want to impose their schemes on others are not better at all.”

    First, let me get a minor point out of the way, minor not because it’s not important but because I agree with it. I agree that most so-called better schemes aren’t actually better, and that this is a good reason to limit government. It’s also not a disagreement with my fundamental point, which is that actually better schemes may exist. As I remarked above, you can believe (as I do) that actually better schemes exist and still believe (as Friedman and Barnett-type pragmatic anarcho-capitalists do) that you should never force their implementation. In other words, let’s separate the existence-of-a-better-way question from the should-you-implement-it question.

    So let’s go back to the basic point: When I claim that people wouldn’t support the military in a voluntary world, (1) am I saying that the common folk are “nasty and stupid,” and (2) if I am saying so, am I wrong?

    It’s interesting how Gil and I characterize the same behavior differently. Not supporting the military, Gil calls “nasty and stupid” and not a “decent choice.” I hang out with economists on a daily basis, and we have quite a different label for it. We call it “individually rational.” You know the basic argument about free-riding with public goods: when you choose how much you yourself should contribute to some social goal like national defense, you take as given how much everyone else is contributing, and no matter how much that is, your own contribution will make virtually no difference to the goal in question. But it will make a huge difference to you, who may have a spouse and kids to take care of and a mortgage to pay for and a pension plan to fund. Therefore, an economist doesn’t consider it “individually rational” to choose to contribute to a social goal, because it doesn’t actually make you better off.

    Of course, economists don’t say national defense shouldn’t be funded. What they say is: lots of people acting to make themselves individually better off doesn’t necessarily make everyone better off. No shame in that; the only problem is expecting that individually justifiable individual actions have to come together to make an ideal world.

    Now I’m not a utilitarian like these guys are, but my view is broadly similar in the following way: In a voluntary world, there’s no obligation to fund the military, for the same reason: taking everyone else’s contribution as given, your contribution WILL NOT AFFECT THE GOAL, at least not to such an extent as to make any kind of moral difference. It wouldn’t even be perceptible. Let’s sit an individual guy down in a voluntary world and tell him: How can you let down your fellow man? He’ll say his contribution makes no difference, and he’ll be quite right. He’s being “individually moral.” Even if he tries to maximize the morality of the world as best as he can, the right answer would not be to donate to the military. Nothing nasty and stupid there, nothing indecent. People are great, they make fine choices. I trust them! (Well maybe I don’t really, but if I did, same result.)

    But… if everyone donates that kind of money, then you can qualitatively improve the military. If I’m right that donations would be piddly in a voluntary world, “forced donations” would make the military into a powerful fighting force. If I’m wrong and voluntary donations would be larger through social pressures (but suppose I’m not *that* wrong and donations aren’t comparable to what we have now), forced donations would still make a significant difference, enough that we protect rights all over the world to a greater extent than the immorality of having that money taken away.

    (I know there are certain proposals to have voluntary mass donations, basically point 4 on the “Funding Public Goods” website you had a link to above: proposals for certain funds where you can pledge some money but at the same time you’re not on the hook unless the total collected exceeds $X, so you can be guaranteed not to donate unless it makes some difference. I’d be interested to see some of this in practice on a limited scale, and see how bad free-riding is in this system. The success of such programs is something that could conceivably win me over, if we try it first on a small scale. For the moment, I’m just assuming that these things don’t exist. As for points 5 and 6 (privatization and packaging), by hypothesis those don’t exist for national defense; points 2 and 3 (conscience), I’ve addressed in the individual versus global morality point above.)
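    For what it’s worth, the threshold-pledge funds described in the parenthetical above (pledge money, but be on the hook only if total pledges exceed $X) can be sketched as a small simulation. The fund, names, and dollar amounts below are invented for illustration:

```python
# Sketch of a threshold-pledge ("assurance contract") fund:
# contributors pledge money, but no one pays unless total
# pledges meet the target; otherwise every pledge is refunded.
# Names, target, and amounts are illustrative, not from the post.

def run_assurance_contract(pledges, target):
    """Collect pledges only if their total meets the target.

    Returns (funded, amount_collected). If the target is not
    met, nothing is collected: no one is on the hook.
    """
    total = sum(pledges.values())
    if total >= target:
        return True, total  # goal met: all pledges collected
    return False, 0         # goal missed: all pledges refunded

# Example: a hypothetical fund needing $1,000 in pledges.
pledges = {"alice": 400, "bob": 350, "carol": 300}
funded, collected = run_assurance_contract(pledges, target=1000)
print(funded, collected)  # True 1050
```

The guarantee the mechanism provides is exactly the one described above: a pledger knows in advance that their money is taken only in the state of the world where the goal is actually reached.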

    So all I’m saying is: I reject the characterization that I think people are so wrong-headed that they wouldn’t fund the military without force. I think it’s morally right to not fund the military in a voluntary world, and it’s also morally right to set up a system that forces people to give up their money to fund the military. There’s no necessary connection between people’s individual moral actions and a moral world.

    Bill’s point, I’ll try to address tomorrow.

  11. “You bring up a good criticism of utilitarianism in general (and Sasha’s rights-violation brand in particular). These hypotheticals certainly seem to go against our moral intuitions.”

    Just to be clear, the two hypotheticals I used actually pull my intuition in opposite directions, and my students often disagree violently on both of them. They weren’t meant as criticisms necessarily, just examples.

    Sasha’s position seems to give a clear answer to both, and I wanted to know if there were any other examples that could possibly help me understand and agree with it more.

  12. Sasha,

    In your model of human economic behavior, what percentage of people would tip in a restaurant where they never expected to return? Or would contribute to large charities that they believe do important work? Or would volunteer to join the military for reasons other than economic opportunity?

    How does that compare to the real world?

    I think that the definition of “individually rational” needs to be expanded to be useful. People don’t just want to maximize their monetary position. They have psychological needs too. They want to feel like they are part of something great. They want to express their support for things they think are good, and are willing to trade dollars to do so. They want to live in a world where people can collectively and voluntarily help each other and are often willing to make a “leap of faith” that enough other people are like them that they can help get the job done even though some will not choose to cooperate and help ease the burden of their fellows.

    I think it can be individually rational to contribute to large charities that one thinks worthwhile, or to volunteer for a cause you value despite having to sacrifice monetarily, or to tip if you think that tipping is a good thing. So, I think it would be individually rational to contribute to the military if you think it’s performing a vital function.

  13. Bill,

    Sorry. I guess I was projecting how I respond to those, and similar, hypotheticals. I think that Sasha’s formula is wrong. I also think that the dogmatic non-aggression-principle is wrong. I suspect that the right way to act lies somewhere in between, but I can’t tell you exactly what it is. And I don’t think anybody else can either.

    In a simple model of rational utility maximizers, no one would tip in a restaurant where they never expected to return. Is tipping in such a case totally inconsistent with rational utility maximizing? Maybe not, because in many cases the waiter gets to see the tip you leave and then gets to react. I once heard a waiter say loudly that the tip was inadequate. Other times the waiter has thanked me personally for the big tip (this was in Belgium, where the waiter recommended a beer and I didn’t like it so he took it off the tab). In face-to-face interactions, you feel good when you give someone something and feel bad when you don’t, and that can legitimately affect your calculations when you maximize utility. Similarly for giving to a homeless guy on the street. I wouldn’t expect tipping in never-return circumstances, though, when the tips are invisible (like if you pay at the counter and add the tip onto the credit-card receipt, say).

    Two extra points. First, I don’t think there’s anything immoral about not leaving a tip. It’s just a custom we have. So, back to your comments above, if no one tipped, I wouldn’t characterize this as people being nasty and stupid and not having decent values. Second, O.K., let’s grant that tipping is an example of non-utility-maximizing. (Though, as I’m fond of pointing out, if we can’t explain something rationally, it could be that we’re just not characterizing the game fully enough.) Behavioral economists are trying to detect regularities in people’s deviations from rational maximizing. It seems that a lot of such deviations happen in circumstances where the framing effects are large (that is, you set up face-to-face interactions, or you embed the game in a particular social context), and where the dollar amounts involved are small. As the dollar amounts get large (as they would be in national-defense donation), especially when the behavior is repeated (donations every year) and also if the behavior is secret (you don’t have to show people your donations), people “discover” the rational solution as time goes by.

    So even if tipping is extremely widespread, the success of tipping, which is comparatively insignificant, tells me very little about the feasibility of this scheme for national defense. It sounds more like something that lots of people might do in a post-9/11 situation when everyone is very patriotic but which quickly wears off. It would vary with the popularity of the current foreign policy venture (which you might say is desirable) and would also vary with the ups and downs of any given foreign policy venture (which I would say tends to be undesirable), and more importantly, would, I think, have a permanently low level.

  15. Sasha,

    “Behavioral economists are trying to detect regularities in people’s deviations from rational maximizing. It seems that a lot of such deviations happen in circumstances where the framing effects are large … where the dollar amounts involved are small … [and not] [when the dollar amounts are large] … when the behavior is repeated … [and when] the behavior is secret.”

    Do you have a good source on this? When I say this to the people at e.g. Crooked Timber, they laugh and point 🙂

  16. Hey Sasha,

    You indicated you’d reply to Bill’s earlier post today. I didn’t mean to interfere with that.

    Also, if you (like me) were reminded of the tipping scene from Reservoir Dogs

    Tipping (as well as donating to charity, and volunteering, etc.) is extremely widespread, and persistent as well. You call it “comparatively insignificant” and would like to ignore it, but I think it represents a serious criticism of the theory that supports your assertions about the necessity of taxation. You can always alter your theory to save it (e.g. “Except for things that are small amounts, or start with the letter ‘T’…”), but at some point that stops being honest.

    By the way, it’s always struck me as odd that the vast majority of people support the government forcing them to contribute to projects, but even though they think it’s worth forcing people to contribute to, and know how popular that is, they don’t think that people would choose to contribute voluntarily. If you think you should be forced to pay your “fair share” for something, why don’t you simply think you should pay it? Why don’t you expect lots of other people to think this (especially aided by lots of social pressure)?

    I understand the concern that people don’t like free-riders, or don’t want to feel foolish for paying for what they don’t have to; but this doesn’t stop them from doing lots of voluntary contributing. Why is it that projects that governments pay for with tax revenue today are viewed as so wildly different?

    Like

  17. By the way, I hope that nobody gets the impression that I consider Sasha to be nasty or stupid or indecent or dishonest or any other of the harsh words I use when describing some people who take some positions.

    I think Sasha is awesome, and I love discussing things with him.

    Like

  18. I need to comment on a couple of things… where to begin?

    1. Tipping may be widespread and persistent, but how much money does it account for? I tip 15% for restaurant meals and somewhat less for cab rides; a dollar or two when I get my hair cut; and not much else. Maybe half the restaurant meals are in places that I go to frequently, and so are almost all the hair cuts. So it’s really very little, and it’s all in face-to-face encounters. It’s going to take a *lot* more than that to convince me that people could give a lot, like serious proportions of their income, and repeatedly, to the distant U.S. Treasury.

    Charity and volunteering are interesting examples. Douglass North, the economic historian (Nobel, libertarian), is, as any good economist, a materialist 🙂 (good idea for a future discussion, but not now: do ideas *really* matter?), and spends a lot of time discussing how economic historians can account for mass movements like the civil rights movement or socialist movements, apparently driven by volunteer idealists who operated at substantial personal risk. His answer is consistent with mine, and consistent with the web site you linked above: you should look at how ideological organizations “bundle” their activism, by associating it with tangible private benefits that aren’t available to those who don’t contribute. The Nature Conservancy gives you mugs and calendars; Cato sends you a lot of little Constitutions; the anti-Vietnam War movement provides young people with mating opportunities. Again, it looks like those sorts of movements stand out because they’re out of the norm. Maybe the Department of Defense could send me Pentagon calendars and invite me to special Pentagon events, but the more money you’re talking about, the more serious these benefits need to be, and again it would take a lot to convince me this is workable.

    By the way: “at some point that stops being honest”??? “Things that are small amounts” is completely different from things that start with the letter T. My theory of how people act is basically a form of materialism, which is consistent with economists’ practice, and tipping is interesting precisely because it deviates from simple materialistic models. Behavioral economists take deviations from rationality very seriously, and even they don’t think you have systematic deviations from rationality when large amounts are involved and you don’t have face-to-face interaction. (Sorry Bill, I don’t have a specific source for this… I’ll see what I can find.) All models of how people act are false, in the sense that there’s always something they can’t explain, but successful theories are the ones that explain most things and also have a plausible explanation of why they don’t explain the things that they don’t explain. Now you can quibble with materialism if you like as a descriptive theory of how people act, but you really shouldn’t say it’s dishonest to save materialism by describing what makes observed volunteerism “special.”

    2. Pro-forced contribution types say (a) that goal X is really important, so important that it needs supporting with force, and (b) that people couldn’t be convinced to contribute voluntarily. You say this is “odd.” I did spend some time some posts up explaining why I don’t think this is odd at all but in fact is normal. In fact, I have two points.

    The first is the basic economists’ point, which doesn’t depend on any moral theory. Take some goal like national defense which is hugely beneficial for personal wealth. Because we assume that people are materialistic, we predict that people will give very little (not nothing, but very little) to this goal. You can quibble with this point if you’re not into materialism.

    The second point is a libertarian moralist’s point. Take some goal like national defense which is hugely beneficial for human rights, and assume that many people aren’t fully materialistic but also take into account doing the right thing. I still claim that giving very little to national defense is the right thing. First, I’m making an assumption about national defense, which is that you need to be well-funded enough to get any decent effect at all. For instance, in the Cold War context, given that you have a huge Soviet Union, you need to have a defense that’s appropriate to that threat; and the effectiveness of your own defense is very low and flat until you get to some pretty high funding level. Second, as a moral matter, people probably do (and definitely should) take others’ contributions as given when they decide on their own contributions. So it’s perfectly rational, and indeed moral, to say, “I won’t give to national defense because I want to support human rights, and the U.S. military won’t be effectively defending human rights unless it’s funded at least at $X, and my fellow citizens, alas, aren’t funding it at that high a level.”

    Now social pressure could affect the behavior of your fellow citizens. Fully effective social pressure would make your fellow citizens give a lot, and then you, who want to do the right thing, would choose to give because then your contribution would make a difference. But how would you react to social pressure? I would say, “Look, I still have my expectations about what my fellow people will do. Unless you change that, social pressure is useless: I’m already motivated to do the right thing, it’s just that the right thing is a zero contribution at current funding levels.” If everyone does the same, you can have low military funding even though everyone agrees that the U.S. military is a powerful rights-protecting machine and even though everyone has good and decent values and wants to do the right thing.

    In other words, what it means for people to act morally depends on the institutional structure they’re in. So it could be quite moral to not contribute in a voluntarist world. But at the same time, it could be even more moral to set up a world of forced contributions, because once your fellow human beings *have* to give, then you’re at a level where your own contributions will make a meaningful moral difference. Then it’ll be moral (as well as compulsory) to give to the military. This isn’t a formal model (I could come up with one if you insist), just an intuitive statement of why it doesn’t seem odd to me to say that everyone recognizes a worthy goal but at the same time doesn’t want to give unless everyone is forced to give. Of course, in my case, as I’ve mentioned, I happen to also be a materialist; I don’t think people will choose to do the right thing on a grand scale with serious money; I’m not optimistic about the power of rational argument in getting people to do the right thing (as opposed to the utility-maximizing thing for them); and I don’t think there’s in general much of a correlation between people’s willingness to contribute to defense and whether the system deserves to be defended (there may even be an inverse correlation, but I don’t think I could support that statement with facts, so I’ll just confine myself to saying that I don’t think there’s a positive correlation).

    O.K., more later.

    Like

  19. Ok.

    I don’t think we can settle this in an online discussion. I think that a clever combination of the ideas in the web page I linked to can provide adequate, voluntary, funding; while you are skeptical about that. I don’t really blame you. You might be right.

    What I’d like to do, then, is figure out where we agree, and whether there is a productive path that we agree on that will take us closer to where we’d both like to be. Say, for example, we try to fund 5% of national defense voluntarily and if that succeeds, progress to 10% the next year, etc. We would use all of the techniques discussed in the link, like guaranteeing that funding would have to reach a certain level or the money would be returned, social pressure, bundling, corporate sponsorships (“Coca-Cola, the official soft drink of the Navy SEALs”), etc.

    I think Ayn Rand proposed that enforcing private contracts could be voluntary and fees collected could be structured to pay for both the contract enforcement and defense. Does that seem like a workable bundling scheme to you (contracts probably run into the trillions of dollars and even a small percentage fee would add up to a lot of money)?

    I also have issues with $5 presents for $1000 donations being materialistic explanations, but I’d first like to know if you think there’s a viable path towards a tax-free world that leaves you (and others who think similarly) with enough security to try it.

    Like

  20. Sorry it’s been a few days… let me try to quickly catch up. But instead of answering your last post, Gil, I’ll start out with a few loose ends from before.

    1. On my “moral consequentialism.” I said before that libertarian rights theory tells us how to evaluate the morality of different states of the world; worlds with “more rights violation” (however you measure and define that) are less moral than worlds with “less rights violation.” Based on that characterization, I stated the following moral rule: “Act so as to maximize the morality of the world.” This would allow you to violate people’s rights provided you thereby protect other people’s rights to a greater extent. This is a libertarian justification of government.

    The main competition, among libertarian views, is the deontological view, according to which libertarian rights theory directly tells you how to act: “Act so as to not violate anyone’s rights.” This involves accepting a sort of action/inaction distinction, where you don’t have a duty to prevent someone else’s evil because that’s his fault, while if you tried to prevent it and thereby violated someone else’s rights (like if you took people’s money to set up a police force), you’d be at fault, and you should avoid the latter. This view, I think, leads to anarcho-capitalism.

    There are intermediate views, like Gil’s, but I’ll save that for point #2.

    Bill presented me with a few hypotheticals to test my view; here they are:

    Bill: “I actually don’t agree with the idea that I should murder one person to prevent the murders of two people, or support a system that does. Sasha, do you have any examples that could help me think about it?

    “Say I am the sheriff of a town, and a murder has occurred. I know for a fact that the criminal escaped and is long gone, never to return. Three suspects have been cornered by a mob, which is planning to lynch all three. I pick one at random, step up and say ‘I have incontrovertible proof that this person is the murderer’ and shoot him in the head, thus minimizing rights violations (one murder vs. three). Is this consistent with your ‘maximizing morality’ duty?

    “Say I along with others are held by a terrorist as hostages. The terrorist tells me that, if I kill that hostage over there, he will let everyone else go; otherwise, he will kill us all. The hostage is saying ‘Please don’t…’ over and over. I kill him, thus minimizing rights violations (one murder vs many). Is this consistent with your ‘maximizing morality’ duty?”

    This is Sasha speaking again: Both of these hypotheticals have something in common: how we respond to them doesn’t only affect this case but also affects cases down the road by changing people’s incentives. For example, take the second hypo, with the terrorist holding hostages. If we do what the terrorist says, we do, by my theory, maximize morality in the given case. But that’s the wrong timeframe to be looking at: instead, we should realize that if we do what the terrorist says, then he’ll know that he can always get us to do what he wants by blackmailing us in this way, and he’ll keep doing it with a different hostage each time, for as long as he wants. On the other hand, if we refuse to negotiate, then he may realize that he won’t get any useful concessions out of us, and be discouraged next time. Again, in the static case, it doesn’t look like it maximizes morality because all the people die the first time around, but when you properly look at it dynamically, then refusing to negotiate might be a better course. Of course, a third course: fighting back and losing, say, 10% of the village, but really reducing the chances that he’ll strike again, might be an even better choice.

    Similarly, take the sheriff example. Killing the one suspect instead of the three seems to maximize morality in the static case, but when you look at it dynamically, the sheriff’s action would tend to legitimize vigilantism and lynch mobs, and moreover, to the extent that innocent people get punished, that dilutes deterrence generally for the future. While on the other hand, if the sheriff lets the mob kill all three (assuming he can’t stop the lynch mob entirely) and then prosecutes the members of the mob, that may prevent future lynch mobs. So there, too, my theory says to look at all the future consequences, not just the consequences in the one case.

    A better example would be, for instance, whether it’s acceptable to have a justice system that occasionally punishes innocent people (through natural error) because that’s the price to pay for punishing the guilty. Or to have policemen who can shoot at fleeing suspects or engage in high-speed car chases even if sometimes they might hit an innocent bystander. Or whether we can bomb an enemy country even though we know for sure that we’ll hit some predictable number of innocents. In all these cases, one frequent move is to say “It’s O.K. as long as we’re not trying to hit the innocents and as long as we’re trying to minimize the deaths of innocents.” But that’s not satisfactory: even if you’re trying to minimize the deaths of innocents and aren’t trying to hit them, surely it becomes morally unacceptable at some point, say if the best bomb that you can use to target Osama will also kill a million people around him? So I’d rather say in this case: we’re intentionally setting up a system that will violate the rights of some people, but the rights-violation is still lower than it would be (due to criminals) if we didn’t have that system, and that’s the least rights-violating system we can think of to achieve the rights-protective goal.

    Also, if you want an even more streamlined example, just imagine you (or, say, you and your whole family) are at the bottom of a hill and there’s a car running toward you down the hill. You can’t move out of the way, but you can push a button and set off an explosive charge on the hill which will vaporize the car and save you. Unfortunately, the evil mastermind who set the car in motion is long gone, and some innocent guy (another victim of his) is locked in the car. Should you set off the explosive charge? An analogous example: would it have been O.K. to shoot the airplanes down, killing everyone aboard, before they hit the World Trade Center?

    O.K., this has been long enough on this point, so point #2 goes in a new post.

    Like

  21. Now, on to point #2. I had said above that another candidate philosophy is Gil’s view:

    “Of course, it could be that our moral intuitions are wrong. I would certainly resist murdering someone, even to prevent the murder of more other people…but if you raised the numbers of those other people then I suspect there’d be a point where I would decide that the one murder would be the right thing to do. It could be that I’d be doing a brilliant inexplicit moral calculation and changing policy at just the right time, or it could be that I was just wrong to be so squeamish when the numbers were smaller.

    “I think that the truth is that the right thing to do depends on a lot of information not available in our hypotheticals; and even if we had the information, we don’t know how to figure it out yet. I think that as time goes on and our knowledge improves, we’ll learn more about how to do this.

    “It’s hard.

    “To me, this is a good reason to limit the power of governments as much as is reasonable, because I’m very skeptical that they’ll make the right decisions in easy cases, let alone hard cases.”

    This is Sasha speaking again: Gil, I actually think your view is very similar to mine, but you’re resisting the way I’m packaging my view rhetorically.

    I hope my previous post cleared up some issues. As I said above, my view makes explicit moral trade-offs, but also says that you should consider all future consequences, not just the results in the immediate case. This has two main consequences.

    1. I say, “Act so as to maximize world morality.” But how do you know what maximizes world morality? Shouldn’t we take the Hayekian critique, that is, the knowledge problem, seriously? Yes. And that’s why my moral imperative, in most cases, tells you to do nothing in particular. Most actions have some effect on the morality of the world. But in most cases, we don’t know what effect it has. So nothing is required of us in these circumstances. As you say, Gil, the right thing to do depends on a lot of information not in our hypotheticals. My view is more relevant in the Big Decisions, like whether to go to war or establish a police force or the like, where we can see clearly what rights certain activity violates, have a decent idea of possible ways to prevent those rights violations and their probable costs, and so on.

    2. I say that we should consider the dynamic consequences, not just the immediate case. That means that whenever you’re wondering whether to take some action that increases government power, it’s perfectly legitimate, and in fact required, to take into account the possibility of government abuse of power in the future. As you say, Gil, that’s a good argument for limiting the power of governments.

    Where do we differ on this philosophical level? Gil, you often state your views in an absolute form and then qualify them with a vague exception. For instance, in your 4/25 post on the draft, you say that the draft is immoral because “slavery is wrong,” but… “I admit that in emergency circumstances I’d probably resort to hijacking somebody’s life in order to save my own or the lives of others (if I had no better options), but I’m extremely resistant to institutionalizing this as a government policy.” Similarly, in your 3/25 post on antitrust, you write: “It seems to me that voluntary activity should not be punished or restricted, in general, even if those restrictions would lead to improvements for other people. I might agree to exceptions to this rule under rare, emergency, situations; but I think it’s a mistake to institutionalize this sort of power in the hands of people who aren’t me.”

    An advantage of this approach is that it allows you to take a strong moral line, and that can be good, even if the moral line has exceptions, because principles are things that people should absorb for non-exceptional cases, and they’re easier to absorb if they’re easier to formulate. My problem with this approach is twofold: (1) How do I recognize a “rare, emergency” situation? If I come to you with a proposed government intervention, I don’t have any clue whether it would pass your “rare, emergency” test or not. (2) Strictly speaking, if there are exceptions, your absolute rule is false. The absolute rule may be the correct rule to follow in the non-emergency situations, but a complete theory should contain within itself an account of what the exceptions are, what the proper rule is for those exceptions, and what it is that makes the exceptions exceptional.

    I say “complete theory,” and of course it’s silly to require every blog post to contain a complete theory. But I think that if you’re going to bring up the possibility of exceptions, you should at least hint at why this case isn’t exceptional. And if you don’t bring up the possibility of exceptions, you should be prepared to be argued against as an anarcho-capitalist, and therefore either be prepared to argue that position or be prepared to trot out the exceptions part of the theory in the comments after the first guy raises anarcho-capitalism.

    Anyway, my approach is kind of the opposite: I drop the absolutist part of the theory entirely and I treat every case as one where balancing is appropriate. But my theory of balancing, in practice, adopts certain beliefs about the tendency of governments to abuse their power and other slippery-slope-type concerns. Moreover, my balancing is explicitly only a RIGHTS-balancing theory, so by its very framing, I’m ruling out any balancing of rights versus utility, for instance if people say, “We should have antitrust to guarantee lower consumer prices and higher quality” or something similar. (So in some cases, my statements may even look more strongly libertarian than yours.)

    But we may still come to the same results on all questions under either of our approaches. Now of course we may still differ in the implementation, and that’s what I’ll take up in my next post with my response to your user-fees approach.

    Like

  22. Now let me have some tentative comments on your last post, Gil. You say:

    “What I’d like to do, then, is figure out where we agree, and whether there is a productive path that we agree on that will take us closer to where we’d both like to be. Say, for example, we try to fund 5% of national defense voluntarily and if that succeeds, progress to 10% the next year, etc. We would use all of the techniques discussed in the link, like guaranteeing that funding would have to reach a certain level or the money would be returned, social pressure, bundling, corporate sponsorships (‘Coca-Cola, the official soft drink of the Navy SEALs’), etc.

    “I think Ayn Rand proposed that enforcing private contracts could be voluntary and fees collected could be structured to pay for both the contract enforcement and defense. Does that seem like a workable bundling scheme to you (contracts probably run into the trillions of dollars and even a small percentage fee would add up to a lot of money)?”

    I would have no problem with trying to fund certain activities voluntarily and seeing where we go from there. I’d be very glad to stop supporting tax funding of these activities if I see that one can have voluntary funding. With something like defense, I’d want to start small, because the stakes are very high if the project fails; and in fact this is my general gradualist position: if you’ve got a project which only works if we get society 100% there, then I’m not on board. On a side note, I think we should also take seriously the possibilities of improper influence if some program relies too heavily on, say, Coca-Cola sponsorship and then starts listening too attentively to what Coca-Cola wants. The way I judge all these programs is according to the whole philosophy I sketched above: the criterion is total rights protection. If voluntary funding means that certain rich people fund all of a certain project and then dictate what goes on in a way that undermines the project’s rights-protective mission, then it may be more rights-protective, on balance, to stick with the system of intrusive, forced contribution. (This is just a hypothetical; obviously, corruption goes on even under the current tax system, but I’m just sketching considerations that, if true, might make the current system still preferable.)

    As for user fees, we may have different views on their moral status. Your suggestion (your characterization of Ayn Rand’s suggestion) involves collecting a fee for the right to have your contract enforced, and then using those fees to actually enforce the contract and also possibly to run unrelated programs (defense). The main advantage of user fees is that they’re not intrusive in the way that income taxes are. Let me work toward a theory of their disadvantages.

    First, by my moral theory, breach of contract is immoral. Everyone is entitled to make whatever contracts they like and they have a right to the other party’s performance. If everyone performed their contracts, we might not need a court system. We set up a court system to remedy a situation of rights-violation. We tax people generally (leading to some rights-violation), but we then use that money to enforce contracts (leading to some, hopefully more, rights-protection). Now suppose we fund contract enforcement through user fees. (The same argument applies if we say instead that people should just pay for their own private contract enforcement through private dispute resolution agencies, though of course then you can’t use the fees to fund defense.) It’s “voluntary” in a sense (you don’t have to pay if you don’t want a guarantee that your contract will be enforced), but it’s still an expenditure that’s caused by the wrongdoing of the other party, so it’s not obvious that it’s just for the wronged party to bear the cost of undoing that wrongdoing. Suppose, for instance, that some people can’t afford the fees so they make contracts and then are the victims of breach. Even if you make the fees a percentage of the amount at stake, some marginal projects may decide to go through without protection, so you might still have some unpunished rights-violation.

    None of this says which world is preferable, and maybe a user-fee world really would be better, though I don’t have any information to judge one way or the other. But just two points: (1) Again, the ultimate yardstick for me is the total quantity of rights violation. We have less intrusiveness under a user fee system, which is good. But the total amount of rights violation due to breach of contract may go up, which would be bad. (2) I wouldn’t call this a “tax-free” world. It’s free of a more-intrusive income tax, but user fees are still a tax, because we’re making rights protection — something you have a right to — conditional on willingness or ability to pay. I don’t have a well-developed theory of how much people should pay for the services they use, but for those services that protect their rights, I’m comfortable with a system where they’re paid for generally and then used without price by whoever uses them. (User fees may be attractive for services that don’t protect people’s rights, like, say, roads, universities, or bank insurance, but I take it the argument for those classes of services is a “second-best” argument that starts out with “Well, the government shouldn’t run these anyway, but if it’s going to….”)

    Like

  23. Hey Sasha,

    Thanks for your excellent comments.

    I think you’re right that our two positions are very similar in how we would approach most actual situations, and the difference is largely rhetorical. As you suggest, I like to apply general principles because I think they’re very useful, both practically and rhetorically. But I do recognize that a simple form of these principles doesn’t adequately handle all cases. I think the difference is that we both hide the ambiguity of our approaches in different places. Mine is hidden in the exceptions. Yours is hidden in the practical difficulty of actually carrying out your calculation (how do we weigh rights against each other? how long a time frame do we consider? how do we deal with uncertainty about how people will react to our actions? etc.).

    It seems that you don’t like to state general principles that are usually helpful and useful, but are technically false if they claim to cover all cases. I don’t like to state principles that aren’t very helpful, on their own, in most cases. I think that the way you apply your principles and the way I apply mine will yield the same results in the vast majority of cases we’re likely to encounter.

    By the way, it’s flattering that you have read my posts carefully enough to pull out examples of my rhetorical style.

    I’m glad somebody is reading them!

    Like

  24. Sasha,

    “Both of these hypotheticals have something in common: how we respond to them doesn’t only affect this case but also affects cases down the road by changing people’s incentives.”

    I agree, but I don’t think this helps. Even if we stipulate that no other rights-violations will happen as a result, I still don’t intuitively think it’s okay to murder one person to stop the murders of two.

    Perhaps I have a really well-designed intuition; it is telling me this is wrong because it knows that in real situations such actions will affect what happens in the future. So my intuition isn’t buying the stipulation. I don’t think that is the case; I’m not that clever 🙂

    “whether it’s acceptable to have a justice system that occasionally punishes innocent people (through natural error) because that’s the price to pay for punishing the guilty”

    You may know more about this than I do, but my understanding is that in a private system of law, an erroneous conviction is not a criminal matter but a civil one. In other words, erroneously sentencing someone to death is not the same as murder (unless it was intentionally or recklessly done). So I’m not sure this example works.

    In other words, balancing murders against murders is different from balancing accidental deaths against murders. More below.

    “would it have been O.K. to shoot the airplanes down, killing everyone aboard, before they hit the World Trade Center? ”

    Yes, but not for rights-violation reasons. Why? The situation doesn’t ethically change for me if the plane was about to hit the WTC accidentally, through no fault of anyone. Does it change for you?

    Perhaps I am just getting hung up on the difference between “unintentional rights violations” like car accidents and “intentional rights violations” like vehicular manslaughter, but I think it is important.

    For example, is “reducing the number of deaths from car accidents” an okay thing to tax+regulate for, but “reducing the number of deaths from lightning strikes” not okay?

    Like

  25. Sasha,

    Gil:

    “It seems to me that voluntary activity should not be punished or restricted, in general, even if those restrictions would lead to improvements for other people. I might agree to exceptions to this rule under rare, emergency, situations;”

    Sasha:

    “I drop the absolutist part of the theory entirely and I treat every case as one where balancing is appropriate.”

    Gil:

    “I think you’re right that our two positions are very similar in how we would approach most actual situations, and the difference is largely rhetorical.”

    I think the differences are substantive, even though they agree in most situations. I don’t think that you two agree and are just saying it in different ways.

    For example, say we have a government based on Sasha’s principles. Furthermore, at this point in time, it has only enacted policies that Gil agrees with (e.g. “only in rare, emergency circumstances” type policies).

    A new policy is suggested; it will, in the long run, accounting for incentive changes and slippery slopes, lead to one hundred fewer murders, but will require the government to murder fifty innocent people. These are the only consequences that will occur, and they will certainly occur.

    If we knew all this, it seems that Sasha would like this policy and Gil would not. Am I correct?

    (I know that real situations are more complicated and uncertain than this; however, there are methods to reduce uncertain, complex, and dynamic situations to a (large) set of simple situations like the above.)

    Like

  26. Sasha,

    “because we’re making rights protection — something you have a right to — conditional on willingness or ability to pay.”

    Why do we have a right to rights protection? I have a right not to be murdered, but I don’t have the right to force someone to protect me from a murderer, do I? Or are you saying that the rights violations involved in forcing people to protect me are outweighed by the rights violations that would happen to me otherwise?

    “Everyone is entitled to make whatever contracts they like and they have a right to the other party’s performance.”

I don’t agree; contracts are about exchanging property rights, not performance. For example, a contract between you and me might look like the following:

    “If Bill paints Sasha’s house by June 15, 2004, Sasha will pay Bill $1000 (wages). If Bill does not paint Sasha’s house by June 15, 2004, Bill will pay Sasha $2000 (penalty). Judge Gil will decide whether or not the house is painted.”

    Note that nowhere does it say “Sasha has the right to Bill’s performance i.e. Bill’s painting Sasha’s house.”

    (As an aside, this type of “complete contract” allows contract law to be completely privatized, so the idea of “user fees” also disappears).

    Many libertarian theories disallow the exchanging of personal rights; for example, if you contract to give your kidney to someone, and you decide later that you don’t want to, they can’t force you to; they have to accept a penalty from you instead (no specific performance, I think it is called). In contrast, if I owe you $100, you can force me to give it to you.

    “If everyone performed their contracts, we might not need a court system. We set up a court system to remedy a situation of rights-violation”

    I disagree here as well, unless you just mean “court system for contracts.” If I hit you with my car accidentally, and we can’t agree on what to do about it, we need a court system to decide what I owe you as damages, right? At minimum, we would need to set up a private court system for such situations, perhaps a la Friedman, and that type of system works just as well (or as badly) for tort cases as criminal cases.

    Like

  27. Bill — your last hypo is a bit too reduced-form. Hypos like shooting the WTC plane out of the sky are easier because it doesn’t look as though there’s a big slippery-slope effect. Letting the government murder 50 innocent people has obvious slippery-slope effects, and when you take them away with the remark “accounting for incentive changes and slippery slopes,” you’re only making the hypo murkier because it’s not clear how you’ve “netted out” those effects. So really, what you’re saying is something like: “Behold a new policy to reduce murders by 100. It requires an initial outlay of 10 murders, then 20 accidental killings in the future when this new power is incorrectly used, then 20 murders in the future when we get an evil government taking advantage of these powers… for a total of 50 murders.”

    Or something like that. It could be a different breakdown — but the actual story is important for me to know whether to support the policy. (Similarly, Gil says that “the right thing to do depends on a lot of information not available in our hypotheticals.”) I don’t go by pure body count; the details matter. And it’s those details that would determine both whether I would think the moral calculus favors the side of intervention and whether Gil would think this qualifies as an “emergency situation.” In fact, as we’ve discussed in these last few posts, both the moral calculus and the “emergency situation” idea are vague enough that no one has enough information to say whether Gil and I would disagree on any given issue. And moreover, even if we did disagree, it could just be a disagreement over how to value the different effects, not a disagreement on the fundamental issue of whether balancing is appropriate.

    On your previous post responding to my hypos: I’m not sure I understand the civil-vs.-criminal distinction “in a private system of law.” I understand that if someone voluntarily signed up for a private legal system, he would be accepting the risks of error, including whatever risk of being erroneously put to death exists in that system. That doesn’t answer the question of the morality of having the government set up a mandatory system of law where people are involuntarily subject to that risk. Knowing the law and the categories of civil and criminal doesn’t help us: it’s a pure moral question. And in any law-enforcement system you might intentionally set up, at some point you or your agent is going to intentionally flip the switch on a particular person, or intentionally shut the prison doors on a particular person, who might turn out to be innocent. (Unlike the typical car accident, where neither the manufacturer nor the driver intentionally targeted anyone.)

    I’ll agree with you that the WTC example wasn’t great — though for different reasons; the situation doesn’t change for me if it was accidental, but that’s because all these passengers were going to die anyway. So let’s go back to a couple of hypos from my previous post:

    1. Bombing the building where Saddam’s sons are hiding, even though an innocent grandmother, baby, and puppy are clearly visible on the same floor.

2. [Slightly changed from the previous to alter identities:] A’s family is at the bottom of a hill and there’s a car running toward them down the hill. They can’t move out of the way, but you can push a button and set off an explosive charge on the hill which will vaporize the car and save them. Unfortunately, the evil mastermind who set the car in motion is long gone, and an innocent guy B (another victim of his) is locked in the car.

    Finally, by my reasoning above, reducing the number of deaths from lightning strikes isn’t an O.K. thing to tax and regulate for. On the merely innocent car accidents, I lean toward the view that even accidental rights violations are rights violations, so it’s acceptable to intervene there too. (The tort system is a species of government regulation, though it operates after the fact and is generally monetary. That all fits within my theory: sometimes before-the-fact regulation is the most moral, sometimes after-the-fact monetary sanctions, sometimes after-the-fact bodily punishment. It depends on things like how much innocent stuff you’ll end up preventing along with the bad stuff.)

    Like

  28. I understand the potential ambiguity in my statement that rights protection is “something you have a right to.” This does NOT mean that you can force people to protect you. Because notice that my rights theory (as outlined in the posts above) doesn’t give anyone any absolute rights: in my rights theory, no right is absolute, and all rights can be violated provided you can thereby achieve some bigger morality.

    All I mean by that statement is that rights protection is something that increases the morality of the world. The result is that I, as designer of the world (or, if you like, as moral judge of different conceivable worlds), could choose to tax people (an otherwise immoral act) and use the money to fund rights protection (a morality-increasing act).

    (Side note: when I say I’m entitled to your “performance,” that doesn’t necessarily mean the specific performance of the underlying act we contracted on, that is, house painting. If the full contract says “Either paint my house or pay me $X, and Gil decides whether the house is painted,” then performance means “either painting my house or paying me $X, depending on Gil’s decision regarding whether the house is painted.” That’s the performance I’m entitled to. For instance, if you refuse to listen to Gil, you’re violating my contractual rights. We could make a bigger contract: “Paint my house or pay me $X according to Gil’s decision, or (if you choose not to listen to Gil) pay me $Y, whether you’ve listened to Gil to be determined by Eugene….” But then too there’s something that violates that contract. So my point still stands: however you specify what the contract commands, you have a right to performance of that contract. And as I’ve explained above, this doesn’t mean you’re automatically endowed with a power to command people to secure that performance… but it does mean that, if there isn’t a better way, it can be moral overall to tax or regulate people to secure that performance.)

    Now I agree that you could entirely privatize contract enforcement. And maybe that system would work wonderfully. But, in principle (and right now I’m not making statements about what’s likely in the world, I’m just setting out a philosophical position) it’s possible that a privatized system wouldn’t maximize morality. We could make a contract saying “I’ll paint your house in exchange for $100 or else I’ll pay you $200”; we might decide not to buy any enforcement services, you pay me $100 and I refuse to do anything. My only claim is that it may be a moral course of action for you to steal someone’s money to hire someone to break open my vault and remove the $200.

    Like

  29. Sasha,

I think that your rule (as you would personally use it) will often yield the right course of action because rights are so very important.

    But, how do you respond to a liberal who says:

    “What’s so special about rights protection? Yes, it’s a good thing, but there are many other good things, too. Why would you consider violating rights in order to socialize rights protection, but not food, education, or emergency medical care? Aren’t these important things, too? Wouldn’t you steal bread to keep from starving? Why not steal a little from lots of people to keep thousands from going hungry? Or to help disaster victims, etc…”

    The liberal is just as skeptical about sufficient voluntary funding for these things as you are about defense.

    Like

  30. Well, my response is much the same as yours, I suppose, whatever that is. I take the following two-pronged approach:

    First, figure out what’s morally valuable. This separates people into different boxes: utilitarians, rights people who believe in rights theory A (e.g. libertarians), rights people who believe in rights theory B (e.g. people who believe in positive rights), people who believe in a vision of society C (e.g. Christians), people who believe in a vision of society D (e.g., socialists), etc.

    Second, figure out a politics appropriate to that vision. For us, politics is an extension of ethics, so let’s figure out an ethics appropriate to that vision. Here, libertarians who agree on what’s valuable might go for a deontological approach (rights give you direct commands to respect people’s rights) or a balancing approach (act so as to maximize rights), in the way that we’ve discussed above.

    Anyway, when the liberal asks, “Why rights?”, it’s aimed at everyone who, at step 1, has chosen libertarianism; it’s not a particular jab at me because I’m a balancer. Nor do I think I’m somehow more vulnerable because I’m a balancer: my decision to balance doesn’t stem from a wishy-washiness that believes lots of stuff is valuable so let it all in and let’s make compromises; it stems from a principled view that certain things (and only those things) are morally valuable and that what’s important is the consequential effect of our actions on those things. The non-balancing justice liberal could just as easily ask an anarcho-capitalist why he thinks rights are so important when we have these moral imperatives to expropriate the rich, and in fact, you often see dramatic-looking ideological shifts between one extreme and another.

    Now to the substantive answer to the question: My rights theory rests on a kind of deep moral intuition about what makes for human dignity. I haven’t gotten it to the point where I can ground it in first principles, maybe because I haven’t found a satisfactory set of first principles. Some libertarians ground their beliefs in a kind of (sometimes implicit) cost-benefit analysis about what makes for a prosperous society or serves “the common good” (there are strains of this in Hayek and in Objectivism); others just state a “non-aggression axiom,” which by its name is just an axiom. Anyway, my answer is that I don’t have a knock-down answer here, but I don’t think I’m any more vulnerable than any other libertarian because I do balancing.

    Like

  31. Sasha,

    “So really, what you’re saying is something like: “Behold a new policy to reduce murders by 100. It requires an initial outlay of 10 murders, … for a total of 50 murders.” ”

    That is exactly the sort of thing I was saying. Perhaps money would have been easier :-). However, I stand by the idea that if a theory can’t work in simple situations it is suspect, since my way to reason about complicated situations is to break them down into simple situations.

For example, we experts could model the policy by specifying 10,000 possible outcomes of the policy and assigning a probability to each. Each outcome would look something like the above, since the uncertainty would have been modelled explicitly. Similar things can be done with complexity and dynamics. (This is what I do for CEOs and patients who face difficult decisions.)
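The decomposition Bill describes might be sketched as follows. This is only an illustrative toy, not his actual methodology: the function name, the outcome scenarios, and all the numbers are invented for the example. Each complicated, uncertain policy is reduced to a set of simple outcomes, each with a probability and a definite effect, and the pieces are then aggregated.

```python
# Hypothetical sketch: model an uncertain policy as a list of simple
# outcomes, each with a probability and a net effect (murders prevented
# minus murders committed), then compute the probability-weighted total.
# All scenarios and numbers below are made up for illustration.

def expected_net_effect(outcomes):
    """outcomes: list of (probability, murders_prevented, murders_committed)."""
    total_p = sum(p for p, _, _ in outcomes)
    assert abs(total_p - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * (prevented - committed)
               for p, prevented, committed in outcomes)

# The stylized policy from the hypo, with the uncertainty made explicit:
outcomes = [
    (0.6, 100, 50),  # works exactly as advertised
    (0.3, 100, 90),  # the new power is abused; more innocents are killed
    (0.1, 0, 50),    # the policy fails, but the initial outlay is still paid
]
print(expected_net_effect(outcomes))
```

The point of the exercise is that the moral theory only ever has to evaluate the simple rows; disagreement about the complicated case then reduces to disagreement about either the probabilities or the verdicts on the rows.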

    “I don’t go by pure body count; the details matter.”

    But you would go for pure similar-murder count, right? Things might get hairy if “murder in order to reduce murders” was weighted differently than other murders; it starts to beg questions, no?

    “Because notice that my rights theory (as outlined in the posts above) doesn’t give anyone any absolute rights: in my rights theory, no right is absolute, and all rights can be violated provided you can thereby achieve some bigger morality.”

    Right :-). However, there are two levels of “rights” in your theory that I was confused about:

    1) the rights that _enter_ the moral calculus (e.g. any murder is a rights violation), and

    2) the rights that _exit_ the moral calculus (e.g. murders that decrease the overall number of murders are allowable).

    I was wondering if protection of rights is a (1) right or a (2) right. You said it was a (2) right. Are there different words we could use for the two types of rights?

“A’s family is at the bottom of a hill and there’s a car running toward them down the hill. They can’t move out of the way, but you can push a button and set off an explosive charge on the hill which will vaporize the car and save them. Unfortunately, the evil mastermind who set the car in motion is long gone, and an innocent guy B (another victim of his) is locked in the car.”

    Again, I’m not sure that the situation is any different if the deaths of A’s family are accidents rather than murders. Say B is sleeping in a big log, which gets dislodged by lightning :-), and is now flying down the hill towards A’s family …

    You would still want to kill B to protect A’s family, right? Even though no murders happen if A’s family dies and B lives, while one murder happens if B dies and A’s family lives. (Now you could argue that we don’t really murder B, since it is other-defense, but we wouldn’t do so in your example either then.)

    This avoids the issues of incentives, slippery slopes, and “they would have died anyway.”

    “1. Bombing the building where Saddam’s sons are hiding, even though an innocent grandmother, baby, and puppy are clearly visible on the same floor.”

You had to throw in the puppy, didn’t you 🙂

    I don’t quite understand your example; I don’t understand how “bombing Saddam’s sons” fits into your theory. Are they going to violate rights very soon, on par with the three murders? Can I change it to the following?

1a. A gunman has three hostages and is about to shoot you. You can kill him first, using a grenade, but you would also kill the hostages.

    Does this capture the important issues?

    Here, I think it could easily be okay to kill the three hostages to protect yourself, even though it is three murders to one.

    (An even side-er note:

    (“For instance, if you refuse to listen to Gil, you’re violating my contractual rights.”

    (Well, not the way I see it; once Gil rules, the contract has been fulfilled. I just happen to have $2000 of yours in my bank account. If I refuse to give it to you, then it’s the same as if I stole it (which is not a contract problem, but a criminal one).)

    Like

  32. “My rights theory rests on a kind of deep moral intuition about what makes for human dignity”

    Wow; I would have thought that that would lead pretty directly to a deontological view; perhaps you could fill in some of the top level details?

    Like

No, the hostage situation still isn’t good — hostages bring in too many issues of incentives for gunmen. If I leave the gunman alive, he may still kill the hostages or use them to immorally extort something else. We’ll need to think up some better hypos: even in accidental cases, there are accidental rights violations, which in my book are still rights violations in a way that lightning strikes aren’t. I like the Saddam’s sons example, since they had access to the Iraqi Machinery of Death, so killing them saves lives by hastening the surrender of the Iraqi military. On the other hand, people who really distrust the U.S. military might not like that hypo because it’s still got thorny issues of bad incentives for the U.S. military. Perhaps you like historical examples because there’s not that much risk of a slippery slope anymore? A lot of World War II decisions involved killing innocents so as to get the bad guys and save more lives down the road. I’m behind that sort of thing. Like, say, bombing the watchtower of a prison camp even though you see some prisoners standing nearby when you aim the bomb. Some people claim that the Hiroshima bombing had these sorts of benefits (hastened the surrender of the Japanese and thus obviated the need for a bloody invasion of Japan), and I’m sympathetic to that, but I don’t want to argue the details.

The whole thing about the contract and Judge Gil is getting complicated, so I’ll just let it lie unless it turns out to be relevant again at some point.

    I wasn’t aware that I was using the same word for two different concepts. I like to say: “You have the right to life, liberty, etc.”, and “right” in this context is a non-absolute thing. (You might complain about this usage because “right” may not connote something non-absolute to you, but “right” is often used in a non-absolute way, especially among non-libertarians, in the legal context, etc.) Then I like to say: “It’s morally acceptable to violate the right to life, etc.”, which indicates the end-point of my balancing process.

Finally, as to your first point about your hypo: I’m not complaining about simple cases, I’m just complaining about cases that don’t give enough information. A simple case to test the theory should be sufficient to test the theory. But if you really insist, how about this: Murdering 50 innocents to save 100 other innocents is morally acceptable to me, provided that’s the entire hypo. But in reality, a case that presents itself this way will be substantially more complicated: in particular, the 50 innocents you murder will become many more because of the powers you give to whoever’s doing the killing. So the balancing process leads to an ultimate guideline that it’s almost never acceptable to murder innocents to save other innocents, except (a la Gil) in extreme cases — where the number of people you save is much, much greater than the number you kill, or where the situation is unique enough that the precedential effects for the future are small — in which case the answer may be different. But I still say that because the precedential effects are so important in this sort of hypo, the way I answer the very stylized hypo doesn’t do much to test our intuitions.

    Like

“[The way I] answer the very stylized hypo doesn’t do much to test our intuitions.”

I guess. However, answers to complex questions are often counter-intuitive. So I can’t learn from simple cases, and I can’t learn from complicated ones either. Since I want to learn, you see my problem 🙂

    Like

  35. “Murdering 50 innocents to save 100 other innocents is morally acceptable to me, provided that’s the entire hypo.”

Part of my course in ethical decision-making covers the ever important topic “How do you get good people to do horrible things?” 🙂

    One common way is to get them thinking, “If I didn’t do this horrible thing, then someone else will do worse things.” You can usually get even more force behind it: “I am required to do this horrible thing, to prevent someone else from doing worse things.”

    These both fit directly into your theory, it seems, so people who follow your theory might be doing horrible things on a regular basis, and yet be acting morally. This disturbs me greatly.

    Similarly, your theory makes me ethically responsible for other people’s actions. In other words, if I don’t act in this horrible way, I am ethically responsible for the horrible things others might do as a consequence. This also troubles me.

    Like

  36. These are two things which disturb you but don’t disturb me so much.

    First, people who follow my theory “might be doing horrible things on a regular basis, and yet be acting morally.” Of course, calling them “horrible things” gives the game away. Killing an innocent is something which otherwise would be immoral, but which becomes acceptable because it’s the price for a greater moral good. It’s not a horrible thing unless you look at what, if anything, it’s bundled with.

    Now I know what you might mean: doing these things which, if done by themselves, would be horrible, desensitizes you to horrible things, and then you or others will be more willing to do horrible things in the future when the justification is gone. If that’s what you mean, I agree, and that’s entirely covered by my requirement that you consider these indirect future effects. That might be enough for a rule to kill innocents very, very rarely, and only when the moral gains are overwhelming. It could justify teaching your kids and others that they shouldn’t ever kill innocents — on the theory that people want to kill anyway and will do so if the need is great enough, so if your teaching bends over backwards in the right direction, you may get as close as possible to the ideal.

    But… a principled position that you should never ever, ever, violate anyone’s rights… I find *that* troubling, because my intuition tells me that it’s O.K. to have killed or taxed or regulated someone somewhere to have prevented or stopped World War II or the Holocaust.

On to the second point: does moral consequentialism make you “ethically responsible for other people’s actions”? I suppose it does, though I wouldn’t necessarily use the language of “ethical responsibility,” which is more appropriate to a deontological worldview where rights translate directly into individual duties. But let me put it this way. Suppose you’re faced with a situation where you could kill an innocent guy to stop Hitler. If you kill the innocent guy, (let’s assume) you’re maximizing the total morality of the world, i.e., you’re minimizing rights violations, maximizing rights protection, however you feel like phrasing it. If you do so, you can be “held responsible” in the sense that we can ask whether you did the right thing, and you get exonerated on the merits, because you increased world morality. If you don’t, then you just failed to take a step that would increase world morality.

    Now, that leads to the question: is there an individual duty to increase world morality? You could be a consequentialist and have a mixed view: it’s always allowable to increase world morality (so you *may* kill the innocent guy to stop Hitler), but never required (so you don’t have to kill him). Then, even though you didn’t kill the guy and you didn’t increase world morality, you didn’t violate your duty.

    Myself, I prefer the simpler view: I don’t know what a duty is, other than a statement of what I should do if I want to “do the right thing.” So to me, saying that “killing this guy will increase world morality” means that I *must* kill the guy *if* I want to bring about a more moral world. I think one *should* want to bring about a more moral world, so yes, you do have a duty to kill the guy (assuming you’re the only one who can do it under the circumstances). I suppose I could see how you could take the mixed view, but I’m unclear on why one shouldn’t feel a duty to bring about a more moral world.

    Like

  37. “Of course, calling them “horrible things” gives the game away. Killing an innocent is something which otherwise would be immoral, but which becomes acceptable because it’s the price for a greater moral good.”

    I am not equating “horrible” with “immoral.” Torturing children would be horrible, even though I can think of times when it would be moral, perhaps even required.

    “doing these things which, if done by themselves, would be horrible, desensitizes you to horrible things”

    That isn’t what I am saying; the desensitization is important but separate from the idea that I am actually doing a horrible thing.

    “a principled position that you should never ever, ever, violate anyone’s rights”

    Most deontological theories have “catastrophe clauses” in them. In other words, they carefully define situations where the categorical prescriptions break down, and what rules apply in those situations.

    So, it isn’t ethical to torture someone, unless it prevents, say, a 90% chance of 90% of the people in the US dying from a biological weapon. This doesn’t imply that there is always a balancing act going on; the realm of catastrophe is simply different from normal.

    So, preventing WWII or the Holocaust or a meteor strike could trigger a catastrophe clause. I’m not sure about the WTC, though (shooting down the plane has other deontological justifications, so think about torturing a child to get their parent to tell you the terrorist’s plans so you can stop them). And it leaves open the question of “Isn’t world hunger/global warming/etc. pretty catastrophic?”

    I agree, there are several difficulties with deontological positions as well. Here are some that come to mind.

1) It is unethical to steal bread to feed yourself.

2) It is unethical to kill one person to save many others, even if the one person will die anyway.

3) It is unethical to lie to someone, even if that lie will do a lot of good.

4) If you think murder is so wrong, why do you seem to prefer a world with more murders rather than fewer?

    Perhaps you can come up with more.

    Note, I am simply trying to figure out the implications of your theory, so I can compare them with the implications of others.

    Being required to do horrible things and being responsible for the actions of others are what drove me away from preference-consequentialism, and it seems to be driving me away from rights-violation-consequentialism as well.

    “I suppose I could see how you could take the mixed view, but I’m unclear on why one shouldn’t feel a duty to bring about a more moral world.”

    One common issue is that it would take over your life if you actually said you had a positive duty to make the world more moral. There isn’t a limit on the amount of effort you should spend on this duty.

    This is one reason for the mixed view; it allows the maximizing of morality, but doesn’t take over your life.

    “If you do so, you can be “held responsible” in the sense that we can ask whether you did the right thing, and you get exonerated on the merits, because you increased world morality. If you don’t, then you just failed to take a step that would increase world morality.”

    Say I decide not to kill the innocent. My point is that, in figuring out whether or not I did the right thing, you are in some sense attributing to me all of Hitler’s murders. This troubles me greatly (but as you say, “These are two things which disturb you but don’t disturb me so much”).

    This just occurred to me: might your legal system punish me because I didn’t kill the innocent person and thus prevent Hitler from murdering millions? This would be a clear cut case of being held responsible for other people’s actions.

    I have a feeling you will say no, but I’m not sure; if I refuse to pay taxes to pay for defense you will throw me in jail, right? Would I be in jail because I didn’t act to maximize morality?

    Like

  38. Gee Sasha,

    Your view seems more deontological with every post. You have a rights-based ethical theory that leads to duties (“I’m unclear on why one shouldn’t feel a duty to bring about a more moral world.”). Maybe the path is not direct enough for you to want to call yourself a deontological libertarian, but it seems direct enough to me.

Welcome aboard! 🙂

Now, perhaps my definition is wrong, and being a deontological libertarian means one doesn’t recognize any exceptions to the prohibition on rights violations. But, if that’s the definition, I don’t think that there are very many true deontological libertarians (perhaps none).

    I suspect that those who sound like absolutists really (when pressed) recognize that trade-offs would be right sometimes, but they’d rather not give the power to make those calls to people like Ted Kennedy and Janet Reno, or the aggregation of voters; because that would probably cause much more harm than good.

    Like

  39. Bill,

    I know it was just an example, but my deontological theory does not include a prohibition against lying. I don’t think I owe everybody the truth (but sometimes I owe certain people the truth). Sometimes, I think I have a duty to lie convincingly (e.g. I know where Jews are hidden and Nazis ask me about it).

    The stealing bread example is interesting, too.

I was wondering if, when Sasha performs his calculation, he permits himself to weigh certain people’s rights-violations differently from others. Does the calculation (and thus the duty) change if the murder victims are his family vs. a random family, or Americans vs. foreigners?

    Like

  40. Bill — of course nothing I say implies that the legal system should hold you responsible for your inactions or anything like that. I’m speaking on the pure moral level. For me to take the extra step and say that the government should punish me along those lines, I first have to conclude that the government can be trusted with that power, etc., etc. So it’s the same balancing process going on. (There’s another little twist: if the government forces you to “do the right thing,” then that forcing is itself a violation of your liberty rights. So even if the government were perfectly benevolent and omniscient, I would still insist that the government only force you where the improvement would be substantial.)

    Gil — let me make clear what I mean by deontological. Not everything that involves a duty is deontological: clearly, even a hardcore balancing guy can feel that there is a “right thing” and believe that everyone should do the right thing. By deontological libertarianism, I just mean the view that derives a duty directly from the right. One of the attributes of deontology is a separation between “the good” and “the right.” For instance, we can agree that stopping Hitler leads to a better world, that is, a world where fewer rights are violated (that’s “the good”), but the deontologist, say a moral anarcho-capitalist, says you still shouldn’t tax someone to fight that war (it’s not “right”), even when that’s the only way to defeat Hitler. My view eliminates any notion of “right” separate from “the good”: we do have moral duties, but they’re entirely derivative, so anything that decreases morality isn’t moral.

    Now I agree with you: perhaps there aren’t very many true deontological libertarians. This has been my point all along: perhaps we all, except the hardest core moral anarcho-capitalists, are balancers. If that’s the case, I think we should talk about the balancing we do and make our rhetoric match our beliefs, provided we’re seriously talking about moral philosophy. (In everyday discourse, where we’re trying to make an impression on people who aren’t engaging our beliefs on a terribly deep level, I think it’s acceptable to adopt sound-bite moral philosophy and simplify your views.)

    In response to your last point, and here’s a bit of pop psychology from me: perhaps libertarians are afraid of admitting that one could imagine certain government interventions that would improve morality, because then it seems like too quick a step to actually give that power to actual politicians. I’ve come around to the view, though, that, in the interests of transparency, we should admit that such hypothetical interventions exist (if they do), and strenuously push the point that in the real world, given government incentives, these interventions are no longer morality-maximizing. As I said above, sound bites are another matter, and perhaps moral balancing even demands that we engage in the noble lie and convince people to be deontologists!, but the rhetoric we use among ourselves should better reflect our actual beliefs.

    Finally, I guess I should answer the last question: should some people count differently than others? In the abstract, I think people should count the same. In the implementation, things get more complicated. First, I, as a self-interested actor (and not the detached moral philosopher), would, in my weakness, count myself and my friends and loved ones more than others. So sue me. It’s not right, but who can really expect me to act otherwise? And we should design our institutions to take this sort of behavior into account. Second, certain people’s rights violations may have different effects in the world. For instance, suppose the assassination of the President would lead to world instability, while my murder wouldn’t; so it’s O.K. to tax people more heavily to fund the Secret Service to prevent the President’s assassination than we tax people to fund the police to prevent my murder. That’s not because the President’s rights are more important, but because of the stuff (other rights violations that follow as a consequence) that’s bundled together with the President’s assassination. Similarly, some of the post-9/11 Constitutional issues relate to how much due process we should give suspected terrorists. Everyone pretty much agrees we don’t owe Nazis we’re shooting at in Germany any due process; what about Nazi saboteurs captured in the U.S. (the 1942 Ex parte Quirin case); Nazi-sympathizing American citizens captured in Germany; Nazi-sympathizing American citizen saboteurs captured in the U.S.? Some people have suggested (and this is plausible, though I’m not saying I necessarily agree) that we need to draw the line somewhere, and that line should be drawn in favor of U.S. citizens, because it’s a more tenable Constitutional line and may more effectively guard against us degenerating into a police state. If that’s so, we might weigh American citizens more, but again not because they’re more valuable but because the real-world effects differ.

    Like

  41. Oh yes, I forgot to mention. As Bill suggested above, there’s more than just rights-deontologists and rights-balancers. If someone rejects rights-deontologism on the grounds that surely it’s O.K. to torture someone if there’s a 90% chance that 90% of the U.S. population will die in an hour, then there are other places for him to go than outright rights-balancing.

    Bill suggests “catastrophe clauses,” which of course should be framed in terms of some different good (other than rights), so you get a deontology which just isn’t pure rights-based deontology. Which is, of course, what deontologism is all about: it’s a set of duties with rules about when certain duties trump other duties. So Gil, when you say there are very few, possibly none, true rights-based deontologists, maybe those people are really “rights plus catastrophe” guys, still deontologists after all.

    I don’t have a problem with that in principle; my problem is just that in practice I have trouble locating the line between catastrophe and non-catastrophe. Whatever you define as a “catastrophe,” most moral dilemmas involve some possibility of some catastrophe happening. So I prefer to think of it as a sliding scale, which is probably the essence of the distinction between deontology and balancing. Of course, I merely move the ambiguity from defining the catastrophe to locating the threshold for action on my sliding scale. So neither approach is necessarily more ambiguous. But I prefer keeping the categories clear in the setup and letting the ambiguities come in in the implementation. Though I like my moral philosophy stated less absolutely than you do, I do still like it stated clearly, just so I can actually state it and be able to tell roughly whether I kind of agree with it.

    Like

  42. Whenever I watch 24 it makes me think about this stuff. The show has instances of torturing terrorists to find a nuclear weapon that will go off shortly, killing a citizen (after warning him) to prevent his inadvertently spreading a deadly virus, killing an agent (with his reluctant agreement; but it’s unclear if that changed things) to appease a terrorist threatening to release that virus on major populations…

    I think that these things are right, but I don’t really trust governments with this kind of power. I’m afraid of setting these things as official policy because I’m sure that they’ll be abused (unlike Alan Dershowitz who thinks there should be some kind of torture warrant, I think).

    My tentative position is that these things should be illegal, but I hope a jury would nullify when the action was appropriate. I’m not sure.

    I suppose I’m one of those ‘rights plus catastrophe’ guys.

    Oh, and even in the case of “catastrophe” I think victims should be compensated, if possible, to the extent that they would be indifferent between having the rights-violation + compensation and no rights-violation.

    If somebody could convince me that there is a political process that would do more good than harm with the power to violate rights, perhaps I might change my mind. But, for now, I think it’s best to set things up so that this is just forbidden.

    So, maybe I’m a Sasha-type balancer, who has just decided that there’s no way to institutionalize the proper balancing of rights-violation that would increase overall morality.

    I do think that talking about rights as if they are absolute is a more useful way to discuss these things than always indicating that these things can be balanced. The balancing issue is probably just going to lead to unhelpful tangents.

    Like

  43. Sasha,

    “Bill — of course nothing I say implies that the legal system should hold you responsible for your inactions or anything like that.”

    (snip)

    “… For me to take the extra step and say that the government should punish me along those lines, I first have to conclude that the government can be trusted with that power, etc., etc.”

    But wait; you just said that nothing you said would imply that you would punish one person for another’s actions. But you then say that if the gov’t can be so trusted in particular circumstances, then you would punish one person for another’s actions, in order to bring about the greatest good.

    “my problem is just that in practice I have trouble locating the line between catastrophe and non-catastrophe.”

    All of our theories have practical problems; none of us has actually articulated the exact rules or the exact values to optimize over. I don’t think that the line between catastrophe and non-catastrophe is any less distinct than the moral calculus you have suggested; am I wrong?

    Let’s think about another line: the trivial. Deontologists may think we should never throw trash in our neighbor’s yard, but that doesn’t mean they think we can’t breathe out CO2. In the realm of the trivial, anything goes.

    So we could have three levels; each with its own rules:

    1) The trivial: anything goes.

    2) The normal: categorical prohibitions, some prohibitions based on good vs. bad.

    3) The catastrophic: balancing of good against bad

    The fact that we balance in catastrophic situations doesn’t mean that we balance in normal situations, any more than the fact that anything goes in trivial situations means that anything goes in normal situations.

    Your theory can deal with all of these situations, and in most cases the two match exactly, but it leads to quite non-intuitive results in other cases. That is what is bothering me about your theory. Perhaps you could show me more non-intuitive results of a deontological view like the above.

    As another point, I’m sure your theory has a catastrophe clause as well. A meteor is scheduled to impact the earth in 1 year (no rights violations), and the only way to stop it involves trampling rights. Is it right to do so? Deontologists might say yes, but your theory says no.

    “Whatever you define as a “catastrophe,” most moral dilemmas involve some possibility of some catastrophe happening.”

    This seems beside the point.

    First, deontologists aren’t mainly interested in accidental byproducts of their decisions, but intentional ones (“law of double effect,” maybe?). So the fact that an action might possibly cause a catastrophe doesn’t mean the action is categorically prohibited, unless the catastrophe was intentional (with a long story about what intentional means and doesn’t mean :-). It might be prohibited because the harm outweighs the good, but it wouldn’t be categorically prohibited.

    Second, a small chance of a catastrophe is not a catastrophe itself. For example, how much would you have to pay me to take a one-in-a-million chance of death (AKA one micromort)? About $15 for me. So inflicting a small chance of death on me is quite different from murdering me.

    One of my professors used to work for the nuclear power industry. They used a concept called a “nanomelt” which was a one in a billion chance of a meltdown. Again, one nanomelt was not a catastrophe, even though a meltdown might be.
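    To make the risk-pricing arithmetic explicit, here is a minimal sketch in Python (the $15-per-micromort figure is from the comment above; the “value of a statistical life” label and the variable names are my own additions for illustration):

    ```python
    # The $15-per-micromort figure comes from the comment; the "value of a
    # statistical life" framing is an assumption added for illustration.
    price_per_micromort = 15.0   # dollars accepted for a 1-in-a-million death risk
    micromort = 1.0 / 1_000_000  # probability of death per micromort

    # Implied value of a statistical life: $15 / 1e-6 = $15 million.
    implied_vsl = price_per_micromort / micromort

    # By the same logic, a "nanomelt" (a 1-in-a-billion chance of meltdown)
    # carries a billionth of a meltdown's expected harm; it is not itself
    # a catastrophe, even though a meltdown would be.
    print(f"Implied value of a statistical life: ${implied_vsl:,.0f}")
    ```

    The point of the sketch is Bill’s distinction: inflicting a small probability of a catastrophe prices out very differently from inflicting the catastrophe itself.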

    “As I said above, sound bites are another matter, and perhaps moral balancing even demands that we engage in the noble lie and convince people to be deontologists!, ”

    This kind of wholesale deception also disturbs me, about all forms of consequentialism.

    Gil,

    “Whenever I watch 24 it makes me think about this stuff. The show has instances of torturing terrorists to find a nuclear weapon that will go off shortly,”

    Very difficult to justify deontologically, especially if you aren’t torturing the guilty person, but instead an innocent person he cares about.

    I definitely wouldn’t do that unless the sky were falling (as it always is on 24 :-). I doubt that torturing an innocent would be okay even to prevent 3,000 murders.

    Torturing the guilty, not sure; they’ve given up most of their rights.

    “killing a citizen (after warning him) to prevent his inadvertently spreading a deadly virus,”

    This is fairly standard other-defense; you don’t need a catastrophe involved.

    For example, say, through no fault of mine, I am falling out a window down towards you, and the only way to save yourself is to push me out of the way and to my death. That is ethical, even though you are killing me (an innocent). Strangely, you are defending yourself against an innocent person.

    This doesn’t mean you can kill someone to harvest their organs to save your life; in that case there is no “defense” going on.

    “killing an agent (with his reluctant agreement; but it’s unclear if that changed things) to appease a terrorist threatening to release that virus on major populations…”

    That was interesting, but it let me down a little; it was clear that the agent was willing to give up his life to save others, but he couldn’t bring himself to pull the trigger. I didn’t see a real moral dilemma there, just a test of a person’s courage to do the right thing (the agent failed, Jack didn’t).

    Like

  44. Bill,

    “it was clear that the agent was willing to give up his life to save others, but he couldn’t bring himself to pull the trigger. I didn’t see a real moral dilemma there, just a test of a person’s courage to do the right thing (the agent failed, Jack didn’t).”

    Right, they removed the dilemma (other than it being hard to pull the trigger). I don’t remember all of the nuances of the conversations, but it seemed like the President decided to comply and told Jack to carry it out. I don’t think the decision depended on the agent’s agreement.

    Like

  45. Sadly, I now have to bow out of this web-based discussion. My upcoming clerkship with Judge Kozinski comes with a no-blogging rule (this is why I’m now off the Volokh Conspiracy too), and I interpret this to include the comments sections of other people’s blogs.

    However, if you guys feel there’s something left to say, I’d be glad to continue this discussion in e-mail with just the three of us.

    Like
