According to some philosophers, one feature that matters when assessing inequalities is how the inequality comes about. One theory that assesses inequalities in this way is Ronald Dworkin's "luck egalitarianism." According to the luck egalitarian, what matters when assessing the inequalities in a given situation, and whether they are just, is whether they came about through calculated choice (Dworkin, 1981, p. 293). If an inequality results from an agent's calculated choices, then, according to the luck egalitarian, the inequality is just: the individual has presumably anticipated the consequences of her actions, and when those actions leave her worse off than others, the inequality is fair. A state of affairs that results from calculated choices is a matter of "option luck," and is presumably fair. If the inequality instead comes about as a result of so-called "brute luck," then presumably it is unfair. Notably, this view takes into account how a particular state of affairs came about, rather than simply assessing the "end state." "Brute luck" refers to a situation where an inequality results from some relevant feature beyond a person's control (Dworkin, 1981, p. 293); impairment and certain social circumstances, for instance, are cases of brute luck. Moreover, if an inequality comes about as a result of brute luck, then according to the luck egalitarian, we have reason to neutralize its bad effects through compensation (Hirose, 2015, p. 45). This view provides an ethical basis for addressing certain inequalities, such as a wheelchair user's lack of access to buildings, or treating severe neurological disorders. Indeed, when we justify neutralizing bad effects, we often appeal to whether the state of affairs came about through choice or not.
When we treat features like brute luck or option luck as morally relevant in a given situation, I will call these luck egalitarian considerations. According to luck egalitarian theory, when assessing any case of inequality, these considerations are applied to determine whether the inequality is fair. Luck egalitarian considerations are principled reasons for acting a particular way in a given situation: if the situation features a relevant instance of brute luck, then presumably there is reason to neutralize the bad effects and eliminate the inequality.
Many people think luck egalitarian considerations are sound for rationing certain goods. Consider a case where I have only enough juice for either my niece or my nephew. My nephew was bad: he rationally chose to steal cookies from the cookie jar. My niece, on the other hand, was good. It seems a lot of people would give the juice to my niece instead of my nephew, and according to the luck egalitarian, the resulting inequality in juice is just. Consider another case where we can give health care to only one individual: a good person, who helps the poor and the weak, or an evil individual who calculatedly murdered a number of people. Many of us, if we could so choose, would give the health care to the good person. But there are a number of objections to luck egalitarianism. Two of the most convincing are the harshness objection and the moralistic objection.
According to the harshness objection, luck egalitarians reach a harsh verdict about the appropriate action in certain situations. Consider a case presented by Anderson:
Consider an uninsured driver who negligently makes an illegal turn that causes an accident with another car. Witnesses call the police, reporting who is at fault, the police transmit this information to emergency medical technicians. When they arrive at the scene and find that the driver at fault is uninsured, they leave him to die by the side of the road. (Anderson, 1999, p. 295)
The luck egalitarian, so the objection goes, must claim that the resulting inequality between the reckless driver and the other driver is not bad or unjust, and therefore has no problem with leaving the reckless driver unaided. So, if luck egalitarianism is true, or if luck egalitarian considerations are sound, it is morally permissible to leave the reckless driver unaided. But there is a problem with leaving the reckless driver unaided: we should help him, regardless of whether he was reckless. So, luck egalitarianism is false, and luck egalitarian considerations are unsound.
Now, as Iwao Hirose points out, the value pluralist has a response to this objection: appeal to something else (another principle) in order to require the rescue of the reckless driver (Hirose, 2015, p. 59). This line of defence is presented by both G.A. Cohen and Shlomi Segall (Cohen, 2006; Segall, 2009). According to Cohen and Segall, pluralism saves the day for the luck egalitarian. For the luck egalitarian pluralist, considerations about option luck and brute luck do not constitute a comprehensive theory of justice but are only part of a general theory that includes other principles of distributive justice. These additional principles allow the luck egalitarian pluralist to help the reckless driver in certain situations. Pluralism, as Hirose plausibly argues, holds that two or more principles exist simultaneously (Hirose, 2015, p. 60). Still, a few problems for the pluralist remain, according to Hirose.
First, according to Hirose, the pluralist response is not really directed at the objection itself (Hirose, 2015, p. 60). Pluralists may accept that, on luck egalitarian considerations alone, we abandon the reckless driver. As Hirose states, "They are free to support as many principles as they wish. But the bottom line is that luck egalitarianism abandons the reckless driver" (Hirose, 2015, p. 60). The problem, according to Hirose, is that this verdict seems just plain wrong.
The second reason the pluralist response is unsatisfactory, according to Hirose, is that "even if the advocate of luck egalitarianism is confined to the domain of distributive justice, luck egalitarian considerations cannot provide a satisfactory treatment of the most basic cases, such as the reckless driver case" (Hirose, 2015, p. 60). After all, the luck egalitarian theorist cannot provide a reason to save the reckless driver. Hirose's general dissatisfaction with the pluralist approach is that luck egalitarian pluralist principles are not comprehensive distributive principles at all.
The harshness objection is, for many, a disturbing objection to luck egalitarian considerations, but what about the moralistic objection? According to this line of objection, luck egalitarianism has the unattractive consequence of adopting a sort of moralism as a basis for distributing basic goods like health care. Consider the following case presented by Greg Bognar and Iwao Hirose in "The Ethics of Health Care Rationing":
Imagine that you are feeling well, and you worry that you might have a severe illness. So you go to your doctor, and your worry is confirmed. You do have a serious condition. But before your doctor discusses treatment options, he takes out a long questionnaire and starts asking you a series of questions about your lifestyle: Do you, or have you ever smoked? How often do you exercise? What sort of diet do you eat? How many sexual partners have you had recently, and how often do you have unsafe sex? (Bognar & Hirose, 2014, p. 134)
Now, such a series of questions seems overly invasive and disrespectful, but suppose the doctor is asking them to determine whether or not you will get treatment. Distributing health care in this way seems to be justified on luck egalitarian grounds: if the condition was a result of your own calculated choice, and was somehow your own fault, then you may be denied certain treatment. The problem is that if luck egalitarianism is true, and luck egalitarian considerations govern the distribution of health care, then it is entirely possible for health care rationing to take this moralistic form. Yet no principle governing the distribution of health care should allow rationing to take this form. Thus, luck egalitarianism is false.
Luck egalitarianism, as a principled view, appears to suffer from a number of theoretical problems. Yet, in some cases, many want to save luck egalitarian considerations. Recall the juice case we initially entertained, or consider a modified reckless driver case. Suppose you are an ambulance driver, on your way to help a moral saint who has recently been in a car accident. The person has insurance, was not intoxicated, and is generally a careful driver; he was simply in the wrong place at the wrong time. On your way there, you encounter a man who has been in a different car accident. He was under the influence, is a known risky driver, doesn't have insurance, and knew exactly the risk he was putting himself in. You can help only one person. You decide to help the person you were originally planning to help, on the basis of luck egalitarian considerations.
Are luck egalitarian considerations really wrong in this situation? Many would say they aren’t wrong. Moreover, suppose one were passing a responsible driver instead of a reckless one, and there are no other drivers to consider saving. Does not the fact that the driver was responsible provide an additional reason to save him? Many would say that it does. So, luck egalitarian considerations seem to be relevant in particular cases, but luck egalitarianism seems to face a number of formidable objections. How might one save luck egalitarian considerations? Is it a failed view? Should we really abandon luck egalitarian principles when rationing health care? What’s your take?
Works Cited
Anderson, E. (1999). What is the Point of Equality? Ethics, 109, 287–337.
Bognar, G., & Hirose, I. (2014). The Ethics of Health Care Rationing. New York: Routledge.
Cohen, G. A. (2006). Luck and Equality: A Reply to Hurley. Philosophy and Phenomenological Research, 72, 439–46.
Dworkin, R. (1981). What is Equality? Part 2: Equality of Resources. Philosophy & Public Affairs, 10, 283–345.
Hirose, I. (2015). Egalitarianism. New York: Routledge.
Segall, S. (2009). Health, Luck, and Justice. Princeton: Princeton University Press.
Justin Caouette
January 13, 2015
Nice post, Ray.
angramainyu2014
January 13, 2015
Hi, Ray,
Interesting article. Thanks for posting.
Briefly, I'd raise the following challenge, which one may call the "reverse harshness" objection (to give it a name). It mirrors the car accident scenario, so it's similarly vulnerable to the pluralist reply. But if the pluralist reply fails, it seems there is no room for a reply (from the luck egalitarian) akin to abandoning the reckless driver:
S1: Two gunmen are shooting and murdering people in a building. Bob hides, understandably, but gets shot in the stomach anyway. Many other people who hid were not shot. The inequality is deemed unfair by the theory because it was not the result of calculated choices. Bob gets assistance.
S2: Like S1, but Tom only hides at first, waiting for a chance. He could remain in hiding as many others understandably do, but in order to save the lives of innocent people, he plans to wait until a gunman is close to him, then attack the gunman, grab one of his guns, and shoot them both. He knows he will likely be shot too, but he plans to do that in order to save people who are being massacred.
So, he does as planned, and manages to kill both gunmen, but gets shot in the stomach while doing it. Other people hid and were not shot. The inequality is deemed fair by the theory. Tom gets no help, and bleeds to death.
Ray Aldred
January 13, 2015
Hi angramainyu,
Thanks for the comments! I think many objectors to luck egalitarian views would share your intuitions about S2. They might argue that it would be wrong to abandon Tom in S2, yet the luck egalitarian would recommend abandoning him. Or, at the very least, the luck egalitarian sees nothing wrong with abandoning Tom in S2 on luck egalitarian grounds. The pluralist, as you have correctly intuited, would argue that there are other non-egalitarian considerations that recommend we save Tom, after all. There are two things that I thought of when I considered your cases, if you would indulge me.
First, many people have the intuition that S1 is a clear case of brute luck. A luck egalitarian might ask us to consider that the situation and Bob's injuries were beyond his control when we consider aiding him, and might suggest that this fact is a relevant reason to provide aid to Bob. I'd imagine that many of us would point to these facts when trying to elicit a moral response to the case as well. So, it seems the luck egalitarian gets things right in S1, and the real challenge, I think, is with the S2 case.
The second thing I thought of when considering S2 was whether the luck egalitarian would consider this a clear case of option luck. Yes, there was some decision-making going on, but Tom's injuries, one might argue, were beyond his control: they were the result of a gunman, whom Tom had no control over. While Tom considered his risks when saving the individuals, we might also suggest the situation contained factors beyond Tom's control. More importantly, the bad effect of the situation that needs to be neutralized (the gunshot wound) was not within Tom's control. The luck egalitarian might appeal to these intuitions when judging that we should save Tom. Moreover, some would appeal to luck egalitarian considerations when issuing punishments toward the gunmen (or even restrictions on their health care).
But this brings up another issue: what features in each case are relevant when assessing a case of brute luck and option luck? This question was slightly beyond what I wanted to consider in this post, but I think it's a good question to ask when considering some cases, like the ones you present in particular. Indeed, some luck egalitarians might think different features are relevant in different cases, some might argue that very few inequalities are option luck, and others might suggest that all inequalities are cases of brute luck. All that to say, some luck egalitarians might have some wiggle room when considering S1 and S2, but it's worth noting that the question of what exactly counts as brute luck and option luck remains an open one.
angramainyu2014
January 14, 2015
Ray, thanks for the reply.
I agree that S1 is not a problem for the luck egalitarian; sorry if that wasn’t clear.
With respect to the fact that some factors are beyond Tom’s control (namely, the gunmen), that is true, but it seems to me there are always factors beyond a person’s control.
For example, in the case of a reckless driver, there were factors beyond his control too, such as the other driver (who might have made a turn earlier, avoiding a collision, as far as the reckless driver could have known), or the specific results of the collision, which the reckless driver cannot control (as far as the reckless driver could have known, he might have sustained lesser injuries if the out-of-control car had reacted just somewhat differently).
So, if lack of full control helps Tom in the shooting scenario, would lack of full control not help the reckless driver in the original scenario?
But more generally, if there is never full control, and full control is required for the applicability of the luck egalitarian's rule, that would make luck egalitarianism inapplicable in all cases.
Granted, the luck egalitarian might say that some features of the situation count and some do not when it comes to control, so the rule might allegedly still be applicable; this is the issue you bring up.
However, I don't think that the reply rescues luck egalitarianism, for the following reason: as long as the luck egalitarian does not specify which features those are, and/or give a procedure for identifying them, luck egalitarianism fails to provide any applicable rules (since there is no way to decide based on luck egalitarianism), and also fails to be testable (since a luck egalitarian might avoid any result that conflicts with clear moral intuitions by appealing to the existence of relevant vs. irrelevant features, while failing to provide a method for telling them apart).
Ray Aldred
January 21, 2015
Thanks for your reply to my reply. I think you and I are largely in agreement here, so I really don't have much to add! The luck egalitarian might still reply with some pluralist line of thought, and perhaps somehow reiterate the distinction between the reckless driver case and the gunman scenario, but your worries are equally convincing.
However, I would add that your final thoughts on the matter don't exactly rule out luck egalitarianism. Just because the luck egalitarian has some work to do in specifying which features allow us to identify brute luck and option luck, it does not follow that their view is false. All they need is for there to be instances of brute luck and option luck, and for these considerations to be relevant when assessing scenarios. Sure, there are problem cases, but the luck egalitarian can reiterate that there are also cases where luck egalitarian considerations appear sound and convincing. I think that's about all I have to add to your reply. Thanks again for the interesting thoughts and scenarios; I enjoyed reading and thinking about them.
Lage
January 13, 2015
“Are luck egalitarian considerations really wrong in this situation? Many would say they aren’t wrong. Moreover, suppose one were passing a responsible driver instead of a reckless one, and there are no other drivers to consider saving. Does not the fact that the driver was responsible provide an additional reason to save him? Many would say that it does. So, luck egalitarian considerations seem to be relevant in particular cases, but luck egalitarianism seems to face a number of formidable objections. How might one save luck egalitarian considerations? Is it a failed view? Should we really abandon luck egalitarian principles when rationing health care? What’s your take?”
I would argue that luck egalitarian considerations are not sound and certainly don’t encompass any rational system of morality within them, so overall I would say that luck egalitarianism is a failed view.
“If the inequality comes about as a result of so called “brute luck,” then presumably the inequality is unfair. It is noteworthy that this view takes into account how a particular state of affairs came about, and does not simply assess a situation based on the “end state.” Presumably “brute luck” refers to a situation where an inequality comes about as a result of some relevant feature beyond a persons’ control (Dworkin, 1981, p. 293).”
To relate this to one of your most recent posts, dare I say that in a world with no free will, everything is defined as “brute luck”, and thus all inequalities are unfair.
To add, if we actually have to make a judgement call to ration health care, then I would adopt a consequentialist approach (rather than a luck egalitarian approach per se) and assess which patients are more likely to produce greater well being for others and society. This circumvents looking at their lifestyles (say, with a questionnaire), which are presumably out of their control, and instead looks at people pragmatically as links in a causal chain that will lead to different consequences for all conscious creatures on the planet. Those whom we assess as likely to produce the greatest effects on the well being of conscious creatures (with the knowledge that we have) will thus benefit the world the most. This is a morality based on pragmatism and grounded in promoting the well being of conscious creatures, which seems to be a moral axiom that rational, non-dogmatic people can all agree upon.
Ray Aldred
January 21, 2015
Hi Vice,
Thanks for the interesting comments. If you will allow me, the following are my responses to your reply.
1. I think there are many who would agree with you that luck egalitarian considerations are not sound, but other philosophers would assert that we actually appeal to them in our everyday moral practices. When we appeal to moral intuitions and try to get people to help others, we often do say, "Well, this person can't help that they were sick," and that fact seems to be appealed to as a contributory reason to help them. So, in response, the luck egalitarian might insist that their intuitions are sound, but might bite the bullet when confronted with harshness scenarios, or appeal to pluralism. So there are philosophers who agree with you, and some who don't. But what's really interesting to me is why you don't think these considerations are sound, why they don't constitute a rational system of morality, what does constitute a rational system of morality, and why a system of morality needs to be rational in your sense.
2. Indeed, some philosophers suggest that every inequality is an instance of "brute luck" and therefore that any inequality needs to be ameliorated. Also, while a world might not contain "free will" (whatever that means), surely many would want the space to say that there are those who have control over their situations and those who don't. Indeed, along these lines, aren't there conditions where an agent's control is more severely limited or compromised than it otherwise would be? Would agency even exist, in any clear sense of the word, if we simply think free will doesn't exist?
3. Your consequentialist approach is interesting, but some would argue that it is similarly vulnerable to moralistic objections; only now persons would be considered for treatment on the basis of a consequentialist morality. So, a person who generally tries to enhance their own well being would not get priority over someone who has dedicated their life to helping others, and a rich person who donates several thousand dollars to charities would gain priority over a poor person. In the end, adopting consequentialist principles when rationing health care, in the way you describe, seems just as moralistic.
Moreover, some would suggest that it would provide reasons for funding procedures that shouldn't be funded by a public health care system. Consider gastric bypass surgery. One might argue that this procedure produces great effects on the well being of conscious creatures, but some people think it shouldn't be funded, on the grounds that many individuals are morbidly obese because of bad food choices; better to fund a social policy that curbs individuals' food choices. Again, I'm appealing to luck egalitarian considerations, but the point is that some might find consequentialist approaches as difficult to swallow as luck egalitarian ones. Thus, it is not clearly the better approach.
Lage
January 22, 2015
Hey Ray (vice? No, I won't call you that, though perhaps you were just making a joke),
“I think there are many who would agree with you and argue that luck egalitarian considerations are not sound, but other philosophers would assert that we actually appeal to them in our everyday moral practices. When we appeal to moral intuitions, and try to get people to help others, often we do say, “Well, this person can’t help that they were sick,” and that fact seems to be appealed to as if it were a contributory reason to help them. So, in response, the luck egalitarian might insist that their intuitions are sound, but might bite the bullet when confronted with harshness scenarios or appeal to pluralism.”
If their intuitions were sound, then they would apply consistently in every situation. Since they do not, I argue that the approach is demonstrated to be unsound, and that there is something wrong with it, or at least with how they've defined it.
“So there are philosophers who agree with you, and some don’t. But what’s really interesting to me, is why you don’t think they are sound, why they don’t constitute a rational system of morality, what constitutes a rational system of morality, and why a system of morality needs to be rational in your sense.”
It all comes down to what moral axioms are chosen as the foundation for any future inquiry, definitions, or system that is employed. I think that most people would agree, if asked, that a reasonable moral axiom is one that asserts that what is “moral” is that which increases the well being of conscious creatures (and in the long term). This moral axiom has no dependence on whether or not a person “got themselves into some bad situation through their own actions”, and furthermore, if one realizes that free will is illusory, and accepts that as a second tenet or axiom in the system of morality, then luck egalitarianism is easily shown to be unsound and silly. Now, granted, it is important to realize that although people don’t have free will, they do respond to deterrents and incentives, and thus continuing to employ some (if not most) of those deterrents and incentives is still sound for pragmatic purposes in guiding their behavior towards that which is beneficial for society. People do have different levels of constraints on their behavior, and even though they have no free will, there are certainly considerations to be made regarding which types of constraints people’s behaviors are ultimately controlled by.
“Indeed, some philosophers suggest that every inequality is an instance of “brute luck” and therefore any inequality needs to be ameliorated. Also, while a world might not contain “free will” –whatever that means– surely many would want the space to say that there are those who have control over their situations and those that don’t. Indeed, along these lines, aren’t there conditions where agent’s control is severely limited or compromised than they otherwise would be? Would agency even exist, in any clear sense of the word, if we simply think free will doesn’t exist? ”
As I said in my last paragraph above, people do have different levels of constraints on their behavior, and even though they have no free will (and thus aren’t a causal agent in any literal sense of the word), there are certainly considerations to be made regarding which types of constraints people’s behaviors are ultimately controlled by.
“Your consequentialist approach is interesting, but some would argue that it is similarly vulnerable to moralistic objections, only it would result in persons being considered for treatment on the basis of a consequentialist morality.”
Yes, but this is simply because a consequentialist morality is the only one that can satisfy the moral axioms I stated earlier. If the true goal of morality is to increase the well being of others (and for the long term), only a consequentialist approach can work, because it takes into consideration how every person's behavior affects the overall well being of everyone else. A deontological approach simply applies some dogma based on definitions, and doesn't consider the consequences, which are ultimately what is going to reliably predict and determine the well being of everyone involved. Certainly, if people adopted a different moral axiom, one that I believe would be less instinctually moral, and thus less beneficial for everyone's well being, then they could use a non-consequentialist approach. However, I think they could only do so by abandoning the best moral axiom I can think of.
“So, a person who generally tries to enhance their own well being would not get priority over someone who has dedicated their lives to help others. A rich person who donates several thousand dollars to charities would gain priority over a poor person. In the end, adopting consequentialist principles when rationing healthcare, in the way you describe, seems to be just as moralistic.”
It's more complicated than that, because if a person who has dedicated their life with the intention to help others actually ISN'T helping others, but making things worse (and even worse for themselves), then the person who tries to enhance their own well being (and does so successfully) while NOT making things worse for others would actually get priority, since their actions would be increasing the most well being for the most people (even if only themselves). No, wealth has no fundamental advantage with this approach, because there are many ways to help people in society, and many ways to make things worse. If a rich person's wealth, and the industries and systems they support with their wealth, end up causing more societal harm than the poor person, then the opposite would be true, and the poor person would get preference. Again, this whole "preferential rationing" is merely hypothetical, if it HAD to be done that way. Anyway, we can see that traditional ideas of philanthropy or spreading well being can be turned on their heads when other aspects and consequences of one's wealth, and what it supports, are taken into account. Thus, it comes down to analyzing a HUGE amount of data to see the consequences of one's existence and behavior with respect to the overall well being of society.
“Moreover, some would suggest that it would provide reasons for funding procedures that shouldn’t be funded by a public healthcare system. Consider gastric bypass surgery. Now, this procedure, one might argue, produces the greatest effects on the well being of conscious creatures, but some people think it shouldn’t get funded on the basis that many individuals are morbidly obese, because of bad food choices. Better to fund a social policy that curbs individual’s food choices. Again, I’m appealing luck egalitarian considerations, but the point is that some might find consequentialist approaches as difficult to swallow as luck egalitarian. Thus, it is not clearly the better approach.”
I think it is still clearly the better approach, because the ultimate consequences of everyone’s actions are taken into account, and there isn’t a better method possible than that in terms of most effectively accomplishing any goal whatsoever. Granted, funds may be better spent in other areas of society to address the root causes of many health problems, such as that which curbs obesity in the first place, but that merely reinforces and supports my point — that a consequentialist model applied OVERALL is most effective. I wouldn’t assume that everything else remains static as healthcare adopted a consequentialist approach, but rather I would expect that ALL areas of society would adopt such an approach, since they all must be treated as an integrated whole that can’t be accurately examined in isolation from one another.
Ray Aldred
January 27, 2015
Hi Lage,
Thanks for your reply, and apologies for the error in your name!
Your reply is interesting, but I still think the luck egalitarian has room for a reply. I think pluralists would disagree with your suggestion that a rational system applies moral intuitions consistently in every situation. Pluralists might argue that some moral principles apply more in some situations than in others; hence, in one situation some moral intuitions might count more strongly than they would in another. There are also moral views that hold that moral reasons are sensitive to context, so some considerations might apply in some contexts but not in others. This view is often called "moral particularism," and sadly I did not get to mention it in this blog post (due to time and space). So the claims you make about rational systems are, I think, fairly contentious. While some might agree with them, they are certainly up for debate.
As for consequentialism, I do think it might work in rationing health care –up to a point–, but not in the way you presented. Initially, you suggested that we need to consider whether the person will bring about the most good. But this seems moralistic: we judge treatments based on what sort of person he or she is, and this seems wrong. We need not be moralistic consequentialists, though. We can limit our considerations to what suffering or health the treatment actually brings about. We don’t base the treatment on what sort of moral person the patient is, but limit our scope to the health-relevant consequences for that particular patient. We can also try to do this on a larger scale, and judge whether funding a particular treatment will bring about the best health outcomes for particular agents. However, my worry remains that the consequentialism you recommend is moralistic.
Okay, so my worry still stands about the consequentialism you present. But why? I think wealth would be a relevant factor when evaluating treatments, if we were to do so under your recommendation. You point out that there are many ways that one can help others, and thus that wealth would not be a relevant factor when rationing healthcare. But does not wealth give people more opportunities to help multiple others? Consider Bill Gates donating to charities, and virtually eliminating malaria in certain populations. He would presumably be given treatment, if he needed it, above a poor child from Nigeria who never had the opportunities that Bill Gates had. Again, we might think that this inequality is unjust, considering it was not the child’s fault that he or she was born in abject poverty, and this seems a pretty good reason not to ration healthcare in the way you recommend.
On a side note, you noted that these healthcare rationing scenarios are merely hypothetical, but I want to suggest that, in fact, they aren’t. Health care is a limited resource, often costing money that is equally limited, and people need to make decisions about whether to treat some over others. In many cases, countries have limited funding and must decide which treatment options to fund and which not to. Those who oversee these choices are faced with ethical decisions about whom to fund and whom not to. Worse yet, many treatments are more costly because pharmaceutical companies know that they can get more money for them. Cancer treatments are typically ridiculously expensive, even if they only extend one’s life for a few more years, because these treatments mean more to the sufferers. So, we are often met with decisions about whether to fund costly cancer treatments over a number of less costly treatments. Moreover, doctors also have to make choices about whom to prioritize above others when it comes to vaccinations and other limited treatments. Rationing health care is a very real item of concern for bioethicists, and not as hypothetical as you may think.
Anyway, there are notorious problems with consequentialist and utilitarian approaches to rationing goods, and I won’t get too much into what those are –I suspect Rawls abandons consequentialism on the basis that it actually can’t do so fairly, but I digress.
Lage
January 28, 2015
“I think pluralists would disagree with your suggestion that a rational system applies moral intuitions consistently in every situation. Pluralists might argue that some moral principles apply more in some situations than in others; hence, in one situation some moral intuitions might count more strongly than they would in another. There are also other moral views which hold that moral reasons are sensitive to context, so some considerations might apply in some contexts but not in others.”
And my adoption of a rational consequentialism would apply different reactions/solutions to different situations based on the foreseen consequences we can ascertain; thus one is applying a consistent consequentialism to replace inconsistent moral intuition use and implementation. Moreover, by “moral intuitions”, I am ultimately referring to a fundamental moral axiom (produce the most well-being for the most conscious creatures, though we can limit this to humans for this discussion) which is supported or reinforced biologically by our innate empathetic and altruistic (even if reciprocally altruistic) predispositions. Because of this specific reference, perhaps my using the term “moral intuitions” is too confusing. In any case, applying a rational and consistent consequentialism that serves our “moral intuitions”, by accomplishing the most physical/psychological well-being for the most human beings possible, takes into account, I think, all the issues regarding context, and eliminates the need for many conflicting moral principles — because it simply applies consequentialism to that one fundamental moral axiom.
“As for consequentialism, I do think it might work in rationing health care –up to a point–, but not in the way you presented. Initially, you suggested that we need to consider whether the person will bring about the most good. But this seems moralistic: we judge treatments based on what sort of person he or she is, and this seems wrong. We need not be moralistic consequentialists, though.”
It is perhaps moralistic in the sense that it accomplishes the fundamental moral axiom, and judging treatments based on the ultimate consequences to society over the short and long term is indeed most effectively supporting/accomplishing that moral axiom. It doesn’t matter whether it “seems” right or wrong to do so, because our intuitions are largely based on an evolutionary origin and context that is no longer completely relevant, and that we are no longer limited by. That is, because of science and the application of reason and empirical data, we’ve determined more complex ways of accomplishing more effective levels of well-being (moral goals). Long ago, we evolved with certain instincts and intuitions because they helped us survive, given what we had to work with and the environment we were living in. After cultural evolution occurred for some time, however, we’ve learned quite a bit about how to accomplish moral goals more effectively DESPITE their seeming counterintuitive or “wrong”. This is one of the problems we still have to overcome, because scientific discoveries often go against our intuitions (even our moral intuitions), and against any other innate predispositions we have, since they evolved for an environment that we’re largely no longer living in. This is why we must apply logic and reasoning to these issues and realize that our “feelings” are no longer very reliable for validating what is a good or bad idea or behavior. When we start to guide our behavior by its consequences, by determining whether particular consequences serve to either promote or inhibit our fundamental moral axiom or goal, then we have a strategy that is no longer muddled with unreliable emotions, feelings, and intuitions.
We must remember that our emotions, feelings, and intuitions are the way they are for some evolutionary reason, and are still beneficial from time to time in guiding our behavior — however, they are more or less hardwired solutions that were likely naturally selected to eliminate the need to teach our offspring a most basic moral framework. Now that we have a plethora of knowledge of the consequences of our actions (which is only going to continue to increase exponentially over time), we have new tools that can supersede our instincts in many ways, providing better solutions than merely those that feel right.
“We need not be moralistic consequentialists, though. We can limit our considerations to what suffering or health the treatment actually brings about. We don’t base the treatment on what sort of moral person the patient is, but limit our scope to the health-relevant consequences for that particular patient.”
I was giving my answer based on the hypothetical that we are rationing healthcare (I’m aware that healthcare is rationed in ways already, so this isn’t 100% hypothetical), and my proposed implementation of consequentialism only serves to maximize the benefits to society. Choosing any other option seems, by definition, to be based on reasons OTHER than consequences, which is clearly irrational and unsound. Don’t get me wrong, I don’t think my approach would be easy — in fact, I think it is one of the most difficult approaches, simply because it requires the most knowledge and the most data to implement successfully. It is far easier to simply apply “moral principles” or some heuristics to determine what to do with the most efficiency, and in order to “feel” right when we implement those principles. But those aren’t going to be the best methods available.
“You point out that there are many ways that one can help others, and thus that wealth would not be a relevant factor when rationing healthcare. But does not wealth give people more opportunities to help multiple others? Consider Bill Gates donating to charities, and virtually eliminating malaria in certain populations. He would presumably be given treatment, if he needed it, above a poor child from Nigeria who never had the opportunities that Bill Gates had. Again, we might think that this inequality is unjust, considering it was not the child’s fault that he or she was born in abject poverty, and this seems a pretty good reason not to ration healthcare in the way you recommend.”
If you read what I wrote closely regarding your mention of the “treat the wealthy preferentially” argument, you’d remember that I said that a person’s wealth itself, and the systems and corporations that they promote with that wealth, can severely downgrade their net benefit to society’s well-being. To give a very simplistic example, let’s say that Bill Gates is wealthy largely BECAUSE he exploits second- and third-world countries for cheap sweatshop labor (even for Microsoft electronics), because he violates environmental regulations from time to time, and because he dodges taxes with overseas bank accounts. Let’s assume this is the case. Then even if he gives 1 billion dollars to charities, the large number of human rights and environmental consequences, and the detriment to our government’s available budget (which includes entitlements for the poor and, ironically, “rationing health care”), may not be an even trade, let alone be superseded by the charitable donation. In this case, it may be that Bill Gates is better off NOT receiving treatment over some poor Nigerian, because what he supports is causing more harm than good to people in the world. Yes, wealth often affords people more opportunities to help others, but if their wealth was only possible in the first place by decreasing the well-being of others (which is most often the case), then we must crunch some more numbers and take more into consideration to see the reality of net well-being. Most people simply lose sight of the fact that environmental and human rights costs exist that aren’t usually integrated into the price of a product or used in determining whether a rich person is good or bad for society. Consequentialism addresses these by taking all of this into account and making it as fair as possible, by judging everything based on the consequences that affect everyone else in society and their physical and psychological well-being.
By doing so, it drastically levels the playing field between the wealthy, the poor, and other traditionally identified classes of society, because it’s not looking at the same inaccurate metrics that most people (including economists and politicians) look at. In the case of Bill Gates versus a poor Nigerian child, I would look at the environmental sustainability of their lives and their overall human rights impact, and thus be better able to determine the number of lives that they affect and to what degree they promote or inhibit the well-being of others.
“On a side note, you noted that these healthcare rationing scenarios are merely hypothetical, but I want to suggest that, in fact, they aren’t.”
What I was trying to convey was that this discussion is based on the “scenario” that we are rationing healthcare, and what “moral” framework we are to use in such a case. I understand that healthcare rationing is a real thing that occurs in society. My comments were never denying that healthcare rationing occurs; rather, I was simply using “hypothetical” to refer to the specific scenario to which we are applying these moral and philosophical arguments.
“Anyway, there are notorious problems with consequentialist and utilitarian approaches to rationing goods, and I won’t get too much into what those are –I suspect Rawls abandons consequentialism on the basis that it actually can’t do so fairly, but I digress.”
I’m all ears if you have reasons for why it is problematic — other than the fact that it is difficult to implement because of the large amount of data needed, which I already acknowledge as a pragmatist and mentioned in an earlier comment. The only other problems I can see are that implementations don’t seem “fair” to many people, but remember, our feelings on the matter are less relevant than the facts of the matter, and thus the facts are what validate whether or not our approach is better than some other…
Lage
January 22, 2015
“If the true goal of morality is to increase the well-being of others (and for the long term), only a consequentialist approach can work, because it takes into consideration how every person’s behavior affects the overall well-being of everyone else.”
I should also specify that I meant that consequentialism is the only approach that can “maximize” the goals involved and accomplish them most effectively. Obviously other approaches can increase well-being, but they can’t maximize it, nor will they be as effective.