
/philosophy/ - Philosophy

Start with the Greeks


File: 1414278661880.jpg (4.62 KB, 218x232, 109:116, J Bentham.jpg)

aa60aa No.349

Committed Utilitarian here. Want to know your opinions. A good debate would also be welcome.

db5cdc No.350

>>349
How about you convince me to support an ethical system in which the survival lottery is in principle justifiably ethical?

aa60aa No.351

>>350
There are Rule Utilitarian reasons why this might be a bad idea in practice. However, if the basic idea is that 1 person is killed to prevent the deaths of 2 people, what's the problem? That's 1 more person not dying!

Could you please explain your objection so I can know what I'm debating?

963bf5 No.352

What's so good about the "greatest number"?

Suppose that the vast majority of people in a society hate and revile redheads, and greatly desire to murder them; and suppose further that there are only a few redheads extant at any time.

Is it "good" for the vast majority to slaughter redheads?

And if not, why not?

aa60aa No.353

>>352
In what situation could they possibly derive more happiness from killing redheads than the redheads could have from living? Also, in real life, it's usually a safe bet that you can make bigoted people stop being bigoted without killing anyone.

aa60aa No.354

Besides, the inverse of a concern for the "greatest number" is concern for the smallest number. Should we murder a majority-redhead population on behalf of a few bigots?

8af385 No.357

Do you truly wish to live in a society where you could be killed at any time to harvest your organs?

ac9f53 No.358

>>353
In the one I just described.
You just handwaved the issue; the point is that the utilitarian conclusion is to let the redheads die.
>>354
No, "we" should not because they own themselves.

aa60aa No.360

>>358
Dude, I <i>am</i> a Utilitarian and I just told you what my conclusion is. It's not handwaving to say that killing them generates less utility when <i>the whole fucking point</i> of Utilitarianism is to do what leads to the most utility. The Utilitarian conclusion must obviously be not to kill the redheads, since that leads to more suffering on net.

aa60aa No.361

>>357
That's an odd way to attack Consequentialism. "You shouldn't follow the philosophy of doing whatever leads to the best consequences because that will lead to bad consequences!"

Anyway, in a Consequentialist society, this issue would never come up. No one would die for lack of organs others have because we'd have legal organ markets (with subsidies for the poor).

On the other hand, even if we didn't, let's consider what this looks like:
Either you go through life with a 2% chance of dying from organ failure or a 1% chance of dying due to organ harvesting. In that case: fuck yeah! Sign me up for the world where I'm less likely to get killed!
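The comparison in the post above reduces to simple arithmetic, which can be sketched as follows (a minimal sketch; the percentages are the poster's hypotheticals, not real statistics):

```python
# The poster's hypothetical lifetime death risks under each regime.
p_organ_failure = 0.02  # chance of dying from organ failure, no lottery
p_harvested = 0.01      # chance of being killed for organs, with lottery

# The utilitarian in the thread prefers whichever world gives
# the lower overall chance of dying.
better_world = "lottery" if p_harvested < p_organ_failure else "no lottery"
print(better_world)  # lottery
```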

d56b09 No.362

>>360
And what if it did lead to less suffering? Would you kill them then?

aa60aa No.363

>>362
I suppose. However, at this point you've gone so far beyond possible reality that you might as well be asking "do you want one purple polka dot dragon or two?"

aa60aa No.364

>>357
>>358
How do you respond to the Trolley Problem and why?

e18aef No.366

>>353
I think you should research what is going on in Libya right now.

d56b09 No.368

>>363
>I suppose
>I suppose I would arbitrarily kill these people and feel good about it if I thought it would make the majority happier

d56b09 No.369


86919c No.386

I don't think it's right to judge the moral worth of an action by its outcomes as people aren't in complete control of the outcomes of their actions.

I think it is unjust to decide the moral worth of an action (and by extension, a human) based on something they don't have complete control over.

I think it is correct to judge the moral worth of a human by their intentions rather than their outcomes as they have complete control of their intentions (though that is debatable).

Therefore, I am opposed to utilitarianism as I feel we ought to judge people by their intentions rather than the outcomes of their actions.

ac16f4 No.388

>>349 First time posting on 8chan so bear with me here. But it's basically #GamerGate, or at least the anti-side of it and its willingness to silence anyone who doesn't agree with them, all in the name of equality.

Maybe a better way of putting it is "Beware that, when fighting monsters, you yourself do not become a monster… for when you gaze long into the abyss, the abyss gazes also into you."

ec41d3 No.431

>>386
Morally, you may have a point, but practically, one cannot simply wave away all ills committed with the best of intentions with a "poor feller didn't mean any harm." Maybe the leader who ran his country into the ground had the noblest of intentions, but if it resulted in the ruin of his country we can say that he was unsuccessful, possibly even incompetent.

d56b09 No.432

>>431
I'm pretty sure that no one suggested giving the most authority to the least competent people. It's a case of weighing people's competency, something utilitarianism too must obviously rely on.

9be5cb No.567

>>432
Why do we have morality in the first place? To prevent bad decisions from being made.
Social and emotional repercussions scare off 'bad' acts.
Why make the distinction between willful harm and callousness? Both need to be prevented.

2fb799 No.1638

Consequentialism is a bit problematic in that you can never be certain of the exact moral value of your actions until after you have done them; intention is made worthless. I think this is fine because, in the world we live in, most of our direct actions (involving one step) usually succeed in their intentions. However, consequentialism would make no sense on levels where consequences are not easy to determine beforehand, such as politics or whole systems of ethics. Here we have the central problem of actual consequentialist ethics, then: it would be best to have deontological ethics with the implicit notice that you can break the rules in extreme cases.


0f167f No.1639

>>349

I think it's the 'easiest' ethical system to understand and to find somewhat appealing. But I also think it has a ton of problems. Though consequences aren't always easy to determine, they always result from actions and it makes sense to judge actions by them. Of course, this requires the assumption that every person is free to take any action (I don't mean in the 'free will' sense, but rather in the context of social possibility), is somewhat aware of its consequences, and is willing to accept them. Different people might consider different outcomes worse than others, so there's that to consider as well, since someone acting a certain way convinced it results in the best outcome possible can't be dissuaded by a mere "no it's not". At this point, consequentialism takes a back seat to the assignment of attributes of good and bad to different circumstances or actions in the world. Consequentialism is a method, but it doesn't provide the direction (unless you are a utilitarian or humanist or something like that in addition to being a consequentialist). So the best way to put it is that consequentialism is incomplete, and thus incompetent as an ethical system on its own.

>>354

There is no reason to accept the inverse of a majority. That's an additional clause which cannot be justified within the confines of consequentialism (arguably not even within utilitarianism).

>>360

>The Utilitarian conclusion must obviously be not to kill the redheads, since that leads to more suffering on net.

Death does not equal suffering. Even if you kill someone painfully, they, upon death, cease to be a factor and become neutral. If you don't kill them painfully, then you only lose potential (which can be both positive and negative). The only argument left is that people who are alive might suffer due to people dying. But aside from having little to do with 'utility', this argument is moot if there are more people who don't suffer due to these deaths. And that will always be the case, since most people aren't fazed by most other people's deaths - not even if they die horrifically. So it follows that if there is ANY kind of benefit to those people dying, the utilitarian is essentially obliged to axe them. Without appealing to the sanctity of life there is no way out of this.

>>361

>"You shouldn't follow the philosophy of doing whatever leads to the best consequences because that will lead to bad consequences!"

Realise that it's possible for "best consequences" to be bad for you. The natural cycle of life might benefit immensely from ridding itself of human life, yet we would hardly benefit from that. The poor might benefit hugely if the richest person on earth paid for all of them, but that person would not necessarily 'benefit' from that.

>>386

>I think it is correct to judge the moral worth of a human by their intentions rather than their outcomes as they have complete control of their intentions (though that is debatable).

Moral worth is a weird way to put it. Intentions precede actions, so it's still the actions that we judge. We'll judge a murderer with benign intentions less harshly than a murderer with no good intentions whatsoever, but we'd still judge them harshly either way. Intention, even if it can be established clearly, is not a free pass. And if, as you say, control over intentions isn't a guarantee, then obviously it can't be 'correct' to judge people completely on the basis of it.

>>1638

>It would be best to have deontological ethics with the implicit notice that you can break the rules in extreme cases.

But what are extreme cases? What about them makes them exempt from the rules? This is yet another case where we feel inclined to give 'intuition' or something of the like a free pass. If we are going to do that, why even bother with deontological ethics in the first place?


e188d8 No.1640

Consequentialism a shit.

Deontology a shit.

Virtue ethics or GTFO.


ee177b No.1667

why is happiness good? why is suffering bad?


0d6f09 No.1669

>>1667

Why would you even need to ask such a contrarian question?


db5cdc No.1670

>>1669

Are you afraid to face the groundlessness of your beliefs? This question demands that you define happiness, which few agree on, and argue for why it is the good in itself which all who are rational should strive for.

Just as Nietzsche asked why we should value truth above other things, you should ask yourself why happiness should be valued over other things.


e0550c No.1673

>>1670

Everything is ultimately groundless (see the Münchhausen trilemma); I don't see how this makes a difference. Happiness is by definition the phenomenological state that any given person wants to experience (whether that happiness is reached through a specific type of "suffering"/pain, BDSM, or such), and suffering the opposite.


ee177b No.1674

>>1673

why is it good? why is its opposite bad?


b238f8 No.1677

>>1674

You're still asking for justifications when none can be (ultimately) given. You can't ask yourself out of how the majority uses language.


ee177b No.1679

there you have it. no justification can be given for utilitarian ethics.



