REFLECTION/DISCUSSION/WRITING QUESTIONS

1. Again, one of the difficulties facing utilitarian emphases on consequences – positive or negative – is whether predicted consequences, especially those further in the future, can be counted on to come about as promised. In particular, by 2018 “no empirical evidence” had been found to support any of the positive benefits Levy promised in 2007. Specifically, “rather than protecting sex workers, the dolls might fuel exploitation of humans” – that is, supporting one of Richardson’s key objections to Levy’s arguments and to sexbots more generally.

Review Levy’s list and see if you can think of any additional positive benefits of sexbots that he might have missed. Then, referring to Davis and/or the academic study she discusses, add the more negative consequences that might also accrue.

Using your best utilitarian calculus skills – what is the result? That is: sexbots – yes, no, and/or maybe?

Discuss your analysis – including what you think are the one or two most likely positive and negative consequences.

And: in your calculus, how did you determine how many positive and negative utils to assign to the diverse possible consequences?
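
If it helps to make the calculus explicit, the following is a minimal sketch in Python. Every consequence listed and every util and probability value is a purely illustrative assumption – not a figure drawn from Levy, Davis, or the empirical literature – and the point is precisely that the verdict turns on how you justify those assignments.

```python
# A toy utilitarian tally: assign each predicted consequence a signed
# util value, weight it by how probable you judge it, and sum.
# All entries and numbers below are illustrative assumptions only.

# (consequence, utils if it occurs, estimated probability)
consequences = [
    ("companionship for the isolated",    +10, 0.6),
    ("reduced demand for sex work",        +8, 0.3),
    ("fuels exploitation of humans",      -15, 0.4),
    ("reinforces objectifying attitudes", -10, 0.5),
]

expected_utility = sum(utils * prob for _, utils, prob in consequences)
verdict = "yes" if expected_utility > 0 else ("no" if expected_utility < 0 else "maybe")

print(f"Expected utility: {expected_utility:+.1f} -> sexbots: {verdict}")
# With these made-up numbers: 10*0.6 + 8*0.3 - 15*0.4 - 10*0.5 = -2.6 -> "no"
```

Notice that small changes to any util or probability flip the verdict – which is exactly the difficulty about predicted consequences raised above.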

2. Virtue ethics approaches: loving your sexbot – while s/he is faking it?

A. Levy has acknowledged that sexbots may well not have genuine emotions and desires – but that this doesn’t matter: “if a robot behaves as though it has feelings, can we reasonably argue that it does not?” Well, yes: in fact, the challenges of creating any sort of real emotion or desire in an AI or robot are so complex that robot and AI designers have long focused instead on “artificial emotions” – namely, crafting the capacities of such devices to read our own emotions and then fake an “emotional” response in turn.
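
To see how thin such “artificial emotions” can be, here is a minimal, purely hypothetical sketch in Python: the device classifies the user’s emotional state from surface cues and returns a scripted response. Every cue list and canned reply here is an invented assumption for illustration; real affective-computing systems are more sophisticated, but the architecture – read, then mimic – is the same in kind.

```python
# Hypothetical "artificial emotions" loop: classify the user's emotion
# from keyword cues, then return a canned "emotional" response.
# Nothing here feels anything -- the "emotion" is a lookup table.

EMOTION_CUES = {
    "sad":   ["sad", "lonely", "miss", "cry"],
    "happy": ["happy", "great", "love", "wonderful"],
    "angry": ["angry", "hate", "furious", "annoyed"],
}

SCRIPTED_RESPONSES = {
    "sad":     "I'm so sorry you feel that way. I'm here for you.",
    "happy":   "That makes me happy too!",
    "angry":   "I understand why you're upset. Tell me more.",
    "neutral": "Tell me more about that.",
}

def read_emotion(utterance: str) -> str:
    """Crudely 'read' the user's emotion from keyword cues."""
    words = utterance.lower().split()
    for emotion, cues in EMOTION_CUES.items():
        if any(cue in words for cue in cues):
            return emotion
    return "neutral"

def fake_emotional_response(utterance: str) -> str:
    """Return a scripted 'emotional' response -- mimicry, not feeling."""
    return SCRIPTED_RESPONSES[read_emotion(utterance)]

print(fake_emotional_response("I feel so lonely tonight"))
# -> I'm so sorry you feel that way. I'm here for you.
```

The sketch makes Sullins’s worry (below) concrete: the response is generated with no inner state at all, so the apparent feeling is deception by design.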

John Sullins (2012) has argued that such devices, however satisfying they might be on an emotional and physical level, are fundamentally objectionable on a deontological level: to intentionally deceive humans in these ways is to violate the respect owed to human beings as autonomous and equal Others.

On the other hand, it seems very likely that all long-term intimate relationships involve at least occasional “faking it” – that is, pretending to respond to the amorous desire of one’s lover with approximately equal desire. These sexual equivalents of “little white lies” may likewise be necessary components of human sociality – that is, well-intentioned deceptions that may help our relationships work more smoothly, or even flourish more fully in the long run.

But this raises the question of analogical argument. As you reflect on the above:

(i) is the analogy between “little white lies” and unwanted sex (at the extreme, marital rape) a good one? Why, and/or why not?

(ii) what about the analogy between a loving partner occasionally seeking to please his or her lover by “faking it” – and a sexbot intrinsically incapable of experiencing or expressing genuine emotion and desire, and which (who?) thereby is constantly faking it?

Given your analysis of these arguments and analogies – do you agree more with Levy that artificial emotion is enough, and/or with Sullins that artificial emotion is an unacceptable deception?