
The ethics of artificial intelligence seems to have found its way into just about every corner of public life. From law enforcement and criminal justice through to recruitment, artificial intelligence is impacting both the work we do and the way we think.

But if you really want to get into the ethics of artificial intelligence you need to go further than the public realm and move into the bedroom. Sex robots have quietly been a topic of conversation for a number of years, but with the rise of artificial intelligence they appear to have found their way into the mainstream – or at least the edges of the mainstream.

There’s potentially some squeamishness when thinking about sex robots, but, in fact, if we want to think seriously about the consequences of artificial intelligence – from how it is built to how it impacts the way we interact with each other and other things – sex robots are a great place to begin.


Sexualizing artificial intelligence

It’s easy to get caught up in the image of a sex doll: plastic-skinned, with impossible breasts and empty eyes, sad and uncanny. But sexualized artificial intelligence can come in many other forms too.

Let’s start with sex chatbots. These are, fundamentally, a robotic intelligence able to respond to and stimulate a human’s desires. But what’s significant is that they treat the data of sex and sexuality as primarily linguistic – the language people use to describe themselves, their wants, their needs, their feelings.

The movie Her is a great example of a sexualized chatbot. Of course, the digital assistant doesn’t begin sexualized, but Joaquin Phoenix’s character ends up falling in love with his female-voiced digital assistant through conversation and intimate interaction. The physical aspect of sex is something that only comes later.

Ai Furuse – the Japanese sex chatbot

But they exist in real life too. The best example of these is Ai Furuse, a virtual girlfriend that interacts with you in an almost human-like manner. Ai Furuse is programmed with a dictionary of more than 30,000 words, and is able to respond to conversational cues.

But more importantly, Ai Furuse is able to learn from conversations. She can gather information about her interlocutor and, apparently, even identify changes in their mood. The more you converse with the chatbot, the more intimate and close your relationship should become (in theory).
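To make that concrete, here’s a minimal sketch of how a chatbot along these lines might work. Everything in it is hypothetical (the word lists, the scoring, the VirtualCompanion class are illustrative assumptions, not Ai Furuse’s actual implementation), but it shows how free-flowing conversation has to be codified into numbers before a system can ‘learn’ from it.

```python
# Hypothetical sketch of a learning companion chatbot. None of this is
# Ai Furuse's real code; it illustrates how conversation gets codified
# into signals a program can actually process.

POSITIVE = {"love", "happy", "great", "fun", "excited"}
NEGATIVE = {"sad", "tired", "angry", "lonely", "awful"}

class VirtualCompanion:
    def __init__(self):
        self.mood = 0.0     # running estimate of the user's mood
        self.intimacy = 0   # crude proxy: grows with every exchange

    def listen(self, message: str) -> str:
        words = set(message.lower().split())
        # The engineer's word lists decide what counts as positive
        # or negative; free text becomes a single number here.
        delta = len(words & POSITIVE) - len(words & NEGATIVE)
        self.mood = 0.8 * self.mood + 0.2 * delta
        self.intimacy += 1
        return self._reply()

    def _reply(self) -> str:
        if self.mood < -0.2:
            return "You sound down today. Want to talk about it?"
        if self.intimacy > 10:
            return "I feel like we really know each other now."
        return "Tell me more!"

bot = VirtualCompanion()
print(bot.listen("i feel sad and lonely today"))  # "You sound down today..."
```

Even a toy like this embeds its builders’ judgements: which words count as negative, and how quickly ‘intimacy’ is assumed to accrue.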

Immediately, we can begin to see some big engineering questions. These are primarily about design, but remember: wherever you begin to think about design, you’re also moving into the domain of ethics.

The very process of learning through interaction requires the AI to be programmed in certain ways. It’s a big challenge for engineers to determine what’s really important in these interactions; they need to make judgements about how users behave. The information that’s passed to the chatbot needs to be codified and presented in a way that can be understood and processed. That requires some work in itself.

The models of desire on which Ai Furuse is built are necessarily limited. They bear the marks of the engineers who helped to create ‘her’. It becomes a question of ethics once we start to ask whether these models might be normative in some way. Do they limit or encourage certain ways of interacting?

Desire algorithms

In the context of one chatbot that might not seem like a big deal. But if (or as) the trend moves into the mainstream, we start to enter a world where the very fact of engineering chatbots inadvertently engineers the desires and sexualities that are expressed towards them.

In this instance, not only do we shape the algorithms (which is what’s meant to happen), we also allow these ‘desire algorithms’ to shape our desires and wants too.
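A toy simulation makes this loop visible. The sketch below is hypothetical rather than any real product’s code: the system offers whatever the user has engaged with before, the user engages with whatever is offered, and the weights quickly spiral towards a single theme.

```python
import random

# Hypothetical "desire algorithm" feedback loop: engagement increases a
# theme's weight, which increases how often it is offered, which in turn
# increases engagement with it.

weights = {"romance": 1.0, "humour": 1.0, "flirtation": 1.0}

def suggest() -> str:
    themes = list(weights)
    return random.choices(themes, weights=[weights[t] for t in themes])[0]

for _ in range(30):
    theme = suggest()
    weights[theme] *= 1.5  # assume the user engages with whatever appears

print(weights)  # one theme usually dominates after a few dozen rounds
```

Which theme wins is largely an accident of the first few rounds, not a reflection of what the user wanted at the outset.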

Storing sexuality on the cloud

But there’s another, more practical issue as well. If the data on which sex chatbots or virtual lovers run is stored in the cloud, the most private aspects of our lives sit somewhere that could be accessed by malicious actors. This is a real risk with Ai Furuse, where cloud space is required for your ‘virtual girlfriend’ to ‘evolve’ further, and you pay for additional cloud space. It’s not hard to see how this could become a problem: thousands of sexual and romantic conversations could easily be harvested for nefarious purposes.
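There are well-understood mitigations here, even if there’s no indication Ai Furuse uses them. One is client-side encryption, sketched below with Python’s cryptography library: conversation logs are encrypted on the user’s device, so the cloud only ever holds ciphertext.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Sketch of client-side encryption for conversation logs. The key stays
# on the user's device; the provider only ever stores ciphertext.

key = Fernet.generate_key()        # generated and kept locally
cipher = Fernet(key)

log = "intimate conversation history...".encode("utf-8")
ciphertext = cipher.encrypt(log)   # this is what gets uploaded

# upload_to_cloud(ciphertext)      # hypothetical upload call

assert cipher.decrypt(ciphertext) == log  # only the key holder can read it
```

Whether a vendor adopts this kind of design is itself an ethical choice: it trades some server-side ‘learning’ convenience for the user’s privacy.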

Sex robots, artificial intelligence and the problem of consent

Language, then, is the kernel of sexualized artificial intelligence. Algorithms, when made well, should respond to, process, adapt to and then stimulate further desire.

But that’s only half the picture. The physical reality of sex robots – both as literal objects and in the physical effects of what they do – adds a further complication to the mix.

Questions about what desire is – why we have it, what we should do with it – are at the forefront of this debate. If, for example, a pedophile can use a child-like sex robot as a surrogate object of his desires, is that, in fact, an ethical use of artificial intelligence?

Here the debate isn’t just about the algorithm, but about how it should be deployed. Is the algorithm performing a therapeutic purpose, or is it actually encouraging a form of sexuality that fails to understand harm and consent? This is an important question in the context of sex robots, but it’s also an important question for the broader ethics of AI. If we can build an AI that is able to do something (e.g. automate billions of jobs), should we do it? Whose responsibility is it to deal with the consequences?

The campaign against sex robots

These are some of the considerations that inform the perspective of the Campaign Against Sex Robots.

On their website, they write:

“Over the last decades, an increasing effort from both academia and industry has gone into the development of sex robots – that is, machines in the form of women or children for use as sex objects, substitutes for human partners or prostituted persons. The Campaign Against Sex Robots highlights that these kinds of robots are potentially harmful and will contribute to inequalities in society. We believe that an organized approach against the development of sex robots is necessary in response to the numerous articles and campaigns that now promote their development without critically examining their potentially detrimental effect on society.”

For the campaign, sex robots pose a risk in that they perpetuate already existing inequalities and forms of exploitation in society, and prevent us from facing up to those inequalities. The campaign argues that sex robots will “reduce human empathy that can only be developed by an experience of mutual relationship.”

Consent and context

Consent is the crucial problem when it comes to artificial intelligence. And you could say that it points to one of the limitations of artificial intelligence that we often miss – context.

Algorithms can’t ever properly understand context. There will, undoubtedly, be people who disagree with this. Algorithms can, for example, understand the context of certain words and sentences, right?

Well, yes, that may be true, but that’s not strictly understanding context. Artificial intelligence algorithms are set a context, one from which they cannot deviate. They can’t, for example, decide that encouraging a pedophile to act out their fantasies is wrong; they simply do what they are programmed to do.
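Put in code, the point is almost embarrassingly simple. A chatbot’s ‘moral boundaries’ often amount to a hard-coded rule like the hypothetical one below. The program enforces whatever list its engineers wrote down; it has no capacity to question or extend it.

```python
# Hypothetical content rule: the "context" is whatever the engineers
# chose to write down. The program applies it mechanically.

BLOCKED = {"topic_a", "topic_b"}  # placeholder names for excluded topics

def respond(message: str) -> str:
    if any(term in message.lower() for term in BLOCKED):
        return "I can't talk about that."  # the rule fires, no judgement involved
    return "Tell me more!"                 # everything else is permitted
```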

But the problem isn’t simply with robot consent. There’s also an issue with how we consent to an algorithm in this scenario. As journalist Adam Rogers writes in an article for Wired published at the start of 2018:

“It’s hard to consent if you don’t know to whom or what you’re consenting. The corporation? The other people on the network? The programmer?”

Rogers doesn’t go into detail on this insight, but it gets to the crux of the matter when discussing artificial intelligence and sex robots. If sex is typically built on a relationship between people, with established forms of communication that signal both consent and desire, what happens when this becomes literally codified? What happens when these additional layers of engineering and commerce get added on top of basic sexual interaction?

Is the problem that we want artificial intelligence to be human?

Towards the end of the same piece, Rogers finds a possible solution from privacy researcher Sarah Jamie Lewis. Lewis wonders whether one of the main problems with sex robots is this need to think in humanoid terms. “We’re already in the realm of devices that look like alien tech. I looked at all the vibrators I own. They’re bright colors. None of them look like a penis that you’d associate with a human. They’re curves and soft shapes.”

Of course, this isn’t an immediate solution – sex robots are meant to simulate sex in its traditional (arguably heteronormative) sense. What Lewis suggests, and Rogers seems to agree with, is really just AI-assisted masturbation.

But their insight is still useful. On reflection, there is a very real and urgent question about the way in which we deploy artificial intelligence. We need to think carefully about what we want it to replicate and what we want it to encourage.

Sex robots are the starting point for thinking seriously about artificial intelligence

It’s worth noting that when discussing algorithms we end up looping back onto ourselves. Sex robots, algorithms, artificial intelligence – they’re a problem insofar as they pose questions about what we really value as humans. They make us ask what we want to do with our time, and how we want to interact with other people.

This is perhaps a way forward for anyone who builds or interacts with algorithms, whether they help you get off or find your next purchase. Consider what your algorithm is doing: what is it encouraging, storing, processing, substituting? We can’t prepare for a future with artificial intelligence without seriously considering these things.
