I enjoy it when sci-fi shows deal with present and future concerns in a futuristic, abstract setting. Star Trek did it in the ’60s by tackling racism, ethical issues, and the advances of science and technology, and Doctor Who just did it in its exploration of a futuristic world of humans so lazy they intend to do no work even for their own benefit, and who enslave robots instead. The reduction of communication to emojis, and the Doctor’s deadpan assertion that the human race is the only one in history to use the damn things, are quite on point and thought-provoking in and of themselves.

But for now, let’s focus on the robots… and the morality of robots. The question this episode asks is: at what point does a thing become sentient? In time, the Doctor discovers the robots are not an impersonal system, and that the emoji robots are not simply linked to the micro-bots that create the city itself; they are alive. They were programmed to make humans’ lives as easy as possible, a directive that turned destructive when they were confronted with something they had never experienced before – grief. In an attempt to “help,” they wind up killing people.

The Doctor highlights an important truth – one person’s “evil” is another creature’s survival instinct. The beast that savages your campsite is not malicious, but hungry or threatened. Thus, your enemy still has a perspective, and what you consider an “evil” instinct may be a survival technique or a coping mechanism. The robots seem evil until we understand that their intentions are good – in fact, an extension of what humans wanted from them; it’s just that their method is wrong. Which means the humans who conceived the robots in the first place had the wrong motive: they created a “slave race” to make their lives easier, so they would have to do nothing except “monitor the robots.”

Once upon a time, this concept would have seemed outrageous to the human mind – but we are on the cusp of a robotic society; scientists are developing and planning robots to do everything from driving your car to sexually pleasing you. They are developing robot pets, robot secretaries, robots to fetch things off shelves and run systems. Surreal or not, modern people must now face an ethical dilemma past generations never imagined – the morality of robots.

Take, for example, the sex-bot. The concept may be a good one, since it could stem primal urges – no one can object to someone with certain sexual predilections acting them out on a robot instead of, say, a child. But think about sex for a minute. What is its function, other than procreation? Beyond the need to “satisfy sexual urges,” it is (most of the time) about two people giving one another pleasure. It is not a solitary thing, and quite often it is an emotional expression of affection that deepens a romantic relationship. So if you remove one human from the equation, it becomes all about self-gratification instead of pleasuring your partner. Thus, the robot exists only to serve your sexual needs, and receives nothing from you in return.

You might think that’s fine; it’s just a robot… but at what point might a robot become sentient – or even turn malicious, once it achieves the level of intellect required to become aware that it is a… slave? And is creating a society based on robots moral or ethical from a human-welfare perspective?

If robots can do anything and everything, and we “let” them… what purpose do humans serve? Robots already assemble cars in factories – jobs ordinary human beings once held, which gave them an income, a sense of purpose, and something to occupy their time. Amazon.com employs thousands of people… who may lose their jobs to robots. If you have a robot pet, what happens to the millions of animals in shelters who need you? If we have robot secretaries, where do all the human secretaries go? Not everyone has the aptitude to become a robot technician or to “monitor” robots. So, what happens to the rest of us?

Humans tend toward extreme laziness and desire a society that caters to their every whim; they often vote for politicians who promise them an easier life. Once, this same “money over ethics” mindset (cheap labor) and aversion to physical work prompted many to own human slaves. Why pay people for work a slave can do for free? Why pay people for a job a robot can do?

I realize that, thus far, robots are not human and do not have “feelings,” but it’s the mindset behind all of this that troubles me: a mindset that pursues personal benefit over ethical concerns, that wants an “easy life,” with someone or something to provide everything for their pleasure. The humans in this episode thought they were going to settle on a new planet, use robots for everything from raising the roof to portioning their food, and never pay for it. The Doctor set them straight. He always does.

But what does this episode tell us about society… or ourselves?