At one point he wrote that the robots can judge which of two actions would be more harmful to humans and act based on that judgment; if two actions are equally harmful, the robot chooses at random. When a robot is in a position like that, where it must act to protect one or many humans, I like to imagine a person in the same circumstance. The person's choice would be swayed by emotion, while the robot's would rest on logic. Which is preferable, from a holistic point of view? Then again, isn't that distinction what MAKES us human?

Startup product manager. Sci-fi, fantasy, and science writer.
