At one point he wrote that a robot would weigh which of its possible actions would be least harmful to humans and act on that basis; if two actions were equally harmful, it would choose between them at random. When I imagine a robot in such a position, forced to act to protect one or many humans, I like to picture a person in the same circumstance. A person making that choice would be swayed by emotion, while the robot would rest its choice on logic alone. Which is preferable, from a holistic point of view? Then again, isn't that very distinction what MAKES us human?