I need sci-fi fans to please be normal about the Three Laws of Robotics. These are not a serious proposal about AI ethics; they're a narrative logic puzzle created to facilitate writing detective stories about robots. They're the sort of thing you'd invent to annoy Daniel Craig if you made a movie where Benoit Blanc goes to space.
A couple of folks have commented that the Three Laws are like the Prime Directive in Star Trek, but that comparison doesn't really hold.
The problem with Star Trek’s Prime Directive is that half the time the shows are genuinely trying to take it seriously and crashing face-first into its moral inadequacy.
The Three Laws of Robotics, by contrast, are explicitly designed to fail. Their storytelling function is to create verbal logic puzzles with tangible consequences.
When a robot in an Asimovian detective story manages to twist "a robot may not injure a human being or, through inaction, allow a human being to come to harm" into something horrifying by playing semantic games with the definition of the word "human", the story isn't trying to make a profound statement about the nature of humanity; it's presenting a puzzle-box for the reader to solve, challenging them to figure out how the robot did it before the detective does.