Probably a cold take, but wasn’t Isaac Asimov’s point that the Three Laws of Robotics don’t work?
I think it’s more like “any system of codified ethical behavior, no matter how well-meaning and well-designed, will ultimately run up against difficult decisions it cannot fully answer in all their complexity.” Kind of like Gödel’s incompleteness theorem, but for ethics.
Yeah, so its relevance to discussions about AI would be more as a cautionary tale than a foundation, right?
There’s also the fact that many of Asimov’s early robot stories are detective fiction in sci-fi clothing. In this context, the Three Laws are basically a way of back-dooring logic puzzles into detective stories: a human investigator (and their helpful robot sidekick) shows up, learns that a robot has been accused of something that ought to have been impossible under the Three Laws, and has to figure out how it happened. The cautionary-tale stuff mostly came later, once Asimov started taking his own plot device more seriously, but the Laws never entirely stopped being a logic-puzzle-enabler.