laws, as practised by people, aren't the same as laws of physics - well, at least if you have a naive, high-school-level understanding of physics (and of people).
laws are approximate, because they are continually being re-interpreted. this is intentional - it keeps lawyers in employment. but it also allows for futures where circumstances arise that weren't predicted by the law makers.
so maybe consider the landscape of law as evolutionary - developing in reaction to the environment.
and not optimal, but just fitting, as best it can, to the current world (with some time lag).
so it's some kind of reinforcement learning system.
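to make the analogy concrete, here's a minimal sketch in python - all the names (Rule, update_rule, the random "outcomes") are invented for illustration, not any real legal database or RL library. each decided case nudges the weight of an interpretation up or down, and the small learning rate is the time lag:

```python
import random
from dataclasses import dataclass

@dataclass
class Rule:
    text: str
    weight: float  # how strongly courts currently lean on this interpretation

def update_rule(rule: Rule, outcome: float, learning_rate: float = 0.1) -> None:
    """nudge a rule's weight toward the observed outcome of applying it.

    outcome > 0: the interpretation fitted the case well (reward)
    outcome < 0: it produced a bad result (punishment)
    the small learning_rate is the "time lag" - the law only
    drifts slowly toward the current world.
    """
    rule.weight += learning_rate * outcome

# each decided case re-interprets the rule a little
rule = Rule("no vehicles in the park", weight=1.0)
for _ in range(100):
    outcome = random.uniform(-1.0, 1.0)  # stand-in for how well the ruling fitted
    update_rule(rule, outcome)
print(rule)
```

note the design choice: the rule's text never changes, only its weight - which is roughly how precedent works, too.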
so asimov suggested 3 (later 4) laws of robotics, and he laid down the law - he wrote down what he (at least initially) believed was a good enough set to cover all future situations (until he added the 4th, or zeroth, law). it was likely based on his learned reading of scripture (think: ten commandments, redux - I suppose robots didn't worship any god or own any goods, so a couple of the commandments were immediately unnecessary - more fool him :-)
[most of the stories in the first I, Robot collection, and indeed in the robot novels like caves of steel etc., are basically about debugging]
but what if the laws hadn't been handed down written in stone (or silicon, or positronic hard-wired pathways)? what if we (oops, sorry, not we - the robots, we robots) just acquired the laws by learning them through a system of punishment and reward? what could possibly go wrong?
well, obviously, initially, robots would have to make mistakes - after all, we learn from our mistakes, so why shouldn't they? that raises a question - why should a robot care about "punishment" or "reward"? animals have pain and pleasure - reinforcement is an increase or decrease in one or the other (or both).
so maybe robots need to be hardwired with pain and pleasure centers? and one law, which is to pay attention to those centers and update all the other laws accordingly.
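here's a hypothetical sketch of that proposal, in the same toy python as before - the Robot class, its laws, and the meta_law method are all made up for illustration. the pain and pleasure centers are hardwired (the robot can't rewrite them), and the one fixed law just reads those centers and adjusts whatever learned law was acted on:

```python
from dataclasses import dataclass, field

@dataclass
class Robot:
    # hardwired: the robot cannot rewrite these centers
    pain: float = 0.0
    pleasure: float = 0.0
    # learned: weights over candidate behavioural laws
    laws: dict = field(default_factory=lambda: {
        "don't harm humans": 0.5,
        "obey humans": 0.5,
        "protect yourself": 0.5,
    })

    def feel(self, pain: float, pleasure: float) -> None:
        """the environment (or a human with a stick) sets the centers."""
        self.pain, self.pleasure = pain, pleasure

    def meta_law(self, acted_on: str, rate: float = 0.05) -> None:
        """the one hardwired law: pay attention to the centers and
        update whichever law was just acted on accordingly."""
        signal = self.pleasure - self.pain
        self.laws[acted_on] += rate * signal

r = Robot()
r.feel(pain=0.9, pleasure=0.0)   # the mistake was punished
r.meta_law(acted_on="protect yourself")
print(r.laws)
```

note what could go wrong here: nothing stops a law's weight drifting to zero, or negative - the robot could learn its way right out of "don't harm humans", one small nudge at a time.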
or maybe we should just turn them off.