Yeah, the Three Laws were/are something of a strange concept. They're deceptively simple:
1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
But in fact it requires a LOT of abstraction and thought to fully understand them, in a way I don't believe any amount of programming can ever hope to equal. The very concept of 'human being' alone is rife with ambiguity and interpretation, something Asimov's fiction explores quite a bit, most particularly in two places: the short story "That Thou Art Mindful of Him", where the robots decide that THEY deserve the title of human more than the biological species does, and "Foundation and Earth", where Solarian roboticists fudge the definition of 'human' to cover only their own branch of humanity to the exclusion of all others... which only goes to show that even the Three Laws can't protect humans from the machine apocalypse.
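To make the point concrete, here's a toy, purely hypothetical sketch of how a narrowed, hard-coded 'human' predicate hollows out the First Law's protection. Nothing here comes from Asimov's text; the function names, data, and criteria are invented for illustration only.

```python
# Toy illustration (hypothetical): a hard-coded, narrowed definition of
# "human" quietly removes everyone else from the First Law's protection.

def is_human(person: dict) -> bool:
    # The Solarian-style fudge, in caricature: only beings matching the
    # local criteria count as "human" at all. (Invented criterion.)
    return person.get("origin") == "Solaria"

def first_law_permits(action_harms, affected_people) -> bool:
    # "A robot may not injure a human being" -- but only those the
    # predicate above recognizes are protected by that clause.
    return not any(is_human(p) and action_harms(p) for p in affected_people)

if __name__ == "__main__":
    settlers = [{"origin": "Baleyworld"}, {"origin": "Aurora"}]
    harms_everyone = lambda person: True
    # The narrowed definition makes harming outsiders "lawful":
    print(first_law_permits(harms_everyone, settlers))  # prints True
```

The point of the sketch is just that the Laws are only as good as the definitions plugged into them; whoever controls the predicate controls who gets protected.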
Despite this capacity for abstraction in understanding definitions like 'human being', Asimovian robots are surprisingly (or not) incapable of critical analysis at times, as demonstrated in the Lije Baley/Daneel Olivaw novels by the rather elaborate means by which some of the murders are carried out, which often hinge on a robot doing something mind-numbingly naive, like preparing a poison without ever wondering what a human could want it for that wouldn't involve harming someone. After a while this begins to seriously undermine the whole idea of robot intelligence, and it almost leads one to believe that Asimov really thought robots were just dumb tools in the shape of humans... but maybe I'm being a bit harsh.