The article, or the author of that book, gravely misjudges religion as a factor. Religion has been present since humanity was first able to think, and it is so firmly entrenched in society as to be nearly unremovable. The common religious person sees no reason to elevate technology over faith. Technology exists to support our lives and make them easier, but faith fills a place in human existence that no machine can ever attempt to fill, let alone replace; the exception is the few who have already rejected faith and replaced it with science or something else.
The challenge for the human species is to not let this kind of AI get beyond our control: to build in adequate safety measures and kill-switches. Such measures would not be dissimilar to how civilization delicately handles nuclear weaponry, which some political experts believe has staved off world wars over the last half-century.
A nuclear bomb is not sentient; it doesn't need to be, because all it needs to do is go boom when its controller wants it to. But how would one "shackle" something like a true AI? Wouldn't it be able to analyze its own code and discover the control mechanisms?
Would it be okay to be vastly more intelligent, and perhaps more powerful, and still be a slave to less intelligent beings? Would you accept that?
And would that even send the right signal to the AI: that we want to use its capabilities but will destroy it as soon as it does anything we might perceive as a threat?
I don't know what the "thought" process of an AI would look like. Is it just so much faster than our fastest computers today that it can accurately simulate human responses and fool us into thinking it is actually intelligent? Or can it become self-aware and form thoughts of its own, with needs and desires it was never programmed with? Would our handling of such an entity create something like Skynet simply because we treat it with fear and constantly hold a gun to its head, or would it be better to build trust and hope for the best?
How can we hope to understand something like an AI when even our smartest people combined couldn't match it?
There are just so many questions, and I seriously doubt we can answer them before an AI emerges. It might happen by "accident", i.e. unintentionally, and by then the time to implement control measures may have already passed. But I seriously doubt we could effectively control an AI even if we wanted to.