Unless they actually have the ability to come up with their own unique goals, instead of just being really good at pursuing the goals programmed in by humans, they're not 'our children', they're tools.
You do realize that creativity is nothing more than the ability to connect the varied information humans have been exposed to throughout their limited existence?
Compared to a computer algorithm, a human can accumulate only a small fraction of the knowledge in any given field; even reading nonstop (no rest, no sleep, no eating, etc.), it would take them hundreds of thousands of years to learn everything we've accumulated to date on just the one subject they specialize in - and keep in mind that NEW information comes out in practically every field on a daily basis at this point.
On the other hand, a simple algorithm can easily go through the collective knowledge of all humanity in a fraction of that time, find patterns across MULTIPLE fields, and come up with solutions thousands of times faster than any human possibly could.
In fact, computers were demonstrated doing this years ago.
To be fair, I doubt a general AI would pose a threat to humanity.
If anything, humanity poses a threat to itself, because the majority of it wallows in ignorance (which leads to chaos and problems).
People then transpose those human limits onto AI and computers and make grandiose assumptions based on badly made Hollywood movies whose understanding of science and technology (not to mention human behavior) is practically zero.
In fact, when humans are exposed to relevant general education, critical thinking, and problem solving, they usually have no intention of hurting others.
A general AI would likely be no different... unless, of course, its core programming was limited to negative patterns such as competition, greed, and military objectives. Obviously, if you limited a human to those conditions, they would probably end up as nothing more than a trained killing machine that obeys orders (which is, incidentally, what military training produces).
And if an AI went 'rogue' with just that limited baseline, it would act in accordance with what it knew... like any human who behaves a certain way without knowing any better.