AI is like a knife: it can cut bread, and it can kill. A knife by itself can do neither; for that to happen, a human must pick it up. From then on, everything depends on that person's intentions.
The word "intelligence" is included in the name of AI for marketing purposes: even the most modern AIs do not have intelligence. At all. Their work is based on probability, not on reason. These AIs also do not have free will.
Talking about the risks of AI is therefore like talking about the risks of knives. With modern AI, every risk comes from the people who have access to it and from their intentions. All modern AIs are controlled and monitored by humans; without humans they are just software, like Microsoft Word sitting idle, waiting for you to type.
In the fairly near future (a horizon of roughly 10-30 years), there will be AIs with something that could loosely be called free will: AIs running on quantum computers, just as today's AIs run on servers. The combination of quantum computers and the software we currently call "artificial intelligence" may give rise to a real AI that possesses reason.