What bothers me about AI is not its potential to replace jobs or precipitate doomsday scenarios (the aspects on which much of the media concentrates) but the continuation of problems already experienced with "traditional" new technologies.
1) "Computer says no". Every time a "new" use is found, the companies deploying the technology insist it is foolproof. Evidence of error is denied, and people reporting errors are branded as fraudsters or idiots. Then it turns out there is a genuine problem. The example that springs to mind is the early use of ATMs: when people reported phantom withdrawals they were told that was impossible and the error must be theirs. It turned out there was a real fault.
It's also virtually (ha!) impossible to find out why a particular decision has been made. Automation can't explain its "reasoning", and appealing against a decision when you have no idea how it was reached is next to impossible.
2) Garbage In, Garbage Out. Computing and automation are only as good as the software/hardware they use, and that is only as good as the wetware that produces it. People introduce their personal assumptions and prejudices into the systems. For as long as computers have existed, programmers have not been particularly representative of the world as a whole, and so not very good at producing systems that reflect how people actually behave as opposed to how the programmers would like them to behave (looking at you, spydus!). Voice recognition, face recognition and the like turn out to be quite good at assessing young white males (disproportionate users of technology) and much poorer at recognising other groups.
3) Outliers. As human beings we tend to opt for solutions that fit our experience. We reject unlikely possibilities (well, except for conspiracy theorists). Automated systems tend to opt for "average" solutions. Most people are not "average" (how many people are of "average" weight and height?), and the further you are from what is determined to be the norm, the more likely you are to be excluded. That already happens in human interactions, but computers and automation tend to solidify the problem and, as with other problematic aspects of AI, there's no appeal.
4) Privacy. This is a question regularly sidestepped at the minute. Every time an organisation decides, for instance, that paying for parking can only be done by card or smartphone, it is forcing people to share information with organisations with whom the parker may not wish to interact. There's no control over the number of organisations that may be involved, and no control over, or knowledge of, how that information is being stored and used. There are a lot of weasel words about "transparency". This data collection is rarely benign. It isn't there to make your life "better" in any meaningful way: it is there so that organisations can "nudge" your behaviour in ways from which they can make money.
5) Finally, there is the need to consider AI rights, which will in turn involve the rights of other sentient beings: a road down which governments and businesses are reluctant to travel, because it would affect their ability to exploit the natural world for profit.
I can think of a whole lot of other problems (increasingly asymmetric access to information, for instance, which erodes decision-making and personal control). These problems already exist, and yet little is being done about them, because exploiting people for gain is regarded as a right. That people also have a right not to be exploited is ignored, because there's no money in that.