I would argue that probably the biggest danger of AI is how it is currently set up to be the final death knell for privacy. We utterly failed to step up and protect privacy during the social media and digital ad era of the last 15 years or so, and AI is going to take all those privacy issues and put them on steroids. (My day job is consumer privacy advocacy; tractors and working in the forest are just way more fun.)
Right now, the number one use people cite for generative AI is companionship/therapy. That means they are offering up their most intimate, personal selves to AI because they find it helpful. I'm not judging whether it is helpful or not; I know people can feel lonely or simply enjoy chatting with something low-pressure. But when you think of all that personal information going to big companies with terrible track records on privacy, it's really scary. Couple that with companies' stated goals of collecting everything about you -- your emails, calendar, travel preferences, likes, dislikes, financial data, emotional states, health data -- to offer up helpful AI agents that make your life easier, and our privacy is doomed. That really worries me, because I think humans are better off when they have access to privacy rights.
There are things we can (and need to) do to change this and make our personal information work for us rather than for the companies, so that AI actually can help us in the future. The problem is that would take real work, with policymakers and with people like us demanding better. And right now there is too much else going on to focus on it. Which is a shame, because I truly believe the loss of our privacy, now and accelerating into the future, will be one of the biggest issues we humans face in the next few decades.