We are at a tipping point IMO, with respect to what AI can do for... or TO... our society.
As with many things, it can be a great timesaving tool... or it could literally destroy society as we know it (only slightly hyperbolic).
As the algorithms get better and better, and it becomes harder and harder to distinguish AI-generated from human-created, entire processes that were once manual could now be almost 100% automated.
We're already seeing the impact on music, art, pr0n, but the high-profile, media-focused clickbait stories are just the tip of the iceberg.
1. Entire industries could see mass layoffs as AI takes over major functions; cumulatively this could be disastrous.
2. We already have a hard time telling deepfakes from reality when it comes to news fabrication and manipulation; it's poised to get even worse during election cycles... the term "fake news" was never more accurate.
3. Lawyers are chomping at the bit: the legal and ethical entanglements are ginormous. Copyright and intellectual property transgressions loom all over the place, not just in the arts. Schools, colleges, and universities are struggling with how to deal with AI "tools" when it comes to assignments, research papers, theses, and published articles. If you can prompt an AI tool to spit out a term paper for you and you just tweak it, have you actually done any research? Is being published actually proof of accomplishment or intelligence anymore?
The biggest problem right now is the vetting of what any given AI tool spits out vs. trusting it blindly; we've seen this recently with some high-profile Google AI gaffes... this has the potential to be quite dangerous.
It may not actually be SkyNet 2.0, but there are aspects that are VERY concerning...