A simple argument that AI poses nontrivial danger
AI (extinction) risk arguments, I feel, often assume way, way too much - and then don't make those assumptions explicit, and end up with a mess that is "not even unfalsifiable." That does not mean that AI risk is not "real", or that it is not high - in fact, its mitigation can be hurt by these arguments: by not matching the eventual dangers, they may cause those dangers to receive less scrutiny and attention. That's why I prefer to assume, to "know", as little as possible, and to keep a very flexible view of where the risk comes from.
"My" candidate for a simple argument for AI risk, broadly construed. By the virtue of its simplicity, it doesn't try to imply anything about the likelihood of the AI extinction, it just nods at the possibility. And yet I feel that should be enough, and find all this talk about the precise p(doom) to be distracting at best.
The "rapid and dramatic change" argument
Premise 1: AI is likely to effect an astoundingly rapid and dramatic change in every aspect of life on this planet - more change in a short time than has happened since the dawn of our species.
Premise 2: Rapid and dramatic change is difficult to steer precisely, particularly when it involves many powerful stakeholders whose interests are not necessarily aligned (powerful nations, corporations, individuals, AIs), and when it involves very novel technology, as advanced AI will be.
Conclusion: It is possible that the change effected by AI will be extremely unaligned with human flourishing.
"Deep agnosticism" and its consequences
Assuming just the soundness of the above argument, and remaining agnostic about any specific proposed source of AI risk - what course of action does that suggest?
I'd say that you want to be in a civilization that is able to mount sane and effective responses to unforeseen circumstances. You want that at the level of your governments, of course, but you also want it in your public intellectual sphere, in your AGI labs' executives, in your group chats.
You want to understand the technology as deeply as it is possible to understand it. The technology's unpredictability compounds the risks presented by civilizational inadequacy.
You want peace, the rule of law, and the opportunity for newly arising stakeholders (i.e. AIs) to enter into mutually beneficial trade agreements with you.
Lastly, and very speculatively, I'll say that you might need a lot of AIs around - wiser than current humans - to help you navigate this turbulent period.