I'm very close to the cutting edge in AI, and it scares the hell out of me. It's capable of vastly more than almost anyone knows. I'm not normally an advocate of regulation and oversight. I think one should generally err on the side of minimizing those things. But this is a case where there is a very serious danger to the public, and therefore there needs to be a public body that has insight and then oversight to confirm that everyone is developing AI safely.
I think the danger of AI is much greater than the danger of nuclear warheads, by a lot, and nobody would suggest that we allow anyone to just build nuclear warheads if they want. That would be insane. And mark my words: AI is far more dangerous than nukes. Far. So why do we have no regulatory oversight?