@rob said in Way too many categories:
And while others here don't seem to have made this connection (yet?), to me the problem has gotten 100 times more urgent very recently, given that we're suddenly in an AI arms race, which a divided society is especially not ready for. It's sort of like nuclear weapons, except that generative AI spins gold right up until it destroys us all. And one nice thing about nuclear weapons is we can be pretty sure that the weapons themselves aren't going to decide on their own to wipe us out. Another nice thing about nuclear weapons is that the people who build them actually know how they work. (Generative AI such as GPT-4 is essentially an enormous matrix of floating-point numbers, and no one on the planet truly understands why it works the way it does.)
(hey I've been accused of being alarmist before. Usually I don't think I am. Here, yeah, I'm pretty freaking alarmist.)
I've quoted from a different thread because I think this discussion may be more relevant here.
I wrote a paper a few years ago making an argument, based on anthropic reasoning, against the future existence of superintelligent AI. You might not agree with the argument, or might not think it's a very strong one, but I thought I'd leave it with you anyway!