Spectrum of speculation
I have found it useful to group positions on artificial intelligence along five axes, each spanning a spectrum of perspectives. Knowing where someone falls on each axis helps reveal their hopes and fears about AI.
- AI bad — AI good
- AGI close — AGI far
- Slow takeoff — fast takeoff
- Centralize — decentralize
- Anthropocentric — biocentric — theocentric
Net benefit
- AI is net bad for humanity
- AI is net good for humanity
Superintelligence
- Close — AGI could happen in the next few years
- Far — AGI will not happen in our lifetime, perhaps never
Takeoff
- Slow — reaching AGI, if it happens at all, will be a slow, iterative process
- Fast — AGI could begin self-improving and reach superintelligence in a matter of days, weeks, or months
Centralization
- Centralized — AI should be tightly regulated and kept under strict controls
- Decentralized — AI should be accessible to all humans
Human-centricity
- Anthropocentric — AI should be in the service of humanity
- Biocentric — AI is part of nature and will be our successor
- Theocentric — AI is the creation of a new god