Super Intelligent A.I. is Neither Necessary nor Desirable
A.I. Risk and the dangers of the cult of AGI
Hey Guys,
The folks on LessWrong are interesting philosophers, but not always very rational. Transhumanists seem to think AGI and SAI are wonderful potential creations. I’m interested in the question of A.I. Risk given a Superintelligent A.I. that is more or less defined as “smarter than the sum-total of all of humankind.”
What would such an intelligence be like? How would it see itself? How would it treat human beings?
While we are not yet at the point where such an entity is feasible, we already find semi-serious debate, and many serious researchers, academics, and analysts who think such a superintelligent A.I., one that eclipses the sum-total of all human intelligence, is inevitable.