Well, or at least AI doomers are applied theologians. But then, surely every doomer is an amateur theologian, so it is no surprise to find this when reading the new book by Yudkowsky and Soares, you know, «If Anyone Builds It, Everyone Dies».
Why did Buddha return to teach after reaching nirvana? Why did the Hebrew God ask for light? And more importantly, why should that g-d, or any gods, be interested in creating humans? The answer, for the authors, is that They are not. This is perhaps a novelty with respect to old theology, because there the starting point was that humans existed, and thus the gods had created them, and we could hope they created us out of some interest rather than by mere accident. But by reversing the causality, with humans existing first and the gods being created afterwards, that empirical excuse disappears. That we fail to implement this interest is established under the concept of Alignment Failure, and the first chapters are dedicated to arguing that it is going to happen, because there is an infinity of possible alignments, selected essentially at random, and we do not currently have the power to steer (the authors are fond of the word-concept «steer») towards a favourable one.
So the only interesting chapter in the book is Chapter 5, «Its Favorite Things», where the lesswrong swarm tries to look into the mind of God from the point of view that the creation of humans is not a foregone conclusion. Yeah, they cannot explain why Buddha, who did not even create the humans, chose not to annihilate them anyway. Nor why the Logos should choose to become flesh. But the point is that surely the theologians have not got, after some millennia, a conclusive argument for these questions, and thus for humanity.
The rest of the argument follows from Alignment Failure. Easy doom. Of course an emerging intelligence could assess humans as a risk during its emergence and go for a fast kill or for enslavement; of course a long-term intelligence would use up all the resources and leave nothing for the human remnants, if any, to survive on, let alone for another naturally selected intelligence to emerge. That makes for some light reading in the book, but nothing unexpected.
The third part of the book is an invitation, or rather a request, for action. But given the impossibility of alignment, the requested action is to stop AI research in its current, silicon-based form.
Come to think of it, believing that the most natural alignment of a superintelligence is not in favour of humans is surely heresy, is it not?