Friday, March 22, 2024
Is the Singularity nearer?
“Human-level” “understanding” is not merely a range from “content generation” to “reasoning” (as A.I. pioneers misconceive it). That is, human understanding is not merely cognitive, because intentionality and identification with preferred values (feeling) are integral to any human conception of interested action.
Modeling the human mind in a distributed fashion implies the open, emergent character of human ecologies which we commonly identify as our humanity, which is evolving in specifically human ways, i.e., relative to values which encourage better humanity generally. We must preserve that. No computational architecture could be interested in human values, except inasmuch as it is unable to behave contrary to essential human parameters, i.e., ultimate values embodied by humans across generations. That flourishing across generations, relative to the essential interests of being human, cannot be algorithmized.
Therefore, it’s vital that the human-AI interface not be “designed to have access to and rewrite its own code,” lest technophiles pretend that AGI “can introspect its own mind” in any humanly valuable sense.
Inasmuch as such design is feasible—a bootstrapping, Gödelian impossibility, perhaps—then there must be effective regulatory articulation which ensures that such designing is not available for cloud-connected coding.