20 February 2009

Future minds

Some day, perhaps, my biological colleagues will be using [computers] to simulate many processes including the chemical complexities within living cells, how combinations of genes encode the intricate chemistry of a cell, and the morphology of limbs and eyes. Perhaps they will be able to simulate the conditions that led to the first life, and even other forms of life that could, in principle, exist.
So wrote Martin Rees in a recent article [1]. But Rees, who is giving a talk on 23 February about the world in 2050, thinks that there is a long way to go before "real machine intelligence" is achieved. [2] I know of at least two grounds on which people question this. The first asks you to consider a continuum which extends far below and far above the bounds of human intelligence, which for convenience are 'village idiot' and 'Einstein'. Machine intelligence (according to this argument) may at present be far below 'village idiot' and inferior, even, to an earthworm in important respects; but it is, or is soon likely to be, increasing fast, and by the time it approaches the lowest level of human intelligence it is likely to be travelling so fast that it will pass both the lower and upper bounds of human intelligence in a very short time. Further (the argument continues), we cannot say with confidence that this will not happen within a few decades. [3]

The second argument is that Rees is looking at the wrong thing. Very roughly speaking, the intelligence that is changing is that of an extended mind, or minds, in which individual human brains are only a part. [4] New technologies are changing the nature of cognition and experience in profound ways. Minority Report-type technology of the kind shown below could be just a start.


[1] Mathematics: The only true universal language

[2] The example Rees gives in support of this assertion -- that a supercomputer may be able to beat a grandmaster at chess but cannot recognise and manipulate the pieces on a chessboard as well as a five-year-old human -- does not help build a strong case. See, for example, this from Conscious Entities.

[3] This is my very crude account of an argument outlined by Eliezer Yudkowsky at a conference on global catastrophic risks. The hope is for Gandhian AI. An intelligence without compassion could be like the psychopath mentioned by Daniel Goleman here. [Intelligence may be more than one thing in more than one dimension, of course.]

[4] See, e.g., A new kind of mind and Andy Clark's Supersizing the Mind, alluded to at Out of Mind.

Baby Southern Keeled Octopus. Photo credit John Lewis
