9 February 2010

A trait and an 'illusion'

Here are two points from Jaron Lanier:
People are one of a great many species on Earth that evolved with a 'switch' inside our beings that can turn us between being individuals and pack animals. And when we switch ourselves into a pack identity we really change, and you can feel it yourself sometimes when you're online and you get into a conflict with somebody over something silly that's hard to draw yourself out of, or if you find yourself in a group interaction when you're ridiculing somebody, or completely excluding and not communicating with some opposing group. These are all signs of the pack mentality within people. And what we see online is that we have designs that seem to be particularly good at pressing that pack identity button. Human history is more or less a sequence of tragedies brought about by the switch within us being turned to the group or pack mentality and then bad behaviour following from that. And if we have a technology that's good at turning the switch, we should be very mindful of it.

I am a great believer that people are in control if they just have the thought that they could be...I don't think machines can become clever. I think all of the claims made for artificial intelligence are false. It's a game we play with ourselves in which we make ourselves stupid in order to make the machines appear to be clever. There are many examples of this. Perhaps the most dramatic was the bankers who were prepared to cede their own responsibility and think that algorithms could tell them about credit risks, causing a global financial disaster. But there's a boundless capacity for people, who are a very socialised species, to give the computer the benefit of the doubt and pretend the search engine actually does know what you want, and none of this is ever true. We don't understand how the brain works. We can barely write good software at all and we certainly can't write software that does what a brain does...The danger though is that so many technologists are seduced by the illusion [of AI], and want to nurture a life form within the computer, that they make designs that erase humans in order to create the illusion that it's happening. So there's a very strong alignment between the software designs that I criticize and the tendency to want machines to appear to be coming alive. [1]
Lanier warns against taking matters discussed in his new manifesto out of context, so one should tread carefully, particularly if -- like me -- you haven't yet read it! With that in mind, and while waiting for a copy to arrive, two points:
1) The binary individual/pack animal may apply to a lot of human behaviour, but there's often more to it than that. [2]

2) The ideas that humans can be in control if they choose and that AI is an illusion are profoundly humanist, and they challenge what looks like an emerging orthodoxy. They are to be taken very seriously. Even if Lanier is wrong and AI does emerge in the long run, it seems plausible that many will project its existence before it becomes real. There would be several motives for this, including vested political and financial interests and a human tendency to detect agency where it does not exist. [3]

Footnotes:

[1] This is my rough transcript of some of the things Lanier said to Quentin Cooper in an edition of Material World broadcast on 4 Feb 2010.

[2] See, for example, Mary Midgley on Hobbes.

[3] See, for example, Pascal Boyer and Scott Atran. In The Chess Master and the Computer, Garry Kasparov quotes from Igor Aleksander's How to Build a Mind (2000):
By the mid-1990s the number of people with some experience of using computers was many orders of magnitude greater than in the 1960s. In the Kasparov defeat they recognized that here was a great triumph for programmers, but not one that may compete with the human intelligence that helps us to lead our lives.
