Showing posts with label artificial intelligence.

22 November 2012

Wise worm

For more than twenty-five years, scientists have known the exact wiring diagram of the three hundred and two neurons in the C. elegans roundworm, but in at least half a dozen attempts nobody has yet succeeded in building a computer simulation that can accurately capture the complexities of the simple worm’s nervous system.
-- Gary Marcus.

Unlike Moore's Law for processors, understanding of how the brain actually works, of the computations and circuits that underlie neural function, is not doubling every eighteen to twenty-four months.

P.S. See also Worms do the wave to translate messages.

24 May 2012

Turing's Cathedral

Turing drew a parallel between intelligence and the “genetical or evolutionary search by which a combination of genes is looked for, the criteria being survival value. The remarkable success of this search confirms to some extent the idea that intellectual activity consists mainly of various kinds of search.” Evolutionary computation would lead to truly intelligent machines. “Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates a child's?” he asked. “Bit by bit one would be able to allow the machine to make more and more 'choices' or 'decisions.' One would eventually find it possible to programme it so as to make its behaviour the result of a comparatively small number of general principles. When these become sufficiently general, interference would no longer be necessary, and the machine would have 'grown up.'”...
...Organisms that evolve in a digital universe are going to be very different from us. To us, they will appear to be evolving ever faster, but to them, our evolution will appear to have been decelerating at their moment of creation – the way our universe appears to have suddenly begun to cool after the big bang. Ulam's speculations were correct. Our time is become the prototime for something else.
-- from Turing's Cathedral: The Origins of the Digital Universe by George Dyson (pages 262 and 302)


See also Infinite complexity from finite rules

29 February 2012

Animat

Though experiments probing the information structure of the human brain are still in their early stages, mathematical simulations have shown that integrated information can in fact be measured in other systems. Tononi and his colleagues devised a system so simple that its phi [a measure of integrated information] can be calculated — a simulated animal called an animat. Relying on sensors that detected the environment, actuators that allowed it to move and places to store data as it learned, this animat worked its way through a computer maze. The animat also possessed an ability that most living organisms take for granted: It could gradually evolve over 50,000 generations of maze running.

At the start, the animat had a hard time navigating. But around generation 14,000, it got good. Along with this performance boost, the animat’s phi, the amount of information successfully shuttled among its constituent parts, went up. Different bits learned to communicate. By generation 49,000, the animat whizzed through the maze with its high phi.
-- report, paper.
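
The experiment is easier to appreciate with a toy version in code. Below is a minimal sketch (my own construction, not the Tononi group's animat): it evolves a population of lookup-table agents through a corridor maze by mutation and selection, and fitness climbs across generations in the same qualitative way the report describes. It does not compute phi itself, which requires the system's full transition probability matrix and a search over partitions.

```python
import random

random.seed(1)

# Toy corridor maze: each cell has exactly one open direction,
# encoded 0 = left, 1 = forward, 2 = right.
MAZE_LEN = 30
maze = [random.choice([0, 1, 2]) for _ in range(MAZE_LEN)]

def sensors(cell):
    # One-hot reading from three wall sensors (left, forward, right);
    # 1 means that direction is open.
    return tuple(int(i == cell) for i in range(3))

def fitness(genome):
    # Distance travelled before walking into a wall.
    pos = 0
    for cell in maze:
        if genome[sensors(cell)] != cell:
            break
        pos += 1
    return pos

def random_genome():
    # A genome is just a lookup table from sensor state to action.
    return {sensors(c): random.randrange(3) for c in range(3)}

def mutate(genome):
    g = dict(genome)
    g[random.choice(list(g))] = random.randrange(3)
    return g

# Evolution: keep the ten fittest, refill the population with mutants.
population = [random_genome() for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    if generation % 20 == 0:
        print(f"generation {generation}: best fitness {fitness(population[0])}")
    elite = population[:10]
    population = elite + [mutate(random.choice(elite)) for _ in range(40)]
```

In the actual study the genome encoded a network of logic elements, the maze was far richer, and phi was measured on the evolved networks; the sketch is meant only to show the evolutionary loop behind the rising performance.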

1 January 2012

You, robot


2012 is the centenary of the birth of Alan Turing, the second world war code-breaker who dreamed up the test in 1950 while pondering the notion of a thinking machine, so expect a flurry of competitions in his honour. Bear in mind, though, that the Turing test is a poor gauge for today's AIs. For one thing, the test's demand that a program capture the nuances of human speech makes it too hard. At the same time, it is too narrow: with bots influencing the stock market, landing planes and poised to start driving cars, why focus only on linguistic smarts? One alternative is a suite of mini Turing tests each designed to evaluate machine intelligence in a specific arena. For example, a newly created visual Turing test assesses a bot's ability to understand the spatial relationships between objects in an image against that of a human. Others want to stop using humans as the benchmark. Using a universal, mathematical definition of intelligence, it could soon be possible to score people and computers on a scale untainted by human bias. Such universal tests should even be able to spot a bot that is far smarter than a human.
-- Paul Marks

30 August 2011

'I am not a robot. I am a unicorn'



'Humans,' notes Brian Christian, 'appear to be the only things anxious about what makes them unique.'

15 April 2011

'Flanked by beast and machine'

The brain, [Bronowski] understands, is not just an instrument for action. It is an instrument for preparation; it both drives the human hand and is driven by it; it is an instrument wired to learn, control speech, plan and make decisions.

...[Bronowski] reminds us that from the printed book comes "the democracy of the intellect" and that humans are primarily ethical creatures.
-- from Tim Radford's review of the reissued Ascent of Man.

In a 2006 article about the Turing Test, the Loebner Prize co-founder Robert Epstein writes, “One thing is certain: whereas the [human decoys] in the competition will never get any smarter, the computers will.” I agree with the latter, and couldn’t disagree more strongly with the former...

...No, I think that, while the first year that computers pass the Turing Test will certainly be a historic one, it will not mark the end of the story. Indeed, the next year’s Turing Test will truly be the one to watch—the one where we humans, knocked to the canvas, must pull ourselves up; the one where we learn how to be better friends, artists, teachers, parents, lovers; the one where we come back. More human than ever.
-- from Mind vs. Machine by Brian Christian

9 February 2010

A trait and an 'illusion'

Here are two points from Jaron Lanier:
People are one of a great many species on Earth that evolved with a 'switch' inside our beings that can turn us between being individuals and pack animals. And when we switch ourselves into a pack identity we really change, and you can feel it yourself sometimes when you're online and you get into a conflict with somebody over something silly that's hard to draw yourself out of, or if you find yourself in a group interaction when you're ridiculing somebody, or completely excluding and not communicating with some opposing group. These are all signs of the pack mentality within people. And what we see online is that we have designs that seem to be particularly good at pressing that pack identity button. Human history is more or less a sequence of tragedies brought about by the switch within us being turned to the group or pack mentality and then bad behaviour following from that. And if we have a technology that's good at turning the switch we should be very mindful of it.

I am a great believer that people are in control if they just have the thought that they could be...I don't think machines can become clever. I think all of the claims made for artificial intelligence are false. It's a game we play with ourselves in which we make ourselves stupid in order to make the machines appear to be clever. There are many examples of this. Perhaps the most dramatic was the bankers who were prepared to cede their own responsibility and think that algorithms could tell them about credit risks, causing a global financial disaster. But there's a boundless capacity for people, who are a very socialised species, to give the computer the benefit of the doubt and pretend the search engine actually does know what you want, and none of this is ever true. We don't understand how the brain works. We can barely write good software at all and we certainly can't write software that does what a brain does...The danger, though, is that so many technologists are seduced by the illusion [of AI], and so want to nurture a life form within the computer, that they make designs that erase humans in order to create the illusion that it's happening. So there's a very strong alignment between the software designs that I criticise and the tendency to want machines to appear to be coming alive. [1]
Lanier warns against taking matters discussed in his new manifesto out of context, so one should tread carefully, particularly if -- like me -- you haven't yet read it! With that in mind, and while waiting for a copy to arrive, two points:
1) The binary individual/pack animal may apply to a lot of human behaviour, but there's often more to it than that. [2]

2) The ideas that humans can be in control if they choose to be, and that AI is an illusion, are profoundly humanist, and challenge what looks like an emerging orthodoxy. They are to be taken very seriously. Even if Lanier is wrong and AI does emerge in the long run, it seems plausible that many will project its existence before it becomes real. There would be several motives for this, including vested political and financial interests, and a human tendency to detect agency where it does not exist. [3]

Footnotes:

[1] This is my rough transcript of some of the things Lanier said to Quentin Cooper in an edition of Material World broadcast on 4 Feb 2010.

[2] See, for example, Mary Midgley on Hobbes.

[3] See, for example, Pascal Boyer and Scott Atran. In The Chess Master and the Computer, Garry Kasparov quotes from Igor Aleksander's How to Build a Mind (2000):
By the mid-1990s the number of people with some experience of using computers was many orders of magnitude greater than in the 1960s. In the Kasparov defeat they recognized that here was a great triumph for programmers, but not one that may compete with the human intelligence that helps us to lead our lives.

15 December 2009

The living world

Peter Singer and Agata Sagan argue that the possession of emotions or consciousness should be a key concern when thinking about how to handle animals and increasingly sentient robots. [1]

But how far should the circle of concern extend? Should, for example, ecosystems -- the results of interactions between living and non-living systems -- have rights and not just instrumental value? In the Epic of Gilgamesh the whole forest is sacred.


Footnote

[1] But, as noted in many places including here, robots still have a very long way to go.



26 July 2009

The age of criminal and compassionate machines

The researchers...generally discounted the possibility of highly centralized superintelligences and the idea that intelligence might spring spontaneously from the Internet. But they agreed that robots that can kill autonomously are either already here or will be soon.

They focused particular attention on the specter that criminals could exploit artificial intelligence systems as soon as they were developed...

...Despite his concerns, [Eric] Horvitz [the conference organizer] said he was hopeful that artificial intelligence research would benefit humans, and perhaps even compensate for human failings. He recently demonstrated a voice-based system that he designed to ask patients about their symptoms and to respond with empathy. When a mother said her child was having diarrhea, the face on the screen said, “Oh no, sorry to hear that.”

A physician told him afterward that it was wonderful that the system responded to human emotion. “That’s a great idea,” Dr. Horvitz said he was told. “I have no time for that.”
-- from a NYT report about a conference organized at Asilomar by the Association for the Advancement of Artificial Intelligence.

P.S. 27 July: New Scientist report.

22 July 2009

'Continuous augmented awareness'

We are not entering the Anthropocene, the Eremozoic or even the Ecozoic but the Noöcene, says Jamais Cascio.

Image: rock slab from the Apollo 11 Cave, Namibia. Perhaps 25,000 years old.

P.S. 7 Aug: Andy Revkin gathers some comments.

9 July 2009

Mind the many-headed slime

Somehow, this single-celled organism [Physarum polycephalum] had memorised the pattern of events it was faced with and changed its behaviour to anticipate a future event. That's something we humans have trouble enough with, let alone a single-celled organism without a neuron to call its own.

... [Max] Di Ventra speculates that the viscosities of the sol and gel components of the slime mould make for a mechanical analogue of memristance. When the external temperature rises, the gel component starts to break down and become less viscous, creating new pathways through which the sol can flow and speeding up the cell's movement. A lowered temperature reverses that process, but how the initial state is regained depends on where the pathways were formed, and therefore on the cell's internal history.

In true memristive fashion, [Leon] Chua had anticipated the idea that memristors might have something to say about how biological organisms learn. While completing his first paper on memristors, he became fascinated by synapses - the gaps between nerve cells in higher organisms across which nerve impulses must pass. In particular, he noticed their complex electrical response to the ebb and flow of potassium and sodium ions across the membranes of each cell, which allow the synapses to alter their response according to the frequency and strength of signals. It looked maddeningly similar to the response a memristor would produce. "I realised then that synapses were memristors," he says. "The ion channel was the missing circuit element I was looking for, and it already existed in nature."
-- Slime mold to DARPA: Justin Mullins on the future of artificial intelligence.
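
Memristance itself is easy to see in a toy simulation. Here is a minimal sketch using the linear dopant-drift model (after Strukov et al.); it is my own illustration, with illustrative parameter values, and is not a model of the slime mould or of a synapse. The state variable w integrates the current that has flowed, so the resistance at any instant depends on the device's history, which is the property the passage draws on.

```python
import math

# Linear dopant-drift memristor (after Strukov et al. 2008).
# All parameter values are illustrative, not fitted to a real device.
R_ON, R_OFF = 100.0, 16_000.0  # resistance bounds, ohms
D = 1e-8                       # device thickness, metres
MU = 1e-14                     # dopant mobility, m^2 s^-1 V^-1
dt = 1e-4                      # time step, seconds

w = 0.5 * D                    # state: width of the doped region
for step in range(20_000):     # two seconds of a 1 Hz sinusoidal drive
    t = step * dt
    v = math.sin(2 * math.pi * t)
    m = R_ON * (w / D) + R_OFF * (1 - w / D)  # instantaneous resistance
    i = v / m
    w += MU * (R_ON / D) * i * dt             # state drifts with charge
    w = min(max(w, 0.0), D)                   # confine dopants to device
    if step % 2_500 == 0:
        print(f"t={t:.3f}s  v={v:+.2f}V  R={m:8.0f} ohm")
```

Sweeping the voltage up and down traces the pinched hysteresis loop that is the memristor's signature; in the slime-mould analogy, the gel's viscosity plays the role of w, integrating past flow and conditioning future movement.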

In The Social Amoebae: The Biology of Cellular Slime Molds, John Tyler Bonner concludes:
We can see the beginning of an era of enlightenment for slime molds...the day may come when we may hail Alan Turing, along with his other claims to fame, as the Robert MacArthur of developmental biology...[but] we still have a long -- and interesting -- way to go. And the reason we all started working on cellular slime molds is that they were supposed to be so simple!

9 April 2009

The expression of emotion in Man and...

In coming decades, [Minoru] Asada expects science will come up with a "robo species" that has learning abilities somewhere between those of a human and other primate species such as the chimpanzee.
-- from Japan child robot mimicks infant learning.

20 February 2009

Future minds

Some day, perhaps, my biological colleagues will be using [computers] to simulate many processes including the chemical complexities within living cells, how combinations of genes encode the intricate chemistry of a cell, and the morphology of limbs and eyes. Perhaps they will be able to simulate the conditions that led to the first life, and even other forms of life that could, in principle, exist.
So wrote Martin Rees in a recent article [1]. But Rees, who is giving a talk on 23 February about the world in 2050, thinks that there is a long way to go before "real machine intelligence" is achieved. [2] I know of at least two grounds on which people question this. The first asks you to consider a continuum which extends far lower and higher than the lower and upper bounds of human intelligence, which for convenience we can call 'village idiot' and 'Einstein'. Machine intelligence (according to this argument) may at present be far below 'village idiot' and inferior, even, to an earthworm in important respects; but it is, or is soon likely to be, increasing fast, and by the time it approaches the lowest level of human intelligence it is likely to be travelling so fast that it will pass both the lower and upper bounds of human intelligence in a very short time. Further (the argument continues), we cannot say with confidence that this will not happen within a few decades. [3]

The second argument is that Rees is looking at the wrong thing. Very roughly speaking, the intelligence that is changing is that of extended mind or minds in which individual human brains are only a part. [4] New technologies are changing the nature of cognition and experience in profound and significant ways. The Minority Report-type technology of the kind shown below could be just a start.



Footnotes

[1] Mathematics: The only true universal language

[2] The example Rees gives in support of this assertion -- that a supercomputer may be able to beat a grandmaster at chess but cannot recognise and manipulate the pieces on a chess board as well as a five-year-old human -- does not help build a strong case. See, for example, this from Conscious Entities.

[3] This is my very crude account of an argument outlined by Eliezer Yudkowsky at a conference on global catastrophic risks. The hope is for Gandhian AI. An intelligence without compassion could be like the psychopath mentioned by Daniel Goleman here. [Intelligence may be more than one thing in more than one dimension, of course.]

[4] See, e.g., A new kind of mind and Andy Clark's Supersizing the Mind, alluded to at Out of Mind.

Baby Southern Keeled Octopus. Photo credit: John Lewis

27 January 2009

'Writings about Friendly AI'

Joshua Fox has sought out references on the risks and moral issues associated with recursively self-improving intelligence.

8 January 2009

'A new kind of mind'

...When this emerging [artificial intelligence] arrives it won't even be recognized as intelligence at first. Its very ubiquity will hide it. We'll use its growing smartness for all kinds of humdrum chores, including scientific measurements and modeling, but because the smartness lives on thin bits of code spread across the globe in windowless boring warehouses, and it lacks a unified body, it will be faceless. You can reach this distributed intelligence in a million ways, through any digital screen anywhere on earth, so it will be hard to say where it is. And because this synthetic intelligence is a combination of human intelligence (all past human learning, all current humans online) and the coveted zip of fast alien digital memory, it will be difficult to pinpoint what it is as well. Is it our memory, or a consensual agreement? Are we searching it, or is it searching us?...
-- Kevin Kelly.

Not too far away, Nick Bostrom speculates about 'superintelligence' while John Tooby and Leda Cosmides believe in 'rapid and sustained progress in understanding natural minds':
Humanity will continue to be blind slaves to the programs that evolution has built into our brains until we drag them into the light. Ordinarily, we only inhabit the versions of reality they spontaneously construct for us — the surfaces of things. Because we are unaware we are in a theater, with our roles and our lines largely written for us by our mental programs, we are credulously swept up in these plays (such as the genocidal drama of us versus them). Endless chain reactions among these programs leave us the victims of history — embedded in war and oppression, enveloped in mass delusions and cultural epidemics, mired in endless negative sum conflict.
Mahzarin Banaji looks, merely, for understanding.

17 November 2008

Is it alive?


Nick Carr comments on a proposal for "whole brain emulation" that imagines a software model "so faithful to the original that, when run on appropriate hardware, it will behave in essentially the same way as the original brain." (Hat tip AS)

Carr is sceptical that the modeling could ever truly mirror humanity in part because it does not take account of "free will". I don't want to comment on that here, but the debate puts me in mind of another, possibly related issue: the (alleged) failure of global earth system modeling to adequately reflect the role played by complex adaptive living systems.

P.S. On brain simulations, it is claimed that at least one has already been done -- for half a mouse in 2007 (Towards Real-Time, Mouse-Scale Cortical Simulations by James Frye, Rajagopal Ananthanarayanan, and Dharmendra S. Modha).

P.S. 21 Nov: IBM to build brain-like computers

23 October 2008

'Packs of robots will hunt down uncooperative humans'

It sounds like science fiction, but it's not:
What we have here are the beginnings of something designed to enable robots to hunt down humans like a pack of dogs. Once the software is perfected we can reasonably anticipate that they will become autonomous and become armed.

We can also expect such systems to be equipped with human detection and tracking devices including sensors which detect human breath and the radio waves associated with a human heart beat. These are technologies already developed.

P.S. In related news, SIAI reports on a meeting on Metrics for Human-Level AI sponsored by the U.S. Office of Naval Research.

8 September 2008

Ghosts and shadows

Granta 102: The New Nature Writing contains much that is very fine. Pathologies by Kathleen Jamie, for example, is close to as good as they come [1]. Robert Macfarlane's Ghost Species is also outstanding [2], but cracks for a moment here:
Historically, the idea of ghosts has been confined to non-human kingdoms. But sitting in Eric's kitchen that day, it seemed clear that there were also human ghosts: types of place-faithful people who had been out-evolved by their environments - and whose future disappearance was almost assured.
'Out-evolved' is a shard of social Darwinism, probably accidental and certainly unfitting of an author who surely knows that the processes that have driven smallholders out of the fens have nothing to do with natural selection, and everything to do with economics, politics and history [3]. The displacement (and in many other cases slaughter) of large numbers of humans by other humans (who may or may not have altered the environment on a large scale) is not natural selection.


If, however, humans were completely displaced, slaughtered wholesale or radically subjugated by another life form (or post 'life' form) then we might be talking (or rather, not talking).

The prospect of such destruction by a superior intelligence or more resourceful predator looks like science fiction to some (see, for example, the dreams reviewed here) but not to all (including, perhaps, some who were here). Eliezer Yudkowsky of the Singularity Institute for Artificial Intelligence argues that scientists need to work pretty hard to develop Gandhian, 'friendly AI'. [4]

How many humans would a really smart (compassionate) being want around? About six million, as there (probably) were at the end of the Paleolithic? About five hundred thousand, if the abundance of humans were linked to their size and fitted on the curve from mouse to whale? [5] How would these humans be kept under control?


Footnotes

1. Pathologies is not available online, and this is too bad. Quoting a snippet out of context doesn't work well: thought and feeling mobilise and develop through the full arc of the essay. Find and read the whole thing if you can! Among the essays that are available online and are worth reading (besides Macfarlane's) is Second Nature by Jonathan Raban, although he skates over what may prove to be the most significant and lasting Euro-American legacy in the Pacific Northwest: the radioactive waste of the Hanford complex [P.S. 11 Sep: on which see No More Bomb-Making, but Work Aplenty].

2. See this review of The Wild Places. For an earlier example of the use of the phrase "ghost species" in non-technical writing see, e.g., this by Scott Weidensaul.

3. If history (the first time in tragedies such as "racial hygiene") repeats as farce, then Steve Jones may nail it with "evolution is to the social sciences as statues are to birds: a convenient platform on which to deposit badly digested ideas" (Darwin's Ghost). That said, Jones may not be completely immune from the virus of social Darwinism himself. See his comments on the "iron rule of greed", which I criticise in a review of his book Coral. (For a really good book on coral reefs see Veron.)

4. See also, e.g., Technology That Outthinks Us: A Partner or a Master?

5. Darwin's Ghost by Steve Jones, page 313.

Images

Top: Bagged thylacine, 1869.

Middle: Truganini and the other last Tasmanian Aborigines, 1860s.

Bottom: Petroglyph of a thylacine from Murujuga, Western Australia. The animal has been extinct on the Australian mainland for several thousand years.