The Igor Aleksander Interview

Saturday, 21 November, and off to sunny Hendon for the World Computer Go championships.

Go, sometimes spelled "Goe" (but not very often, presumably lest people confuse it with the Indian holiday resort), is a strategy game from the Far East, described, for some reason, as being like four chess games played simultaneously on the same board (see sidebar for rules and a fuller description). As such, by the end of a typical match, you end up with a rashy, speckled effect that looks as if it needs a shot of penicillin.

Anyhow, my reasons for being in Hendon (Hendon College, specifically) were twofold. First, a Taiwanese businessman, Ing Chang-ki, had bet over a million dollars that no computer program could ever beat a professional Go player. I therefore wanted to watch various computer programmers attempt it. All sorts of techies had brought their pet programs along for the weekend to try their luck against an international assortment of experienced Go players. The fact that Ing Chang-ki died two years ago, and therefore wasn't in much of a position to pay up even if the computers did win, was doing nothing to dampen the programmers' ardour. My second reason for being there was that I was supposed to observe, inwardly digest, and then talk about the theory of Go-playing computer programs with Professor Igor Aleksander, of Imperial College, London.

As regards the observing and inwardly digesting bit, all I can say is, I'm sure that, if you're heavily into it, Go must be quite exciting. Much as, say, a railway timetable must be exciting if you happen to have a thing about trains. But to the uninitiated, it can be rather like watching a game of dominoes, albeit without the adrenaline-rush. Whatever, how were the computers doing?

"Computers haven't done too well in the past, but I think I have a very good chance this year," said Mark Reiss, of King's College, London, one of the leading programmers in the field. Famous last words. Shortly thereafter, his pride and joy (and everyone else's, for that matter) got totally creamed by a couple of ten year old kids. How could this be so? I asked Igor Aleksander five days later. How come that Deep Blue can beat Kasparov at chess, but no-one can develop a decent Go-playing program? He considered my question carefully, and replied:

"What are you talking about? I've got nothing to do with it at all. Nothing. I don't even know what the game of Go is."

So I tried a different tack. Igor Aleksander, Professor of Neural Systems Engineering and Jeremy Isaacs lookalike, is one of the world's leading authorities on neural networks. These are computer systems that can think for themselves and, apparently, even exhibit such human traits as free will and, possibly, primitive emotion. According to Go aficionados, Go is so complicated that only a self-adaptive, neural network-based system has any real chance of winning at it. This is what I said to Professor Aleksander, albeit more circuitously.

"Really?" he replied.

"Yes ........ Boring, isn't it? Let's talk about you and your work instead."

"OK."

Igor Aleksander was born in Zagreb just over 60 years ago. A few months later, the German Army invaded. The family therefore upped and decamped to Italy. With hindsight, this wasn't the wisest of moves. No sooner had they unpacked than the German Army invaded Italy. The Aleksanders, no doubt by now thinking it might have been something they'd said, moved again. Fortunately, the Third Reich's attempts to further harass the Aleksanders were thwarted by the Allies, leaving the family free to move to the comparative safety of South Africa. It was here, at Witwatersrand University, Johannesburg, that Igor Aleksander picked up a South African accent (still going strong) and his first degree, in Electrical Engineering. "It was the creative aspect of engineering that I really found exciting - the science of effectively making something out of nothing. Through that process, I realised that, by making things, you realise how they actually work."

Then he came to England, where he's now been comfortably settled for 40 years. Like many in his field - Peter Cochrane at BT, for example, and Professor Kevin Warwick of Reading University - he worked first in the telecommunications industry. His first job was at Standard Telephones & Cables. However, the scent of the Groves of Academe proved too strong. In 1966, he went to Queen Mary College, London, to do a PhD in the design of computers for use in industry. "But I could see that the fairly classical design of computers wasn't all that exciting, so I started looking around for other forms of computing, which is where neural networks came in. Except, in those days, we didn't call them neural networks, because they were regarded as a lunatic fringe activity. Instead, we called them 'adaptive pattern recognisers'."

For the benefit of dullards such as myself, I asked him to explain the difference between a normal computer and an adaptive pattern recogniser.

"A conventional computer has lots of different arithmetic and logic units. Its storage unit is just one, great big addressable mass. It's rather like a filing cabinet that brings up whatever is stored at the address you specify. A programmer has to decide its every action. A neural net, on the other hand, learns a bit like the brain does. It captures things out of experience and avoids a programmer having to work out everything in advance. And instead of having one, big memory unit, it can have millions of little addressable stores, all interlinked. They work rather like the neurons in the brain."

That settled, we'll fast-forward to the early 80s, via a few lectureships, readerships, and professorships, to Brunel University and a neural network system by the name of WISARD (Wilkie, Stonham, and Aleksander's Recognition Device). The significance of WISARD was that, amongst other things, it could recognise human faces. Coincidentally, the day before I interviewed Professor Aleksander, Tomorrow's World had run a piece on a new computer system that could also do this. However, TW made out that it was cutting-edge technology.

"Rubbish", said the Professor, "we were doing that way back in 1982! Not only could WISARD recognise faces, it could tell you what sort of expression they had, too; whether, for example, they were smiling or frowning. We trained it simply by showing it examples of different faces registering different emotions, which it then stored to its memory. One of the earliest applications was to identify soccer hooligans."

"What? It could look at someone and say that his eyes were too close together and his forehead too low, so he must be a villain?"

"No, it compared input faces against a database of faces of known hooligans. WISARD also had commercial applications. We built a machine for recognising bank notes, and another that could sit on a production line and scan passing cakes to ensure all the cherries and the chocolate whorls had been put on. Basically, if you had the right training set, you could teach the system to recognise whatever you wanted it to."

WISARD was a thing of soldered circuits, commercial RAM, and a 16x16 matrix of binary pixels. Although it consisted of about a quarter of a million neurons, it remained essentially a bog-standard pattern-recogniser. Its present-day successor, MAGNUS (Multi Automaton General Neural Unit System - the name apparently came first, then they made up the acronym to fit it) runs on a conventional Windows 95 PC and consists of around a million neurons. Significantly, according to Professor Aleksander, MAGNUS thinks in much the same way as a human does. How?

"We put feedback loops on the neurons, so they're all interlinked and talking to one-another, such that each neuron 'knows' what every other one is doing. So when MAGNUS sees an object, it doesn't simply recognise a pattern, like WISARD, but the various states, the essence, or quality of a thing - the 'qualia' as philosophers would describe it. It sees a cup, and it has different processing areas to tell itself that this thing is a cup."

I'll convert that into language even I can understand. Basically, as far as we can tell without actually taking someone's head apart and peeking in, when humans look at an object - a banana, for example - different sets of neurons start sparking simultaneously. One set is dedicated to colour recognition. So this set looks at the banana and registers yellow. Another set of neurons is dedicated to shape recognition. This set registers bendy crescent. Yet another set is dedicated to estimating dimensions. Twelve inches (when straight), it says. It's the combination of qualia that allows us to identify and categorise objects. Together, yellow, bendy crescent, and 12 inches cause another set of neurons, dedicated to labelling, to fire. The brain adds them all up and concludes that it's looking at a banana. This is more or less how MAGNUS works, too. When it looks at something, it also uses different processing areas to identify the different qualia. Red, two inches, and round, for instance, allow it to recognise an object as being a tomato. Or beige, oval, and two inches is an egg. And so on.
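For the terminally curious, here's the banana business as a few lines of Python - my own cartoon of the idea, and emphatically not how MAGNUS is actually wired inside. Separate "processing areas" each report one quality, and a known combination of qualia fires the label:

```python
# Each "processing area" reports a single quality of the object.
def colour_area(obj):  return obj["colour"]
def shape_area(obj):   return obj["shape"]
def size_area(obj):    return obj["size"]

# The "labelling" neurons: a known combination of qualia fires a name.
labels = {
    ("yellow", "bendy crescent", "12 inches"): "banana",
    ("red",    "round",          "2 inches"):  "tomato",
    ("beige",  "oval",           "2 inches"):  "egg",
}

def look_at(obj):
    # Gather the reports from every area, then see which label fires.
    qualia = (colour_area(obj), shape_area(obj), size_area(obj))
    return labels.get(qualia, "no idea")

print(look_at({"colour": "yellow", "shape": "bendy crescent",
               "size": "12 inches"}))   # banana
```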

"This might not sound very impressive," said the Professor, "but it's an essential breakthrough. Philosophers have always said that the ability to recognise the qualia of something is the province solely of conscious beings. And here we have it running on a PC."

Another test of consciousness is the ability to visualise things in your head. To use your imagination, in other words. Professor Aleksander continued: "If I say to you: think of a banana, the chances are, you're thinking of a yellow banana. But if I say: now think of a blue banana with red spots, you can do that, too, no problem. So can MAGNUS. Because it already knows the requisite concepts of shape, colour, and label, it can 'visualise' things it's never actually seen."

To prove it, I'd recommend you download an evaluation copy of the MAGNUS software from http://www.sonnet.co.uk/nts/. When it's up and running, the program displays a number of windows. The main window shows what MAGNUS is actually looking at; the secondary windows show the different processing areas and how MAGNUS is interpreting the object's qualia. In other words, you can see it "thinking".

Which, at this early stage, is rather like watching an East 17 fan thinking: there's undoubtedly some awareness there, but actually quantifying it could be a major problem. Then again, with only a million neurons, MAGNUS doesn't even come close to the complexity of a human brain, which has around 10 billion. However, Professor Aleksander predicts that, by 2040, we'll have created machines that are as conscious as humans, albeit in the sense of having artificial, rather than biological, consciousness. Not only that, they'll have emotions as well. So the PC of the next millennium will be able to have a philosophical debate with you, and tell you where to get off if it thinks you're talking bullshit. But, other than as an academic exercise and a way to boost the research grant, what is the point of creating a conscious machine like this?

"It's more to do with having a better human/machine interface than creating a conscious machine for its own sake. We need to build something that can react appropriately according to some previous experience of a changing world. Or possibly even a world that it's never experienced. And what are practical applications? you're asking. People like BT would like a lot more consciousness in their devices. Computer operators, for instance, that can have meaningful dialogues with people who are after information. Or voice-controlled search engines for the Internet that can anticipate the sort of requests you're going to make, and maybe make suggestions, based on their own evaluation of all the information that's available."

And the point of giving it emotions?

"We've demonstrated with MAGNUS that we can artificially duplicate the way in which a human brain visualises and perceives. If we can also duplicate the way in which it feels emotions, then we may have a better understanding of how to treat human afflictions such as, say, depression or schizophrenia. Or, let's say we're building a robot to explore the surface of Mars. If we give it such rudimentary emotions as fear and pleasure, then it will react to potentially dangerous situations by getting scared and moving itself out of harm's way. Or if it sees something that's of great scientific value, it will experience excitement and so approach closer in order to study it."

Supposing it gets a little too excited and a little too paranoid? What of the Professor Kevin Warwick scenario: that by the year 2050, computers will be more intelligent than us and therefore will perceive us as either an irritant or a threat, or both? What are the chances of machine-induced Armageddon?

"No, no, no. Absolute nonsense. Kevin makes a fundamental mistake, because he attributes human consciousness to machines that will only have machine consciousness. The desires to destroy the world and wreak havoc are purely human attributes and ambitions. But machines, because they're not human, simply won't have them. It's like saying to a machine: 'I've got some kippers here, eat them.' The conscious machine will reply: 'I'm sorry - you're making a terrible mistake here. Humans like kippers, machines don't.' Similarly, the thought 'Unleash Armageddon!" is a thing that humans do, but machines don't. Can't."

So basically, yes, we can have a debate with the machines, and they may even call us stupid to our face. But they're not going to destroy us simply because they disagree with us. Any more than, for example, my Word for Windows spellchecker will try to zap me because I keep rejecting its suggestion that I spell "realise" as "realize" or "offence" as "offense". (Though I may eventually give Word for Windows a good kicking if it persists.)

According to Igor Aleksander, the future is one of harmony between humans and thinking, conscious machines. Partnership, maybe. But whether they'll ever be able to play Go seems to be a question that still defeats even the experts.

Box: The Rules of Go

Legend has it that the first game of Go was played in Tibet, some 3000 years ago, in order to settle a territorial dispute. The leaders of the two sides had originally intended having a pitched battle. However, someone reminded them that they were both Buddhists, and therefore likely to get re-incarnated as lavatory brushes should they go against their religion's pacifist teachings. Hence the board-game alternative.

In a nutshell, Go is played as follows. You start with an empty board marked with a 19x19 line grid. Each player has an unlimited supply of counters - or stones, as they're called - with one using black stones, the other, white.

Black plays first, and places his stone, not in a square itself, but on one of the lines' intersections. White then does likewise. Once played, the stones aren't moved. The idea is to use the stones to mark out territory by surrounding vacant areas of the board. You can try to scupper the opposing player's attempts at Lebensraum by placing your stone on the intersection immediately adjacent to his. The risk in doing this, however, is that he can then put a stone next to yours. If a stone - or group of stones - is surrounded on all sides, it's deemed to be captured and removed from the board. At the end, the players count up the number of vacant intersections within their own territory, plus the number of stones they've taken prisoner. The player with the highest total wins. Unlike chess, games of Go don't usually last very long. It just seems as if they do.
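If you're of a programming bent, the capture rule is easily sketched. Here's a toy Python liberty-counter of my own (no relation to any tournament software): a group's "liberties" are the vacant intersections touching it, and a group with none left comes off the board.

```python
# A toy board: '.' = empty, 'b' = black, 'w' = white.
board = ["....",
         ".wb.",
         ".wb.",
         "...."]

def neighbours(r, c):
    # The orthogonally adjacent intersections, staying on the board.
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= r + dr < len(board) and 0 <= c + dc < len(board[0]):
            yield r + dr, c + dc

def group_and_liberties(r, c):
    # Flood-fill out from one stone to find its whole group, noting
    # every vacant intersection (liberty) the group touches.
    colour, group, libs, todo = board[r][c], set(), set(), [(r, c)]
    while todo:
        p = todo.pop()
        if p in group:
            continue
        group.add(p)
        for nr, nc in neighbours(*p):
            if board[nr][nc] == ".":
                libs.add((nr, nc))
            elif board[nr][nc] == colour:
                todo.append((nr, nc))
    return group, libs

group, libs = group_and_liberties(1, 2)   # the black pair in column 2
print(len(libs))   # 4 liberties left - fill them all and it's captured
```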

For a far fuller explanation of the game than I can give here, plus diagrams, point your web browser at: http://www.britgo.demon.co.uk/intro/intro2.html

Why Are Conventional Computers No Good at Go?

The usual way of writing a program that plays board games is to use routines called "evaluation functions." These look at the positions of the various pieces on the board, and then calculate which player, at any point, has the upper hand. This is done by assigning a value to each piece and predicting what will happen x number of moves into the future.

All this is comparatively easy in chess. First, the different pieces have different values. A knight, for instance, is more valuable than a pawn. So if you capture someone's knight, you're doing him more damage than if his knight captures one of your pawns. Also, a typical chess position offers around 30 legal moves, only ten of which usually make any sense within the context of a specific game. As the average chess game is over in about 80 moves, if you're able to look, say, 15 or 20 moves ahead, which is what Deep Blue can do, you can gain a significant advantage.
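For the programmers in the audience, the standard recipe looks something like this. It's a bare-bones Python sketch of my own, assuming a Position object with legal_moves(), play() and pieces() methods that I've invented for the purpose; real chess engines add pruning and far cleverer evaluation, but the skeleton is the same: recurse x moves deep, then let the evaluation function total up the material.

```python
PIECE_VALUES = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def evaluate(position):
    # The evaluation function: sum of piece values, mine minus his,
    # so losing a knight (3) really does hurt more than a pawn (1).
    return sum(PIECE_VALUES[p.kind] * (1 if p.mine else -1)
               for p in position.pieces())

def minimax(position, depth, maximising):
    # Look `depth` moves ahead, assuming both sides play their best.
    if depth == 0 or position.game_over():
        return evaluate(position)
    scores = [minimax(position.play(move), depth - 1, not maximising)
              for move in position.legal_moves()]
    return max(scores) if maximising else min(scores)
```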

A game of Go, on the other hand, can consist of 200 moves, so to gain any significant advantage, you'd have to look maybe 50 or more moves ahead. Even then, it isn't a purely tactical affair, like chess. For instance, you can put stones down haphazardly that won't, initially, confer any strategic advantage whatsoever to anyone. If, indeed, they ever do. Also, the stones are all of equal value, and you can use as many as you need.
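Put numbers on that and you see the problem. These figures are my own rough guesses - say 30 plausible moves per turn in chess against a couple of hundred in Go - but the orders of magnitude speak for themselves:

```python
# Rough game-tree sizes: branching factor raised to the search depth.
chess = 30 ** 15    # ~30 moves per position, searched 15 moves deep
go = 250 ** 50      # ~250 moves per position, searched 50 moves deep
print(f"chess: {chess:.1e}")   # about 1.4e+22 positions
print(f"go:    {go:.1e}")      # about 7.9e+119 positions
```

No amount of silicon brute force bridges a gap like that.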

Basically, there are a lot more variables in Go, there are few textbook moves, and what might initially have seemed like a good move can turn out to be bad, depending on how the game progresses. So a conventional computer program is going to find it difficult to win. What you need is a program that "thinks on its feet" and is naturally intuitive. Which tends to suggest some sort of neural network-based system, as envisioned by the good Professor Aleksander.
