Abstract: Practical Aspects of Artificial Intelligence – Tim Josling

We are at the beginning of an explosion in the everyday use of artificial intelligence. After decades of disappointment and frustration, artificial intelligence and machine learning applications are appearing everywhere. They are about to change our world in ways that are unpredictable and full of promise and risk. This talk focuses on what we have achieved to date, what is happening now, and the likely path of AI over the next couple of decades. It explains why the pace of past and future progress is surprisingly predictable. The talk includes a case study on the use of Machine Learning theory in the investment world.

About Tim Josling

Tim Josling studied Law, Anthropology, Philosophy and Mathematics before switching to Computer Science at the dawn of the computer era. He worked on implementing some of the first transactional systems in Australia, later worked on the first ATM networks, was the chief architect for one of the first Internet Banking applications in Australia, and designed an early message switching (“middleware”) application in the USA. During his career he specialised in making large scale applications reliable and fast, saving several major projects from being cancelled due to poor performance and excessive running costs. This led to an interest in the progress of computer hardware and in Moore’s Law, which states that the power of computers grows roughly 10-fold every 5 years. In his spare time he contributed to various open source projects such as the GNU Compiler Collection. After attending the first Singularity Summit in Australia, he decided to retire so he could devote himself full-time to researching Artificial Intelligence, the Technological Singularity and Trans-humanism. He is currently working on applying AI techniques to financial and investment applications.


Marcus Hutter – Universal Artificial Intelligence

Universal Artificial Intelligence

Last year I did a series of interviews with Marcus Hutter while he was down in Melbourne for the Singularity Summit Australia 2012.

Marcus will also be speaking at the [highlight]Science, Technology & the Future conference on Nov 30 – Dec 1 2013 in Melbourne, Australia.[/highlight]

Hutter uses Solomonoff’s inductive inference as a mathematical formalization of Occam’s razor. Hutter adds to this formalization the expected value of an action: shorter (Kolmogorov complexity) computable theories have more weight when calculating the expected value of an action across all computable theories which perfectly describe previous observations.

At any time, given the limited observation sequence so far, what is the Bayes-optimal way of selecting the next action? Hutter proved that the answer is to use Solomonoff’s universal prior to predict the probability of each possible future, and execute the first action of the best policy (a policy is any program that will output all the next actions and input all the next perceptions up to the horizon). A policy is the best if, on a weighted average of all the possible futures, it will maximize the predicted reward up to the horizon. He called this universal algorithm AIXI.
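Hutter’s papers condense this into a single expression. Roughly (a sketch of the AIXI definition, not a full statement: here q ranges over environment programs run on a universal Turing machine U, ℓ(q) is the length of q, the aᵢ are actions, the oᵢrᵢ are perception–reward pairs, and m is the horizon):

```latex
a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \;\cdots\; \max_{a_m} \sum_{o_m r_m}
\big( r_k + \cdots + r_m \big)
\sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

The inner sum is the Solomonoff-style prior: every program q that reproduces the history so far contributes weight 2^(−ℓ(q)), so shorter (lower Kolmogorov complexity) explanations dominate the expected-reward calculation.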

Below is the transcription of the part of the interview series where Marcus talks about intelligence, Bounded Rationality, and AIXI.

What is Intelligence?

[dropcap]I[/dropcap]ntelligence is a very difficult concept (maybe that is why many people try to avoid defining it, or consider narrower alternatives). I’ve worked on this question for many, many years now. We went through the literature (the psychology literature, the philosophy literature, the AI literature) and looked at the definitions that individuals, researchers, and groups have come up with; they are very diverse. But there seems to be one recurrent theme, and if you want to put it in one sentence, then you could define intelligence as:
“an agent’s ability to achieve goals in a wide range of environments”, or to succeed in a wide range of environments.
Now look at this sentence and ask, “wow, how can this single sentence capture the complexity of intelligence?” There are two answers to this question. First: many aspects of intelligence are emergent properties of this definition, like being able to learn: if I want to succeed or solve a problem I need to acquire new knowledge, so learning is an emergent phenomenon of this definition.
And the second answer is: this is just a sentence consisting of a few words. What you really have to do, and that’s the hard part, is to transform it into meaningful equations and then study these equations. And that’s what I have done over the last 12 years.

Bounded Rationality

[dropcap]I[/dropcap]t is an interesting question whether resource bounds should be included in any definition of intelligence, and the natural answer is of course they should. Well, there are several problems. The first is that nobody has ever come up with a reasonable theory of bounded rationality (people have tried), so it seems to be very hard. And this is not specific to AI or intelligence; it seems to be symptomatic in science. If you look at the various fields (physics, the crown discipline, for instance), theories have been developed: Newtonian mechanics, General Relativity, Quantum Field Theory, the Standard Model of Particle Physics. They are more and more precise, but they get less and less computable, and having a computable theory is not a guiding principle in developing these theories. Of course at some point you have to test these theories and you want to do something with them, and then you need a computable theory (and you have to approximate them or do something about it); this is a very difficult issue. But building computational resources into the fundamental theories is not how things work, at least in physics, and the same holds if you look at other disciplines.
You design theories so that they describe your phenomenon as well as possible, and the computational aspect is secondary. Of course if a theory is incomputable and you can’t do anything with it, you have to come up with another theory, but this always comes second. Only in computer science (and this comes naturally) do researchers think first about how to design an efficient algorithm to solve the problem, and since AI traditionally sits in the computer science department, the mainstream thought is “how can I build a resource-bounded artificially intelligent system?”. And I agree that ultimately this is what we want. But the problem is so hard that we (or a large fraction of scientists) should take this approach: model the problem first, define the problem first, and once we are confident that we have solved this problem, then go to the second phase and try to approximate the theory, try to make a computational theory out of it. And then there are many, many possibilities: you could still try to develop a resource-bounded theory of intelligence, which will be very, very hard if you want it to be principled, or you do some heuristics, or any of many other options. Or the short answer: maybe I am not smart enough to come up with a resource-bounded theory of intelligence, therefore I have only developed one without resource constraints (that would be the short answer).


[dropcap]O[/dropcap]k so now we have this informal definition that intelligence is an agent’s ability to succeed or achieve goals in a wide range of environments. The point is you can formalize this theory, and we have done that and it is called AIXI. Or: Universal AI is the general theory, and AIXI is the particular agent which acts optimally in this sense.
So that works as follows: it has a planning component, and it has a learning component. What the learning component does is this: think of a robot walking around in the environment. At the beginning it has little or no knowledge about the world, so it has to acquire data and knowledge of the world and then build its own model of the world, of how the world works. And it does that using very powerful general theories of how to learn a model from data, even in very complex scenarios. This theory is rooted in Kolmogorov complexity and algorithmic information theory; the basic idea is that you look for the simplest model which describes your data sufficiently well. And this agent or robot has to do this continuously: it gets new data and updates its model. So now the agent has this model; that is the learning part. Now it can use this model for predicting the future. And then it uses these predictions in order to make decisions, so the agent thinks: if I do this action, and then this action, this will happen, and this is good or bad (I’ll come to the good-or-bad part soon). And if I do this other action it is maybe better or worse. And then the “only” thing the agent has to do is consider all the potential future action sequences and take the one which is best according to the model it has learned, which is not perfect but which over time gets better and better. Finally you have to qualify what “best” means, and that is the utility part, or succeeding: the agent gets occasional rewards from a teacher, who could be a human, or the rewards could be built in (for instance, if the battery level is low that is bad, if it’s high that is good; if it finds a rock on Mars that is good, if it falls down a cliff that is bad). So we have these rewards, and the goal of the agent is to maximize its reward over its lifetime. That’s the planning part.
So first comes the learning part, then the prediction part, then the planning part, and then it gets to actions and the cycle continues.
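The learn/predict/plan/act cycle just described can be sketched in a few lines of Python. To be clear, this is a toy illustration under stated assumptions, not Hutter’s construction: a simple frequency table stands in for the compression-based world model, random rollouts stand in for the planning component, and `ToyAgent`, `reward_of`, and the battery example below are all hypothetical.

```python
import random
from collections import defaultdict

class ToyAgent:
    """A toy stand-in for the learn/predict/plan/act cycle.

    The real AIXI weighs all computable environment models by
    simplicity; here a frequency table over (percept, action) pairs
    plays the role of the learned world model, and random rollouts
    play the role of planning."""

    def __init__(self, actions, horizon=5, rollouts=200):
        self.actions = actions
        self.horizon = horizon
        self.rollouts = rollouts
        # model[(percept, action)] -> {next_percept: count}
        self.model = defaultdict(lambda: defaultdict(int))

    def learn(self, percept, action, next_percept):
        """Update the world model from one observed transition."""
        self.model[(percept, action)][next_percept] += 1

    def predict(self, percept, action):
        """Sample a next percept from the learned transition counts."""
        counts = self.model[(percept, action)]
        if not counts:
            return None  # unknown transition: the model says nothing yet
        outcomes = list(counts)
        weights = [counts[o] for o in outcomes]
        return random.choices(outcomes, weights=weights)[0]

    def plan(self, percept, reward_of):
        """Estimate each first action's value by Monte Carlo rollouts
        through the learned model, then pick the best one."""
        best_action, best_value = None, float("-inf")
        for first in self.actions:
            total = 0.0
            for _ in range(self.rollouts):
                p, a, ret = percept, first, 0.0
                for _ in range(self.horizon):
                    nxt = self.predict(p, a)
                    if nxt is None:
                        break
                    ret += reward_of(nxt)
                    p, a = nxt, random.choice(self.actions)
                total += ret
            value = total / self.rollouts
            if value > best_value:
                best_action, best_value = first, value
        return best_action
```

After a few observed transitions of a battery-charging toy world (“charge” from a low battery leads to high, which earns reward), `plan` reliably selects “charge”; everything beyond the reward signal is figured out from the interaction data, which is the point of the paragraph above, however crudely rendered.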
So this theory, the AIXI agent, is mathematically rigorously well defined. It is essentially unique, and you can prove amazing properties of this agent; in a certain sense you can prove that it is the most intelligent system possible. I am translating the mathematical theorems into words, which is a little tricky, but that’s the essence. The downside is that it is incomputable. You asked before about resource-bounded intelligence: AIXI needs infinite computational resources, and in order to do something with it you need to approximate it, which we have also done in recent years. At the moment it is at the toy stage, so it can play PacMan, Tic-Tac-Toe, some simple forms of Poker, and some other games. The point is not that it is able to play PacMan or Tic-Tac-Toe (they are not hard); the point is that the agent has no knowledge about these games. It starts really blank, and just by interacting with the environment (it does not even know the rules of the game), by interacting with this Poker environment or PacMan environment, it figures out what is going on and learns how to behave well.
The really cool thing, and the difference from many other projects (there is Deep Blue, which plays chess better than the Grand Masters, but it was a system specifically designed to play chess, and it can’t play Go), is that this system is not tailored to any particular application. If you interface it with any problem (in theory it can be any problem: chess, solving a scientific problem) it will learn to do it very well and indeed optimally. The approximations we have at the moment are, of course, very limited, but if you look at these approximations, they use standard compressors for the model-learning part; there is nothing about PacMan in these data compressors, they are standard data compressors. For the planning part we use standard Monte Carlo (random search), which has nothing to do with a particular problem or game, and this approximation is already able to learn these various games by itself. There is no PacMan knowledge built in. The only thing you have to do, of course, is to interface the game with the agent. For PacMan you have these pixels in a 15×15 grid, and each square is a wall, is free, is food, or holds a ghost, and this piece of information you give the agent. Then it gets negative reward if it gets eaten by a ghost and positive reward if it eats a pellet, and that’s it; the goal of the agent is to maximize reward, and everything else is figured out by itself.

Video Interviews

For more video interviews, please subscribe to Adam Ford’s YouTube Channel.

YouTube Playlist of the Interview Series with Marcus Hutter:

At Singularity Summit Australia 2012 – “Can Intelligence Explode?”


Into the Wild Blue Yonder with Tim van Gelder

Into the Wild Blue Yonder – Tim van Gelder (who is speaking at the conference this year) – originally posted at H+ Magazine.
[dropcap]I[/dropcap] recently did a [highlight]series of interviews with Tim van Gelder[/highlight] on Intelligence Amplification, Artificial Intelligence, Argument Mapping, and Douglas Engelbart’s contributions to computing, user interface design, and collective wisdom.
Below the video interview is the article [highlight]Into the Deep Blue Yonder[/highlight].

Tim van Gelder was a founder of Austhink Software, an Australian software development company, and is the Managing Director of Austhink Consulting. He was born in Australia, educated at the University of Melbourne (BA, 1984), the University of Pittsburgh (PhD, 1989), and held academic positions at Indiana University and the Australian National University before returning to Melbourne as an Australian Research Council QEII Research Fellow. In 1998, he transitioned to part-time academic work allowing him to pursue private training and consulting, and in 2005 began working full-time at Austhink Software. In 2009 he transitioned to Managing Director of Austhink Consulting.

Here is one section of the series of interviews:
[youtube url=””] [heading]

Into the Deep Blue Yonder

[/heading] [note]The original article appeared in the late 90s – but it reads very well – and reflects much of Tim van Gelder’s current thinking on AI. A slightly revised version appeared in Quadrant, Xmas 1997. The video interview above covers some similar topics to the article below.[/note] [heading]



Thousands of times every day, humans pit their wits against The Machine. On almost every occasion, they lose. Arcade games, bridge programs, pocket chess machines: the phenomenon is so familiar we no longer notice it. We have grown quite accustomed to being outclassed by electronic gadgets in many activities we find intellectually demanding.

In New York earlier this year, a 34-year-old Azerbaijani man sat down to a six-game match against a chess machine. This event, however, galvanised world attention. Chess enthusiasts followed every move by satellite TV or the Internet. Newspaper headlines announced the score to millions more. Pundits the world over pontificated on the significance of the occasion.

Why the interest in this match? The Azerbaijani was Garry Kasparov, the reigning world chess champion, widely regarded as the greatest player in the history of the game. Kasparov is so good that very few players in the world today can even give him a serious game. To keep his form up, he likes to take on entire national teams in “clock simultaneous” matches. In these matches, every player, including Kasparov, has at most 2.5 hours of “thinking time.”

On the other side of the board was the latest version of Deep Blue, IBM’s chess-playing computer. Deep Blue is the most powerful chess-playing device ever constructed. The match was billed as the ultimate confrontation of Mankind against The Machine. At stake was more than just Kasparov’s personal pride or IBM’s reputation in computer technology. At stake was more than just the title of best chess player in the known universe. At stake, apparently, was humanity’s self-image as uniquely or supremely intelligent, and hence as entitled to a central or at least special place in the cosmos. At stake also was humanity’s place on the ladder of power and authority. Machines with superhuman intelligence might eventually be able to enslave humans in relentless and efficient pursuit of their alien designs. We remain safe only as long as there are at least some white knights like Kasparov, humans still smarter than any machine.

Of course, the score is now a matter of historical record. Deep Blue won the match narrowly, 3.5 points to 2.5. Fortunately, humanity’s spin-doctors had already prepared a face-saving interpretation of the entire episode. Deep Blue, they countered, is a mechanistic idiot savant. Kasparov can shrug off his defeat, for the match was no more an interesting contest than pitting a pole-vaulter against a helicopter. Humanity can also breathe a collective sigh of relief and reassurance: we are still the smartest beings in the universe; we can still respect our unique intellectual capacities; we are not about to be subjugated by a new generation of ruthless machines.

These, then, are the two main interpretations of the Kasparov-Deep Blue clash. On one hand there are the alarmists, who see Deep Blue as the vanguard of an approaching army of superhuman intellects. On the other hand are the deflationists, who see Deep Blue as an overgrown and overhyped cash register. Both interpretations read the confrontation in the context of a world-historical competition between Mankind and The Machine. Alarmists see the match as a pivotal moment, one future historians will designate as the occasion upon which both pride of place and the balance of power were ceded to The Machine. Deflationists insist that The Machine is still stupid and Mankind is still safe.

In fact, both these interpretations are mistaken, or rather, misguided. Any interpretation of what may well be an epochal event is built on a foundation of factual and philosophical assumptions; if these are rotten, the edifice is inherently unstable. The situation is even worse when key structural members are fears and fantasies rather than logical implications.

The real significance of the Kasparov defeat is at once more strange and more comforting than either of these simple stories. We are not being superseded by The Machine, but not because The Machine is still a long way behind. Rather, the very distinction between Mankind and The Machine is under pressure. Long before The Machine could be regarded as having overwhelmed us, it will have become us. Ultimately, the loser in this confrontation is not Mankind or The Machine; it is our conception of ourselves as essentially homo sapiens.



Early in Stanley Kubrick’s famous movie 2001: A Space Odyssey, the astronaut Dave plays and loses a game of chess against HAL, the spaceship’s intelligent onboard computer. This event, more than the fact that it can control the ship or converse in normal English, demonstrates HAL’s intellectual superiority. As the plot develops it becomes apparent that HAL is out of control, to the point where it has been killing off human astronauts. Its superior intelligence now makes it a highly dangerous opponent.

HAL is a fictional embodiment of the alarmist interpretation of the Kasparov-Deep Blue confrontation. HAL instantiates what alarmists fear Deep Blue might become: a superhuman, general purpose intelligence, self-interested and pitiless. Standing behind this nightmarish vision is a collection of traditional philosophical ideas. Intelligence is regarded as the operation and outcome of Reason, the ability to make inferences in accordance with the principles of Logic. Reason is a specifically human trait, in the sense that members of Homo Sapiens are uniquely or at least supremely rational. It is Reason, more than anything else, which grants humans a special place in the cosmos; it gives them not only the ability, but also the right and duty to organise the world to their own advantage. Chess is the definitive test of intelligence; the winner is always the one with the greatest ability to apply reason in pursuit of its goals. The best chess player is the most intelligent, and therefore the most rational, powerful and privileged, of all beings.

The letters “HAL” immediately precede the letters “IBM” in the alphabet. Some people believe this is no accident; Kubrick chose those letters in order to highlight the danger IBM and corporations like it pose to humanity. This, however, is a myth. “HAL” is derived from “Heuristically programmed ALgorithmic computer.” When Kubrick, who had assistance from IBM in making the movie, found out about the coincidence, he wanted to change the name and was only prevented from doing so by production costs.

Just as IBM would not wish to be linked with the homicidal HAL, so it has tried to dispel the alarmist interpretation of Deep Blue’s victory. If Mankind had just been humiliated by The Machine, IBM would have to bear responsibility. Being cast as Dr Frankenstein in the public imagination would hardly benefit their corporate image. For this reason IBM is at the forefront of deflationist counter-reactions to the Deep Blue victory. Despite having invested millions of dollars and dozens of expert-years in the project, they are quick to advertise Deep Blue’s limitations. Kasparov, they said, plays with insight, intuition, finesse, imagination. Deep Blue just cranks out billions of possibilities. According to the IBM counter-hype, the real winners in the Kasparov-Deep Blue confrontation are people like you and me. The RS/6000 SP computer driving Deep Blue will be used in traffic control systems, internet applications, and a host of other mundane conveniences.

Chess has usually been regarded as the most intellectually challenging game known to man. It would be surprising indeed if a machine could beat the greatest player in history, and yet be fundamentally stupid. That, one is tempted to say, does not compute. That, however, is the position IBM is taking, and one that was echoed recently by none other than Bill Gates.

Two main lines of thought are used to underpin the interpretation of Deep Blue as harmless idiot-savant. The first is the idea that Deep Blue’s move selection is carried out in an utterly mindless fashion. Whereas Kasparov actually thinks about his options, Deep Blue follows pre-ordained rules specifying vast quantities of simple calculations, none of which require the least bit of understanding. This difference is manifested in the number of possible move sequences the players consider before making their moves. Kasparov, like all human chess players, considers only a few dozen or at most a few hundred sequences. Deep Blue considers literally billions of alternatives in a few seconds.

But if good chess is a matter of selecting the best move, and Deep Blue can examine so many more possibilities, how is it that Kasparov is even in the running? According to this line of thought, intelligence is precisely what makes the difference. Intelligence is the magic ingredient which enables Kasparov to recognize the overall board situation, to zero in on relevant features, to attend only to the most plausible lines of play, to look far ahead in the game, to be creative and daring in his play, and to learn from his opponent’s responses. With none of these abilities, Deep Blue is condemned to witless search of all possibilities, no matter how unpromising. The fact that Deep Blue can beat Kasparov just shows that brute force can sometimes achieve what would otherwise require real thought.

The second line of support considers Deep Blue’s performance in domains other than chess. This argument can be traced all the way back to René Descartes. In his Discourse on Method, Descartes considered how one might distinguish a real person from a sophisticated automaton imitating a person. He proposed two tests. The first is that one should attempt to engage the candidate in conversation. A machine, he argued, would never be able to “arrange words differently to reply to the sense of all that is said in its presence, as even the most moronic man can do.”

The second test is to explore the range of skills the putative person exhibits. Machines can do certain human-like things exceedingly well; witness the animatronic marvels at a place like Disneyland. However, they can only do those things because they were specifically designed and constructed for the job. Their design precludes them from doing anything else. For example, we now have machines which are better than humans at shearing sheep, but don’t expect them to knit a woolly jumper or even make a cup of tea. Humans, by contrast, can do a very wide range of things at least tolerably well. That’s because they don’t rely on dedicated machinery; rather, they control general purpose hardware (hands etc.) by means of thought processes.

Descartes believed that the “universal instrument” of Reason is necessary in order to pass both these tests. It is because we can think about the meanings of words that we can hold conversations, and it is because we can think about our actions that we can do so many different kinds of things.

Deep Blue, of course, immediately fails Descartes’ tests. It cannot even play checkers, let alone walk the dog or hold a conversation. Deflationists conclude that Deep Blue has exactly zero genuine intelligence, even though it plays the best chess in the world. Indeed, the two lines of thought come together: it is because Deep Blue plays chess without really thinking that it can do nothing other than checkmate its opponents.



These deflationary arguments certainly undermine the simple alarmist view that Deep Blue is the first of a new generation of superhuman intellects poised to enslave the human race. They do not, however, establish that Deep Blue is a witless moron. More careful consideration of the nature of chess, and the machines which play it, supports the commonsense view that Deep Blue does indeed have at least some measure of intelligence.

Chess is what is known as a formal system. Every board position and every move is well-defined and unambiguous, as are the starting and finishing positions. Further, chess is completely self-contained; nothing outside the board has any relevance to the game. Playing good chess means making a sequence of moves ending in checkmate for the opponent. The hard part, of course, is picking the right move at any given time. The typical number of moves available from any given position is about 35. Whether a move is a good one depends on what the next move of the opponent might be, your response, and so forth. A good player can tell which of these possible sequences of moves and countermoves is advantageous, and hence which of the 35 moves to select.

All a chess machine needs to do, then, is to examine all the available move-countermove sequences, and select one ending in checkmate for the opponent. Unfortunately, this simple strategy is completely out of the question (at least, for any technology currently imaginable). The fundamental problem is that of combinatorial explosion. It is illustrated by the following puzzle. Imagine folding a normal sheet of paper in half. The remaining “pile” is twice as thick as the original sheet. Continue until you have folded it 100 times. How thick is the pile now? Most people estimate a few yards. In fact, the pile would stretch eight hundred thousand billion times the distance from the earth to the sun (give or take a few trillion miles).
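The paper-folding arithmetic checks out. Assuming a sheet thickness of 0.1 mm (the article does not give one; this is a typical figure) and a mean Earth–Sun distance of about 1.496 × 10¹¹ m:

```python
# Folding doubles the thickness each time, so after 100 folds
# the pile is 2**100 sheets thick.
sheet_m = 0.0001            # 0.1 mm, expressed in metres (assumed)
pile_m = sheet_m * 2**100   # thickness after 100 doublings
earth_sun_m = 1.496e11      # mean Earth-Sun distance in metres
ratio = pile_m / earth_sun_m
print(ratio)                # about 8.5e14: hundreds of thousands of billions
```

That ratio, roughly 8.5 × 10¹⁴, is the “eight hundred thousand billion times the distance from the earth to the sun” quoted above.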

Combinatorial explosion affects chess just as dramatically. The number of possible move sequences increases exponentially with each “ply” (move), and before long exceeds such familiar measures of enormity as the number of particles in the universe or the number of seconds since the beginning of time. This prevents any conceivable machine from playing good chess simply by mindlessly searching the branching tree of move sequences.
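The chess version of the explosion is just as easy to check. Assuming roughly 35 legal moves per position (the figure quoted above) and the oft-cited estimate of about 10⁸⁰ particles in the observable universe:

```python
# Count how many plies it takes for 35**plies move sequences
# to exceed ~1e80, a common estimate of the number of particles
# in the observable universe.
plies = 0
sequences = 1
while sequences < 1e80:
    sequences *= 35
    plies += 1
print(plies)  # 52
```

After only 52 plies (26 moves by each side, well within an ordinary game) the number of move sequences already passes that figure.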

The real secret to good chess is not being able to consider vast quantities of move sequences (though that helps). Rather, the secret is being able to ignore the overwhelming majority of sequences, and focus attention on those relatively few which have some real promise. But how do you tell in advance which sequences to ignore? How do you prune from the search tree branches you haven’t even looked at?

The answer, basically, is that you use what computer scientists call “heuristics”: rules of thumb providing reliable, though not infallible, guides. For example, a handy rule in finding checkmates is to examine first those moves that permit the opponent the fewest replies. Heuristics are distillations of considerable experience with the domain. At one level, a computer must always be programmed to “blindly” follow algorithms telling it exactly what to do and how to do it. At another level, however, those algorithms can embody heuristics guiding the computer in producing sophisticated, even “thoughtful,” behaviour.
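The checkmate rule of thumb mentioned above can be sketched as a move-ordering heuristic. This is an illustrative fragment, not Deep Blue’s code: `find_mate`, `moves`, and `is_mate` are hypothetical hooks for whatever game representation is in use.

```python
def find_mate(position, depth, moves, is_mate):
    """Depth-limited search for a mating move, trying first the moves
    that permit the opponent the fewest replies.

    moves(p) returns the successor positions of p, and is_mate(p)
    tests whether p is checkmate; both are assumed callbacks."""
    if depth == 0:
        return None
    # Heuristic ordering: positions leaving fewer opponent replies first.
    for nxt in sorted(moves(position), key=lambda p: len(moves(p))):
        if is_mate(nxt):
            return nxt
        # A full search would recurse over the opponent's replies here.
    return None
```

On even a toy game graph, the ordering sends the search to the forcing line first, which is exactly how a rule of thumb prunes the tree without examining every branch.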

Deep Blue, like all chess computers, operates by means of heuristically-guided search. Its power results from two factors. On one hand, it is an enormously fast search engine. Its 256 specially-designed processors can consider almost a quarter of a billion moves every second; in a game it will examine trillions of possibilities before making a move. On the other hand, and even more importantly, its software embodies a vast amount of real chess knowledge encoded in the form of heuristics. The team of experts who spent years refining Deep Blue’s understanding of chess included an international grandmaster. Almost every match Kasparov has played in the last twenty years has been recorded; Deep Blue is intimately familiar with Kasparov’s game.

Therefore, the image of Deep Blue as a prodigiously powerful but essentially stupid “number cruncher” is seriously deficient. Deep Blue embodies a great deal of human-derived chess knowledge, and puts that knowledge to good use in choosing intelligently. Indeed, Deep Blue has to be that way; the problem of combinatorial explosion prevents any simple brute-force machine from playing good chess, at least for the foreseeable future.

An interesting consequence is that, as computers have reached the very top levels, their style of play has become more “human.” For example, “trappy” moves, ones that gently coax an opponent into an apparent position of strength but hold a sting many plies down the road, were once a human specialty. These days, with real chess knowledge guiding their search patterns, computers not only avoid traps, they set them themselves. Kasparov himself is no longer able to say, reliably, whether an opponent is human or machine just by looking at the moves. (HAL, by the way, played chess that was quite “human” in style. This was no coincidence; the game in the movie was transcribed from an obscure match played in Hamburg in 1913.)

Deep Blue, then, does have intelligence. It plays a mean game of chess, and does so by thinking about its moves. There are still, to be sure, some significant differences between Kasparov’s thought processes and those of Deep Blue. Both, however, are thinking, and the outcome is the same.



If this is right, Descartes’ tests cannot be regarded as decisive. There can be genuine intelligence even in the absence of conversation or a wide range of skills. However, Descartes was clearly onto something important. If Deep Blue is so smart, why is it restricted to chess? Why can’t it talk about the football?

The deep reason-one of the most important discoveries of cognitive science-is that there are in fact many kinds of intelligence: diverse domains in which intelligence can be achieved, and various ways to achieve it. Some theorists have distinguished as many as seven different categories of intelligence, but the most important distinction for current purposes is that between what we can call formal intelligence, on one hand, and common sense on the other.

Formal intelligence is that required for domains which, like chess, are formal systems. Such domains might be hugely complex, but they are fundamentally well-defined and self-contained. Common sense is intelligence in domains not satisfying these conditions. Here there is no simple way to specify what the options are, and no way to draw boundaries around what might be relevant. Conversing is the classic example. What do you say when someone says “How are you doing”? Well, that depends-on who said it, in what tone of voice, where they were, what time it was. Try writing a complete set of rules for just the second line of a perfectly ordinary conversation and you’ll find out just how much common sense ordinary people actually exhibit.

The difference between formal intelligence and common sense is illustrated by the contrast between formal logic and its informal counterpart. Formal logic is manipulation of symbolic structures in accordance with strict rules. At elementary levels it is a dull, even “mindless” activity (though still a difficult skill for many people to pick up); at advanced levels, it is quite creative. It has been relatively easy to program computers to perform in this domain, though the best logicians are currently still humans.

Informal logic, on the other hand, is a matter of determining when somebody is justified in making some assertion. Would further reductions in tariff barriers lead to further unemployment? A great deal of evidence can be brought to bear, but there are no algorithmic procedures for determining whether the conclusion follows. For centuries, philosophers harboured the misconception that formal and informal logic are, deep down, the same thing: that all informal reasoning is just a complicated version of predicate calculus. More recently it has become apparent that informal logic requires a great deal of “nous,” and there is no easy way to translate that into rule-governed symbol manipulation.

Formal intelligence and common sense are both varieties of intelligence; they are both a matter of figuring out what you should do to achieve your goals within a certain domain. However, they are very different, and they do not easily adapt to each other’s roles. On one hand, ordinary people have buckets of common sense (well, most of them, most of the time), but they are inept at chess, mathematics, formal logic, and the like. On the other hand, formal intelligence doesn’t automatically provide common sense. There is, of course, the stereotype of the absent-minded physics professor. More seriously, Deep Blue can’t do the weekly shopping, and there is no simple way to adapt its prodigious formal intelligence to that apparently elementary task.

Traditional artificial intelligence, the science and engineering of smart computers, has grappled with both kinds of intelligence. Its successes in formal domains have been matched by a notable lack of success at reproducing common sense. The standard approach has been to attempt to translate the informal domain into an approximately commensurate formal system. Unfortunately, this enterprise is at least extraordinarily difficult, and perhaps impossible. There are some research projects around the world grappling with the problem, but don’t hold your breath.

From this perspective, Deep Blue’s victory does signify something important about artificial intelligence: namely that, as one expert put it, the easy (formal) part is now almost over, and the real work is just beginning. Computers are reaching superiority in a kind of intelligence which is rather difficult for humans to achieve. However, they are barely at first base with regard to the kind of intelligence humans find entirely natural-negotiating their way around the everyday world.



Thus far, I have argued that neither the simple alarmist interpretation, nor the simple deflationist reaction, can be sustained. Deep Blue is not a superhuman intellect, but neither is it just a cash-register on steroids. It is an enormously sophisticated machine exhibiting a significant measure of intelligence in one formal domain, and none in all others. Until computer scientists can solve the far more difficult problem of common sense intelligence, machines will remain our intellectual inferiors and subject to our dominion.

Is this likely to happen, and if so, when? Some philosophers have claimed it will always be impossible for digital computers to exhibit any significant degree of common sense. Hubert Dreyfus of the University of California at Berkeley is the most important of this group. He has provided powerful arguments that common sense depends upon vast quantities of everyday knowledge and know-how which can never be fully articulated in a form usable by digital computers.

Such predictions, however, are inherently risky, for they depend on our current levels of understanding of the nature of the problem and the limits of technology. Meanwhile, many researchers are tackling various aspects of the problem and making what counts as, at the very least, piecemeal progress on the fringes. The most famous of these efforts is the “CYC” project pioneered by Doug Lenat. The goal here is to “upload” the entirety of human commonsense knowledge into a vast electronic encyclopedia ready for use by other programs. The CYC people claim to already have commercial applications up and running.

My own opinion is that researchers in artificial intelligence will, most likely, eventually succeed in solving the problem of commonsense intelligence. It will not be anytime soon. Cracking the chess nut took about four decades longer than originally predicted. In the meantime, we’ve come to understand that chess was the easy problem. Common sense may well take centuries. Alan Turing, the father of artificial intelligence, predicted in 1950 that by the end of the century (that is, by around now) we would have machines able to converse at pretty convincing levels. No such luck. You can, if you like, interact over the internet with the best “conversation” machines in the world today. The experience is sure to impress upon you the difficulty of programming a computer with common sense. Nevertheless, progress is being made. The goal, genuine intelligence on tap, is so valuable that vast resources and ingenuity will be thrown at it over the next few hundred years. My money, for what it is worth, is on the side of the computer engineers.

In the case of chess, truly excellent levels of play were only achieved once scientists had developed sufficient understanding of how humans manage to play the game so well, and figured out how to transfer some of that understanding into the computer’s design. Deep Blue’s intelligence was thus largely a matter of human intelligence, abstracted out and reimplemented in digital hardware. The same will be true in the case of common sense. Constructing computers which hold conversations will only be possible once we understand much better what it is that an ordinary person knows, and how that knowledge is organised, accessed and updated. Once these problems in cognitive science have been solved, the computer scientists will face the challenge of building electronic instantiations of the same principles.

In other words, artificial intelligence succeeds in part through mimicry. It produces silicon simulacra of the basic principles underlying human intelligence. This is because the fundamental requirements of intelligent performance are universal; what varies are their implementations in different kinds of hardware. Evolution developed in humans a neurobiological implementation of the solution to the problem of common sense intelligence. Artificial intelligence will develop an alternative implementation of what is, at the relevant abstract level, the same solution.



Suppose this is correct. Suppose that in fifty years or so computer scientists have succeeded in producing, say, an automatic personal banker. You dial the bank on your videophone and are connected to a virtual “talking head,” a kind of supersmooth version of Max Headroom. You interact with this artificial persona just as you would with an ordinary human being. The conversation is quite intimate; your banker has a name, a personality, and knows quite a bit about you from the bank’s files and your previous interactions. As long as you don’t stray too far from the world of deposits, balances, and mortgages, the illusion that you are interacting with a flesh-and-blood human will be overwhelming.

Now for the critical question: is this personal banker human or machine? More generally, will artificial intelligence be producing artificial humans, or just machine intelligence? At one extreme there is the hard-line view that nothing can really be human unless it is homo sapiens, i.e., shares our own evolutionary ancestry and our biological incarnation. According to this position, no matter how sophisticated these systems become, they will always be mere machines, imitating but never instantiating human nature. At the other extreme there is the ultra-liberal view that membership of homo sapiens is at best an accident of history, and has no essential connection to one’s social and ethical status as human. It took many centuries, but in the West at least we finally arrived at the enlightened view that the borders of human kind have nothing to do with those of gender and skin colour. Some people now argue that we should extend these borders even further to include dolphins and other putative intelligentsia. The point is that recognition as “one of us,” with attendant rights and responsibilities, should depend not on arbitrary details of one’s history or embodiment but on one’s capacity to participate in human forms of life. Taken to its logical limit, this view would extend the privilege of human status even to programmed computers.

The philosophical choice between hard-line biologism and a more catholic liberalism is not an easy one, and I don’t intend to adjudicate the matter here. The point of interest is that artificially intelligent machines participating in human forms of life are the kind of case that puts pressure on the seemingly simple distinction between Mankind and The Machine. For most of the industrial age, the distinction was obvious enough: people were flesh and blood, born of woman, rational and emotional, social and spiritual. Machines were metal and electricity, born of the workshop, cold and insensitive. The utterly alien character of traditional machines made it easy to see the relationship between Man and Machine as one of opposition and perhaps competition. This attitude of “them against us” is still with us even in the age of information technology. Thus the Kasparov-Deep Blue match is cast as a critical episode in a kind of cosmic struggle to the death between humanity and the emerging machine.

By the time computers have been programmed with common sense, the contrast between Mankind and Machine will have become blurred, if not entirely overthrown. Computers which match our everyday forms of intelligence, and achieve this precisely because they recapitulate the basic principles underlying our own intelligent behaviour, have become very much like us. It will not be easy, either psychologically or philosophically, to draw a rigid distinction between people and PCs. Of course, it will always be possible to doggedly maintain that human nature is essentially a matter of lineage or embodiment, and to distribute rights and privileges accordingly. As philosopher Robert Brandom remarked, “‘We’ is said in many ways.” There is an unavoidable element of arbitrariness in deciding that “we” will stop at the boundary of our species. Many will choose to draw the boundaries somewhat differently, and in the process revise the very concept of humanity.

I am suggesting that machines will never outperform humans in an intelligence contest. By the time any such confrontation could conceivably come about, the conceptual contrast between human and machine, upon which the apparent interest of the contest depends, will have been drastically revised. Computers with common sense will not be humans, in the ordinary sense of today. Neither, however, will they be just machines, in the ordinary sense of today. They will be a wholly new entrant onto the ontological stage, displacing forever the current constellation of concepts in terms of which we contemplate our place in the world. The irresistible onwards march of information technology will not produce machines superior to humans. Rather, it will overhaul our understanding of what we are and what machines are. It will replace a binary opposition with a rich spectrum of manifestations of intelligence, and a correspondingly rich range of ways of determining who or what counts as one of “us.”

Deep Blue’s victory over Kasparov was the first major public triumph of artificial, programmed intelligence over evolved biological intelligence. It was indeed an event of world-historical significance. Not, as the alarmist fears, because it signifies the arrival of intelligent machines as potential competitors to humanity. Rather, it is significant because it is the first major milestone in a long process of transformation of human self-understanding, and hence human being. If we see history in Hegelian terms, as a series of stages in the evolution of the spirit or self-consciousness, Deep Blue’s victory lies at the cusp of a new era. Our own mastery of technology, and our level of scientific self-understanding, are reaching the stage where we can recreate aspects of ourselves in non-biological form, and in the process dramatically transform our understanding of what we essentially are. As Kasparov himself put it:
“Maybe the highest triumph for the Creator is to see his creations re-create themselves.”
