Turing 1950

In 1950, Turing published "Computing Machinery and Intelligence" Turing50. This seminal paper introduced numerous fundamental ideas of artificial intelligence and machine learning.

The structure of the paper

The paper contains 7 sections. Section 1 introduces the imitation game, now known as the Turing test. Section 2 discusses what makes the Turing test an interesting criterion; its specimen dialogue shows, for instance, that being reasonably good at chess is necessary to pass the Turing test, since the interrogator may well ask chess questions.

Sections 3, 4 and 5 (re)define computing machines, in terms of what is now known as the (universal) Turing machine.

Section 6 replies to 9 classical objections to the possibility of human-level AI, which we discuss further below.

Section 7 introduces a roadmap for designing AI. Amazingly, Turing argues for the need for machine learning, with a formidable argument based on Kolmogorov-Solomonoff complexity.

Turing's replies to objections to human-level AI

Section 6 of Turing's paper rejects 9 classical objections to human-level AI (or rather, to algorithms able to pass the Turing test). Below, we list these objections.

1. Theological objection

This argument is based on the special role given to humans by a divine creator, such as a soul given to humans only. Turing criticizes the arbitrary denial of souls to animals, as well as the past failures of theological arguments in science, citing the examples of Galileo and Copernicus.

2. 'Heads in the sand' objection

Turing argues that many arguments against human-level AI are motivated reasoning (see cognitive bias).

We like to believe that Man is in some subtle way superior to the rest of creation. It is best if he can be shown to be necessarily superior, for then there is no danger of him losing his commanding position. The popularity of the theological argument is clearly connected with this feeling. It is likely to be quite strong in intellectual people, since they value the power of thinking more highly than others, and are more inclined to base their belief in the superiority of Man on this power.

3. Mathematical objection

The mathematical objection is based on impossibility theorems like Gödel's incompleteness theorem or Turing's own proof of the undecidability of the halting problem (today, we could add Solomonoff's incompleteness theorem). Turing argues that the main issue with this objection is that it applies to humans too. If some statement is undecidable, it would be presumptuous for a human to claim to know for sure that it is true, even though it cannot be proved.

Although it is established that there are limitations to the powers of any particular machine, it has only been stated, without any sort of proof, that no such limitations apply to the human intellect. But I do not think this view can be dismissed quite so lightly.

Critics sometimes point out that, while some statements are undecidable within a theory, humans can prove their undecidability by "looking at the theory from the outside". True. But such meta-mathematics can also be done by a machine. In fact, Turing's theorem on the existence of a universal Turing machine means that a machine can very much study any other machine "from the outside". Including itself.
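
To make this last point concrete, here is a minimal Python sketch (our illustration, not Turing's construction) of a "universal" program: an interpreter that executes any Turing machine handed to it as data. Since the interpreter is itself just a program, it could in principle be fed a description of itself. The example machine append_one is purely hypothetical.

```python
def run(machine, tape, state="start", steps=1000):
    """Simulate a Turing machine given as {(state, symbol): (new_state, write, move)}."""
    cells = dict(enumerate(tape))  # sparse tape, blank symbol "_"
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")
        state, write, move = machine[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Hypothetical example machine: scan right over 1s, append one more 1, then halt.
append_one = {
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("halt", "1", "R"),
}

print(run(append_one, "111"))  # -> 1111
```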

4. Consciousness objection

The consciousness objection essentially says that while machines can sometimes do X, they do not know that they are doing X. This was later rephrased by Searle as the Chinese room thought experiment Searle80.

Turing replies that if one really wanted to reject the possibility that entities different from oneself can think, then one should also entertain the possibility that no human but oneself can think. Turing argues that such a solipsist view is not one that defenders of the consciousness objection would usually accept. Turing's reply seems akin to the computable moral philosophy argument by HoangElmhamdi19FR.

At least in the case of the Turing test, Turing argues, such objections are irrelevant.

I think that most of those who support the argument from consciousness could be persuaded to abandon it rather than be forced into the solipsist position. They will then probably be willing to accept (the Turing test).

5. Argument from various disabilities

This argument consists in claiming that machines will never be able to do X, for various values of X, usually chosen after the surprising observation that machines have successfully achieved Y. The argument is nicely captured by John McCarthy's (later) quip: "as soon as it works, no one calls it AI anymore".

Turing argues that such replies are mostly due to overfitting to past and present observations of machines. He adds that future machines will have much greater storage capacity, which will allow much more impressive capabilities.

A man has seen thousands of machines in his lifetime. From what he sees of them he draws a number of general conclusions (...) Naturally he concludes that (limitations of today's machines) are necessary properties of machines in general. Many of these limitations are associated with the very small storage capacity of most machines.

The criticism that a machine cannot have much diversity of behaviour is just a way of saying that it cannot have much storage capacity. Until fairly recently a storage capacity of even a thousand digits was very rare.

6a. Lady Lovelace's objection

Ada Lovelace is often regarded as the first programmer in history. She had remarkable insight into the nature of computation. Yet, in her seminal memoir describing Babbage's Analytical Engine, she wrote: "The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform" (her italics). In other words, machines can only do what we tell them to do.

Turing retorts by quoting Douglas Hartree, who raised the possibility of machine learning. Turing then refers to Section 7 of his paper, where this learning is further discussed.

6b. Variants of Lovelace's objection

A variant of Lovelace's objection is that machines cannot do something new, or cannot give rise to surprise. Turing argues that this is strongly contradicted by his own experience with machines.

Machines take me by surprise with great frequency. This is largely because I do not do sufficient calculation to decide what to expect them to do, or rather because, although I do a calculation, I do it in a hurried, slipshod fashion, taking risks.

More interestingly, Turing points out that we, and especially intellectuals, underestimate the difficulty of predicting the outcome of a long computation. This is sometimes known as the lack of logical omniscience.

The view that machines cannot give rise to surprises is due, I believe, to a fallacy to which philosophers and mathematicians are particularly subject. This is the assumption that as soon as a fact is presented to a mind all consequences of that fact spring into the mind simultaneously with it. It is a very useful assumption under many circumstances, but one too easily forgets that it is false. A natural consequence of doing so is that one then assumes that there is no virtue in the mere working out of consequences from data and general principles.

7. Argument from Continuity in the Nervous System

This argument rests on the continuity of the components of the nervous system, as opposed to the discreteness of the Turing machine. Turing retorts that this continuity does not seem to provide much of an advantage for passing the Turing test, and that discrete approximations are likely to be sufficient.

Back in Turing's days, this argument might have seemed compelling. But today, we are surrounded by multimedia content that feels perfectly continuous to us, even though it is actually discrete binary information. This arguably makes the argument much less convincing than it might have been in the past.

8. Argument from Informality of Behaviour

It is often claimed that human behavior is too complex to be encoded in a set of rules to be followed; yet computers can only follow rules.

To attempt to provide rules of conduct to cover every eventuality, even those arising from traffic lights, appears to be impossible. With all this I agree.

This quote is not surprising, given that Turing would go on to defend learning, as opposed to hand-written code, as the approach to creating human-level AI. But interestingly, Turing also points out that the "laws of behaviour" of a machine whose code is not known to us are in fact just as hard to formalize as those of humans.

I have set up on the Manchester computer a small programme using only 1000 units of storage, whereby the machine supplied with one sixteen figure number replies with another within two seconds. I would defy anyone to learn from these replies sufficient about the programme to be able to predict any replies to untried values.
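
Turing's Manchester programme is not reproduced in the paper, but a hypothetical stand-in (ours, not Turing's code) illustrates his point: even a few lines of code can be practically impossible to reverse-engineer from sample replies alone.

```python
# Hypothetical stand-in for Turing's Manchester programme (not the actual code):
# a tiny function whose input-output pairs reveal essentially nothing about its
# internal "laws of behaviour".
def reply(n: int) -> int:
    # Scramble a sixteen-figure number via modular exponentiation.
    return pow(n, 65537, 10**16 + 61)

print(reply(1234567890123456))  # a handful of such samples hardly constrains the rule
```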

9. The Argument from Extra-Sensory Perception

Weirdly, Turing takes seriously the argument based on alleged human capabilities for telepathy, clairvoyance, precognition and psycho-kinesis.

These disturbing phenomena seem to deny all our usual scientific ideas. How we should like to discredit them! Unfortunately the statistical evidence, at least for telepathy, is overwhelming.

It is not clear to Lê what "statistical evidence" Turing is referring to. Turing adds that "this argument is to my mind quite a strong one". However, today, the evidence for the existence and precision of extra-sensory perception seems limited and disputable, though not nonexistent RationallySpeaking53 S4A19fr.

Turing's learning machines

The most exciting section of Turing's 1950 paper is probably Section 7, where he introduces machine learning. In particular, he provides a wonderfully compelling argument for the need for machine learning to achieve human-level AI.

The argument for machine learning

The argument relies on what would now be called the Solomonoff-Kolmogorov complexity of the Turing test.

The problem is mainly one of programming (...) Estimates of the storage capacity of the brain vary from 10^10 to 10^15 binary digits. I incline to the lower values and believe that only a very small fraction is used for the higher types of thinking. (...) I should be surprised if more than 10^9 was required for satisfactory playing of the imitation game, at any rate against a blind man. (Note—The capacity of the Encyclopaedia Britannica, 11th edition, is 2×10^9.)

Note that we now estimate the brain's capacity to be closer to 10^14 bits, which suggests that Turing's figure should be increased by a factor of roughly 10^3 (amazingly, this also explains why Turing can be argued to have been slightly overconfident about the rate of progress towards human-level AI).

Turing's key argument is then that the code of a human-level AI would be nearly impossible for humans to write by hand.

At my present rate of working I produce about a thousand digits of programme a day, so that about sixty workers, working steadily through the fifty years might accomplish the job, if nothing went into the waste-paper basket. Some more expeditious method seems desirable.
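
Turing's arithmetic here checks out; a quick back-of-the-envelope computation (ours, not in the paper) recovers the roughly 10^9 digits of programme he deems sufficient.

```python
# Back-of-the-envelope check of Turing's estimate (our arithmetic, not Turing's):
digits_per_day = 1_000   # one programmer's daily output, per Turing
workers = 60
years = 50
days_per_year = 365      # "working steadily", i.e. no days off

total_digits = digits_per_day * workers * years * days_per_year
print(f"{total_digits:.1e}")  # ~1.1e9, of the order of the 10^9 digits required
```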

This "more expeditious method" is evidently machine learning.

Components of machine learning

In the process of trying to imitate an adult human mind we are bound to think a good deal about the process which has brought it to the state that it is in. We may notice three components,

(a) The initial state of the mind, say at birth,

(b) The education to which it has been subjected,

(c) Other experience, not to be described as education, to which it has been subjected.

Amazingly, Turing introduces here three key components of machine learning, which would now be called (a) the learning algorithm, (b) supervised learning and (c) unsupervised learning. And crucially, Turing argues that the learning algorithm may be simple enough to be written by humans.

Presumably the child-brain is something like a note-book as one buys it from the stationer's. Rather little mechanism, and lots of blank sheets (...) Our hope is that there is so little mechanism in the child-brain that something like it can be easily programmed.
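
As a purely illustrative reading of this decomposition (the toy task, data and update rule below are our own, not Turing's), the three components might be sketched as follows.

```python
# (a) the initial state of the "child-machine", (b) education = supervised
# lessons, (c) other experience = unlabelled observations. All toy choices.

# (c) Other experience: unlabelled observations, used only to centre inputs.
experience = [2.0, 4.0, 6.0, 8.0]
mean = sum(experience) / len(experience)

# (a) Initial state: "little mechanism, lots of blank sheets" -- a single
# decision threshold, started at an arbitrary value.
threshold = 3.0

# (b) Education: supervised (value, label) lessons nudge the threshold.
lessons = [(3.0, 0), (7.0, 1), (2.0, 0), (9.0, 1)] * 20
for value, label in lessons:
    prediction = 1 if value - mean > threshold else 0
    threshold += 0.1 * (prediction - label)  # move the threshold against errors

print(round(threshold, 2))  # settles where it separates the two labels
```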

Algorithms for machine learning

Turing then makes an analogy between human learning and natural selection, thereby introducing what we now call a genetic algorithm for machine learning. But interestingly, he also suggests that this method will likely not be fast enough for efficient learning.

The survival of the fittest is a slow method for measuring advantages. The experimenter, by the exercise of intelligence, should be able to speed it up. Equally important is the fact that he is not restricted to random mutations.
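
As a concrete (and purely illustrative) rendering of this analogy, here is a minimal genetic-algorithm sketch; the toy fitness function, population size and mutation rate are our own choices, not anything from the paper.

```python
import random

def fitness(genome):
    return sum(genome)  # toy objective: maximize the number of 1s

def evolve(pop_size=20, genome_len=16, generations=50, mutation_rate=0.05):
    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # "Survival of the fittest": keep the better half of the population.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Refill the population with mutated copies ("random mutations").
        children = [[1 - g if random.random() < mutation_rate else g for g in parent]
                    for parent in survivors]
        population = survivors + children
    return max(population, key=fitness)

print(fitness(evolve()))  # approaches genome_len over the generations
```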

The more efficient approach Turing proposes is what we would now call reinforcement learning.

We normally associate punishments and rewards with the teaching process. Some simple child-machines can be constructed or programmed on this sort of principle. The machine has to be so constructed that events which shortly preceded the occurrence of a punishment-signal are unlikely to be repeated, whereas a reward-signal increased the probability of repetition of the events which led up to it.

He also argues that reinforcement learning should not be based solely on positive and negative feedback signals.

It is necessary therefore to have some other ‘unemotional’ channels of communication. If these are available it is possible to teach a machine by punishments and rewards to obey orders given in some language, e.g. a symbolic language. These orders are to be transmitted through the ‘unemotional’ channels. The use of this language will diminish greatly the number of punishments and rewards required.
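
A minimal sketch of the reward/punishment mechanism described above might look as follows; the toy actions, the "teacher" and the update factors are our own illustrative choices, and the "unemotional" channel is not modelled here.

```python
import random

actions = ["A", "B", "C"]
preference = {a: 1.0 for a in actions}  # the child-machine's initial state

def choose():
    # Pick an action with probability proportional to its current preference.
    total = sum(preference.values())
    r = random.uniform(0, total)
    for a in actions:
        r -= preference[a]
        if r <= 0:
            return a
    return actions[-1]

def teacher(action):
    return +1 if action == "B" else -1  # hypothetical teacher: rewards only "B"

for _ in range(500):
    a = choose()
    signal = teacher(a)
    # Reward-signal raises the probability that the preceding event is repeated,
    # punishment-signal lowers it (kept above a small floor).
    preference[a] = max(0.05, preference[a] * (1.2 if signal > 0 else 0.8))

print(max(preference, key=preference.get))  # almost surely "B" after training
```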

Final remarks

There is also a rather Bayesian remark on the need for uncertainty in what the machine learns.

Processes that are learnt do not produce a hundred per cent certainty of result; if they did they could not be unlearnt.

Turing then makes a brief remark on the first problems to be tackled by AI research, with interesting modesty about the difficulty of "abstract activity" like chess, as opposed to more everyday tasks.

We may hope that machines will eventually compete with men in all purely intellectual fields. But which are the best ones to start with? Even this is a difficult decision. Many people think that a very abstract activity, like the playing of chess, would be best. It can also be maintained that it is best to provide the machine with the best sense organs that money can buy, and then teach it to understand and speak English.

Turing concludes by encouraging AI research.

We can only see a short distance ahead, but we can see plenty there that needs to be done.

Other quotes

I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.

Note that this prediction does not refer to passing the Turing test; it would only mark a step towards it. Turing did not predict that the Turing test would be passed by 2000.

Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's? If this were then subjected to an appropriate course of education one would obtain the adult brain.

In the paper, Turing argues against the need for "embodied intelligence".

(The machine) will not, for instance, be provided with legs, so that it could not be asked to go out and fill the coal scuttle. Possibly it might not have eyes (...) We need not be too concerned about the legs, eyes, etc.