Machine learning

Machine learning is the idea of letting algorithms write (or update) their own code. It is at the heart of recent breakthroughs in computer science, especially in image analysis, speech recognition and natural language processing, but also in problem solving (e.g. AlphaFold for protein structure prediction, or deep learning for symbolic mathematics). Perhaps most crucially, there are strong arguments to suggest that it will allow further spectacular progress in the coming years.
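
To make this idea concrete, here is a minimal sketch in Python, under entirely made-up assumptions: a model whose "code" is reduced to two numbers, toy data, and gradient descent as the update rule. It is an illustration of the principle, not a description of any particular system.

<syntaxhighlight lang="python">
# Minimal sketch: an algorithm that updates its own "code" (here reduced to
# two numerical parameters of a linear model) from external data, instead of
# a human hard-coding the rule. All numbers below are illustrative.
def learn(data, steps=1000, lr=0.01):
    w, b = 0.0, 0.0  # the "code" that the algorithm writes by itself
    for _ in range(steps):
        for x, y in data:
            error = (w * x + b) - y
            # Gradient descent: nudge the "code" to reduce the error.
            w -= lr * error * x
            b -= lr * error
    return w, b

# Toy data secretly generated by the rule y = 2x + 1.
data = [(0, 1), (1, 3), (2, 5), (3, 7)]
w, b = learn(data)
print(f"learned rule: y = {w:.2f} x + {b:.2f}")  # close to y = 2.00 x + 1.00
</syntaxhighlight>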

Turing's argument

The most compelling case for machine learning is arguably given in Section 7 of Turing's 1950 paper. Here is a brief version of this argument.

Performant algorithms will require very complex code. Human-level AI in particular may need to be roughly as complex as the human brain, whose number of synapses is estimated at [math]10^{14}[/math]. In other words, the code of a human-level AI will likely be on the order of [math]10^{14}[/math] lines long, if not more.

But code-writing is a tedious and slow endeavor. Humans won't be able to write code that long and complex. As a result, only algorithms can write the code of a human-level AI (or of an algorithm able to detect cats in images). This is the core principle of machine learning: letting algorithms write their own code.
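
A back-of-the-envelope computation, under loudly hypothetical assumptions about programmer productivity and workforce size, illustrates the point:

<syntaxhighlight lang="python">
# Hypothetical assumptions: 10^14 lines to write, programmers producing
# 100 lines a day, 365 days a year, ten million of them working in parallel.
lines_needed = 10 ** 14
lines_per_programmer_per_year = 100 * 365
programmers = 10 ** 7

years = lines_needed / (lines_per_programmer_per_year * programmers)
print(f"{years:.0f} years")  # about 274 years, even on these generous terms
</syntaxhighlight>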

But to do so, they will need to rely on external inputs (this can be justified in terms of Kolmogorov-Solomonoff complexity if we ignore computational complexity: a short learning program fed with a large amount of data can describe far more structure than any program short enough for humans to write by hand). Therefore, performant algorithms will likely be produced by algorithms that exploit external data to write the code of these performant algorithms.

This is machine learning.

Empirical observations

Sutton19 argues that this is indeed what we have been observing over the last few decades. Cleverly crafted algorithms for numerous tasks have been spectacularly outperformed by machine learning algorithms.

More data beats clever algorithms, but better data beats more data. (Peter Norvig)

When a large language model is trained on a sufficiently large and diverse dataset it is able to perform well across many domains and datasets [...] High-capacity models trained to maximize the likelihood of a sufficiently varied text corpus begin to learn how to perform a surprising amount of tasks without the need for explicit supervision. (RWCLAS19)

Supervised, unsupervised and reinforcement learning

There are three main forms of learning: supervised learning, where the algorithm learns from labeled examples; unsupervised learning, where it searches for structure in unlabeled data; and reinforcement learning, where it learns from rewards obtained by interacting with an environment. The sketch below caricatures each of the three.
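
Here is a hedged Python sketch of the three paradigms; all datasets, payout probabilities and parameter values below are made-up illustrations rather than anything from the literature.

<syntaxhighlight lang="python">
import random

# 1. Supervised learning: learn from labeled examples (1-nearest neighbor).
labeled = [(1.0, "small"), (1.2, "small"), (9.8, "big"), (10.1, "big")]
def predict(x):
    return min(labeled, key=lambda ex: abs(ex[0] - x))[1]
print(predict(2.0))  # -> "small"

# 2. Unsupervised learning: find structure in unlabeled data (1-D 2-means).
points = [1.0, 1.2, 0.9, 9.8, 10.1, 10.3]
c1, c2 = min(points), max(points)
for _ in range(10):
    g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
    g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
    c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)
print(round(c1, 2), round(c2, 2))  # two cluster centers emerge

# 3. Reinforcement learning: learn from rewards (epsilon-greedy bandit).
true_payout = {"A": 0.3, "B": 0.7}  # hidden from the learner
value, count = {"A": 0.0, "B": 0.0}, {"A": 0, "B": 0}
for _ in range(1000):
    if random.random() < 0.1:  # explore 10% of the time
        arm = random.choice(["A", "B"])
    else:  # otherwise exploit the best estimate so far
        arm = max(value, key=value.get)
    reward = 1.0 if random.random() < true_payout[arm] else 0.0
    count[arm] += 1
    value[arm] += (reward - value[arm]) / count[arm]  # running average
print(max(value, key=value.get))  # -> "B", almost surely
</syntaxhighlight>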

What makes machine learning safety hard

We can't apply formal verification! Code written by machine learning algorithms is typically far too large and opaque to be formally specified, let alone proved correct.

Machine learning is data-driven and goal-driven! What a learned algorithm ends up doing depends both on the data it was trained on and on the objective function it was trained to optimize.

This means that we should care not only about quality data collection, but also about objective function design. It has been argued that the latter should follow the principle of alignment.
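
To illustrate why objective function design matters, here is a small hypothetical example (the data is made up): trained on the very same data, an algorithm minimizing squared error and one minimizing absolute error settle on different answers, the mean versus the median, so the choice of objective alone changes the learned behavior.

<syntaxhighlight lang="python">
# Same data, two objective functions, two different "optimal" behaviors.
data = [1.0, 2.0, 3.0, 4.0, 100.0]  # note the outlier

candidates = [x / 10 for x in range(0, 1001)]  # grid from 0.0 to 100.0

def mse(c):
    return sum((x - c) ** 2 for x in data)

def mae(c):
    return sum(abs(x - c) for x in data)

print(min(candidates, key=mse))  # 22.0: squared error picks the mean
print(min(candidates, key=mae))  # 3.0: absolute error picks the median
</syntaxhighlight>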