Convexity

A convex set is one such that the segment between any two points of the set still belongs to the set. In other words, if [math]x,y \in S[/math] and [math]\lambda \in [0,1][/math], then [math]\lambda x + (1-\lambda) y \in S[/math]. A convex function [math]f[/math] (defined on a convex set) is one such that [math]f(\lambda x + (1-\lambda) y) \leq \lambda f(x) + (1-\lambda) f(y)[/math] for all such [math]x, y, \lambda[/math]. Put differently, the image of the average is below the average of the images.
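To make the definition concrete, here is a small numerical check of the defining inequality (a sketch only; the sampled functions, ranges and tolerance are arbitrary illustrative choices): [math]x^2[/math] satisfies the inequality everywhere, while [math]\sin x[/math] does not.

<syntaxhighlight lang="python">
import numpy as np

def violates_convexity(f, xs, ys, lambdas):
    """Return True if f(lam*x + (1-lam)*y) > lam*f(x) + (1-lam)*f(y) for some sampled triple."""
    for x, y, lam in zip(xs, ys, lambdas):
        lhs = f(lam * x + (1 - lam) * y)
        rhs = lam * f(x) + (1 - lam) * f(y)
        if lhs > rhs + 1e-9:  # small tolerance for floating-point error
            return True
    return False

rng = np.random.default_rng(0)
xs, ys = rng.uniform(-5, 5, 1000), rng.uniform(-5, 5, 1000)
lams = rng.uniform(0, 1, 1000)

print(violates_convexity(lambda t: t**2, xs, ys, lams))  # False: x^2 is convex
print(violates_convexity(np.sin, xs, ys, lams))          # True: sin is not convex
</syntaxhighlight>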

Convexity plays a central role in optimization, because convex functions can be optimized efficiently over convex sets, for instance using (variants of) gradient descent. Yet, strangely, even though neural network learning is not a convex problem, it has become the most successful approach to machine learning.
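As a rough illustration of the first claim, a few lines of gradient descent on the convex function [math]f(x) = (x-3)^2[/math] converge to its unique global minimum from any starting point (a minimal sketch; the step size, iteration count and starting point are arbitrary choices).

<syntaxhighlight lang="python">
def grad_descent(grad, x0, step=0.1, iters=100):
    """Plain gradient descent: repeatedly move against the gradient."""
    x = x0
    for _ in range(iters):
        x = x - step * grad(x)
    return x

# f(x) = (x - 3)^2 is convex, so gradient descent finds its unique minimum at x = 3.
grad_f = lambda x: 2 * (x - 3)
print(grad_descent(grad_f, x0=-10.0))  # approximately 3.0
</syntaxhighlight>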

Examples of convex optimization

Linear regression with the mean squared error (MSE), cross-entropy minimization, support vector machines (SVM) and logistic regression are all examples of convex optimization problems.
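For instance, MSE linear regression minimizes the convex loss [math]\|Xw - y\|_2^2[/math] in [math]w[/math], whose global minimum is characterized by the normal equations [math]X^T X w = X^T y[/math]. A minimal sketch with synthetic data (all names and sizes are illustrative):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))                  # synthetic features
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=100)    # noisy targets

# The MSE loss ||Xw - y||^2 is convex in w, so its global minimum
# solves the normal equations X^T X w = X^T y.
w_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(w_hat)  # close to w_true
</syntaxhighlight>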

Neural networks and convexity

JHG18 showed that, in the infinite-width limit, the training of neural networks can be regarded as a convex optimization problem in function space ZettaBytes19.

On another note, the recent discovery of double descent ZBHRV17 NKBYBS19 suggests that overparameterized neural networks are better at learning and generalizing. It may or may not be that the learning of overparameterized neural networks is "nearly convex". One conjecture is that, with high probability, any random initialization of a sufficiently overparameterized neural network lies in a region where the loss function of the learning problem is convex and attains a minimum of zero.
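One rough (and far from conclusive) way to probe this conjecture empirically is to train an overparameterized network to near-zero loss and then check whether the loss, restricted to the segment between the random initialization and the minimum found, has nonnegative second differences. The sketch below assumes PyTorch; the architecture, data and hyperparameters are arbitrary illustrative choices, and convexity along a single one-dimensional slice is only a necessary condition, not evidence that the whole region is convex.

<syntaxhighlight lang="python">
import torch

torch.manual_seed(0)

# Tiny, heavily overparameterized regression problem (choices are illustrative).
n, width = 20, 2000
X = torch.randn(n, 5)
y = torch.randn(n, 1)

net = torch.nn.Sequential(
    torch.nn.Linear(5, width), torch.nn.ReLU(), torch.nn.Linear(width, 1))

def loss_at(params):
    """Evaluate the MSE loss with the network's parameters set to `params`."""
    torch.nn.utils.vector_to_parameters(params, net.parameters())
    return torch.nn.functional.mse_loss(net(X), y).item()

theta0 = torch.nn.utils.parameters_to_vector(net.parameters()).detach().clone()

# Train to (near-)zero loss, which overparameterization makes easy.
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(net(X), y)
    loss.backward()
    opt.step()
theta1 = torch.nn.utils.parameters_to_vector(net.parameters()).detach().clone()

# Sample the loss along the segment between initialization and the found minimum,
# and check whether its second differences are nonnegative (convexity along this slice).
lams = torch.linspace(0, 1, 21)
vals = [loss_at((1 - lam) * theta0 + lam * theta1) for lam in lams]
second_diffs = [vals[i - 1] - 2 * vals[i] + vals[i + 1] for i in range(1, len(vals) - 1)]
print("final loss:", vals[-1])
print("convex along this slice:", all(d >= -1e-6 for d in second_diffs))
</syntaxhighlight>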