How mathematicians can contribute

Robustly beneficial decision-making raises numerous technical problems, especially in terms of provable guarantees. Such questions may be of interest to mathematicians.

== Statistics ==

The mathematics most relevant to AI ethics is probably statistics. In particular, it seems critical to better understand Goodhart's law and robust statistics. Recent work has yielded remarkable provable guarantees DepersinLecué19, while empirical findings suggest the need for a new theory of overfitting ZBHRV17.
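
To make the flavour of robust statistics a little more concrete, here is a minimal numpy sketch, not taken from the cited papers: it compares the empirical mean with a median-of-means estimator (a basic relative of the estimators analysed in works like DepersinLecué19) on data corrupted by a handful of outliers. The data, block count and constants are purely illustrative.

<syntaxhighlight lang="python">
import numpy as np

def median_of_means(x, n_blocks=10, seed=None):
    """Median-of-means: shuffle the sample, split it into blocks, average each
    block, and return the median of the block means. It stays close to the true
    mean as long as the corrupted points are fewer than half the blocks."""
    rng = np.random.default_rng(seed)
    x = rng.permutation(np.asarray(x))
    blocks = np.array_split(x, n_blocks)
    return np.median([block.mean() for block in blocks])

rng = np.random.default_rng(0)
clean = rng.normal(loc=1.0, scale=1.0, size=1000)   # true mean is 1.0
outliers = np.full(5, 1000.0)                       # a handful of corrupted points
data = np.concatenate([clean, outliers])

print("empirical mean :", data.mean())                                  # dragged far above 1.0
print("median of means:", median_of_means(data, n_blocks=20, seed=1))   # close to 1.0
</syntaxhighlight>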

The reinforcement learning framework provides numerous mathematical challenges. One area already widely investigated is (convex) online learning. The study of AIXI and its variants is also relevant, with many nice unsolved mathematical conjectures.
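
To give a concrete feel for the online convex learning setting, here is a small illustrative sketch (a generic construction, not tied to a specific reference): online gradient descent on a stream of squared losses, the kind of scheme for which O(√T) regret bounds are proved. The dimension, horizon and step sizes below are arbitrary choices.

<syntaxhighlight lang="python">
import numpy as np

# Online gradient descent for online least-squares: at round t we receive
# (x_t, y_t), predict with the current w, pay the squared loss, and take a
# gradient step with a ~1/sqrt(t) step size. For convex losses on a bounded
# domain, such schemes enjoy O(sqrt(T)) regret against the best fixed w.
rng = np.random.default_rng(0)
d, T = 5, 2000
w_star = rng.normal(size=d)            # hidden comparator generating the stream
w = np.zeros(d)
total_loss = 0.0

for t in range(1, T + 1):
    x = rng.normal(size=d)
    y = w_star @ x + 0.1 * rng.normal()
    pred = w @ x
    total_loss += (pred - y) ** 2
    grad = 2.0 * (pred - y) * x        # gradient of the squared loss in w
    w -= 0.05 / np.sqrt(t) * grad      # decaying step size

print("average loss       :", total_loss / T)
print("distance to w_star :", np.linalg.norm(w - w_star))
</syntaxhighlight>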

== Neural networks ==

The research on neural networks has attracted significant interest from mathematicians since the early days of connectionism. For instance, functional analysis in the late 1980s and early 1990s was instrumental in proving the [https://en.wikipedia.org/wiki/Universal_approximation_theorem universal approximation theorems]. Roughly speaking, these theorems state that the space of functions generated by neural networks is dense in the space of continuous functions on compact sets (and, in a suitable sense, in the space of Borel-measurable functions); as such, neural networks can approximate such functions to arbitrary precision.
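
The constructive side of this statement can be illustrated with a small numpy sketch (purely illustrative, not from the original proofs): a one-hidden-layer ReLU network whose weights are chosen by hand so that it piecewise-linearly interpolates sin on an interval, with the approximation error shrinking as the width grows.

<syntaxhighlight lang="python">
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def relu_interpolant(knots, values):
    """Return (biases, weights, offset) of a one-hidden-layer ReLU network
    f(x) = offset + sum_i weights[i] * relu(x - biases[i]) that piecewise-
    linearly interpolates (knots, values) on [knots[0], knots[-1]]."""
    slopes = np.diff(values) / np.diff(knots)
    weights = np.concatenate([[slopes[0]], np.diff(slopes)])  # slope changes
    return knots[:-1], weights, values[0]

target = np.sin
for width in [4, 8, 32]:
    knots = np.linspace(-np.pi, np.pi, width + 1)
    biases, weights, offset = relu_interpolant(knots, target(knots))
    xs = np.linspace(-np.pi, np.pi, 1000)
    net = offset + relu(xs[:, None] - biases[None, :]) @ weights
    print(f"width {width:3d}: max error {np.abs(net - target(xs)).max():.4f}")
</syntaxhighlight>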

Recently, mathematicians studying neural networks have focused on generalisation bounds. Insightful mathematics has also been derived by taking the infinite-width limit [http://papers.nips.cc/paper/8076-neural-tangent-kernel-convergence-and-generalization-in-neural-networks.pdf JHG][https://dblp.org/rec/bibtex/conf/nips/JacotHG18 18]. Remarkably, it was found that the learning of such neural networks can be described as a convex kernel gradient descent in function space [https://www.youtube.com/playlist?list=PL95IbaKDuVrI_i5u8swlCLcr-4tX2SDxI ZettaBytes18].
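
A rough illustrative sketch of this idea, not the actual construction of JHG18: compute the empirical neural tangent kernel K(x, x') = ∇_θ f(x) · ∇_θ f(x') of a small one-hidden-layer network at initialisation, then run the corresponding kernel gradient descent, restricted to the training outputs for simplicity. The architecture, data and step size below are arbitrary assumptions.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
m = 512                                          # hidden width
w, b, a = rng.normal(size=m), rng.normal(size=m), rng.normal(size=m)

def outputs(x):
    """One-hidden-layer network f(x) = a . tanh(w*x + b) / sqrt(m)."""
    return np.tanh(np.outer(x, w) + b) @ a / np.sqrt(m)

def jacobian(x):
    """Gradient of f at each input x_i with respect to all parameters (a, w, b)."""
    h = np.tanh(np.outer(x, w) + b)              # shape (n, m)
    dh = 1.0 - h ** 2                            # tanh'
    return np.concatenate([h, a * dh * x[:, None], a * dh], axis=1) / np.sqrt(m)

x = np.linspace(-2.0, 2.0, 20)                   # training inputs
y = np.sin(2.0 * x)                              # training targets
K = jacobian(x) @ jacobian(x).T                  # empirical neural tangent kernel

f = outputs(x)                                   # network outputs at initialisation
print("max training error at init     :", np.abs(f - y).max())
eta = 0.5 / np.linalg.eigvalsh(K).max()          # conservative step size
for _ in range(5000):
    f = f - eta * K @ (f - y)                    # kernel gradient descent on the outputs
print("max training error after GD    :", np.abs(f - y).max())
</syntaxhighlight>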


== Algebraic topology ==

There may be results worth investigating on the application of algebraic topology to distributed computing HKRBook13. This is relevant as today's algorithms rely more and more on distributed learning.

Related to this, there are interesting questions around the generalization of Bayesian agreement. For instance, are there upper and lower bounds on agreement in Byzantine Bayesian agreement?

== Game theory ==

Contributions from game theory, in particular from [[Social_choice|social choice theory]], could inform the better collective decision making that is needed for robustly beneficial algorithms. A classic example is the aggregation of moral preferences.
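
As a toy illustration of moral preference aggregation (the preference profile is hypothetical, and Borda count is only one of many rules studied in social choice theory): a few lines of Python that aggregate ranked preferences and also exhibit a Condorcet cycle, one reason why aggregation is mathematically subtle.

<syntaxhighlight lang="python">
from collections import Counter
from itertools import combinations

# Three groups of voters rank three policy options (a made-up profile).
# The pairwise majorities are cyclic (a Condorcet cycle), which is one reason
# aggregating (moral) preferences is mathematically subtle.
profile = [
    (["A", "B", "C"], 23),   # 23 voters prefer A > B > C
    (["B", "C", "A"], 19),
    (["C", "A", "B"], 16),
]

# Borda count: an option gets (k - 1 - position) points per voter.
borda = Counter()
for ranking, n_voters in profile:
    for pos, option in enumerate(ranking):
        borda[option] += n_voters * (len(ranking) - 1 - pos)
print("Borda scores:", dict(borda))

# Pairwise majorities, showing the cycle A beats B, B beats C, C beats A.
for x, y in combinations(["A", "B", "C"], 2):
    x_wins = sum(n for r, n in profile if r.index(x) < r.index(y))
    y_wins = sum(n for r, n in profile if r.index(x) > r.index(y))
    print(f"{x} vs {y}: {x_wins} - {y_wins}")
</syntaxhighlight>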

== Linear algebra ==

Relevant topics include tensor calculus, SVD, and linear-algebraic formulations of ML…
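
As one small example of how these tools appear in ML (a generic numpy sketch, not tied to any cited work): truncated SVD as the best low-rank approximation of a data matrix, the workhorse behind PCA-style compression. The matrix and ranks below are made up.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
# A 100 x 50 data matrix that is approximately rank 5, plus a little noise.
A = rng.normal(size=(100, 5)) @ rng.normal(size=(5, 50)) \
    + 0.01 * rng.normal(size=(100, 50))

U, s, Vt = np.linalg.svd(A, full_matrices=False)
for k in [1, 5, 20]:
    A_k = (U[:, :k] * s[:k]) @ Vt[:k]    # best rank-k approximation (Eckart-Young)
    rel_err = np.linalg.norm(A - A_k) / np.linalg.norm(A)
    print(f"rank {k:2d}: relative Frobenius error {rel_err:.4f}")
</syntaxhighlight>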

== Optimisation and multi-variate calculus ==

The training mechanisms involved in machine learning require the development of new tools in (continuous) optimisation and multi-variate calculus. Examples of developments in these fields driven by machine learning research include coordinate descent, 1-bit gradient descent, and gradient descent with momentum.
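
To ground one of these examples, here is an illustrative sketch on a toy problem (not taken from any specific reference): plain gradient descent versus gradient descent with heavy-ball momentum on an ill-conditioned quadratic, where momentum converges markedly faster. The curvatures, step size and momentum coefficient are arbitrary choices.

<syntaxhighlight lang="python">
import numpy as np

# Ill-conditioned quadratic f(x) = 0.5 * x^T H x with curvatures 1 and 100.
H = np.diag([1.0, 100.0])

def grad(x):
    return H @ x

def run(momentum, steps=200, lr=0.009):
    """Gradient descent with heavy-ball momentum; momentum=0 is plain GD."""
    x, v = np.array([1.0, 1.0]), np.zeros(2)
    for _ in range(steps):
        v = momentum * v - lr * grad(x)   # velocity (heavy-ball) update
        x = x + v
    return np.linalg.norm(x)              # distance to the minimiser x* = 0

print("plain GD      :", run(momentum=0.0))
print("with momentum :", run(momentum=0.9))
</syntaxhighlight>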