How mathematicians can contribute
Robustly beneficial decision-making raises numerous technical problems, especially in terms of provable guarantees. Such questions may be of interest to mathematicians.
The most relevant mathematics to AI ethics is probably statistics. In particular, it seems critical to better understand Goodhart's law and robust statistics. Recent work has yielded remarkable provable guarantees DepersinLecué19, while empirical findings suggest a need for a new theory of overfitting ZBHRV17.
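A concrete entry point into robust statistics is the median-of-means estimator, which retains provable guarantees under heavy tails and outliers. The sketch below is illustrative; the function name and parameter choices are not from any particular reference.

```python
import numpy as np

def median_of_means(samples, n_blocks=10, seed=0):
    """Median-of-means: shuffle, split into blocks, average each block,
    and return the median of the block means. A few wild outliers can
    corrupt only a few blocks, so the median stays near the true mean."""
    rng = np.random.default_rng(seed)
    blocks = np.array_split(rng.permutation(samples), n_blocks)
    return float(np.median([block.mean() for block in blocks]))

# 1000 well-behaved samples plus 3 extreme outliers.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0.0, 1.0, 1000), [1e6] * 3])
est = median_of_means(data)  # stays close to 0, while data.mean() is pulled far away
```

With at most three corrupted blocks out of ten, the median is taken over mostly clean block means, which is why the estimate barely moves even though the raw empirical mean is wrecked.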
The reinforcement learning framework provides numerous mathematical challenges. One area already widely investigated is (convex) online learning. The study of AIXI and its variants also offers many nice unsolved mathematical conjectures.
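A minimal sketch of the online convex learning setting is projected online gradient descent, which achieves O(√T) regret for convex Lipschitz losses with step size proportional to 1/√t. The loss stream and all names below are illustrative assumptions.

```python
import numpy as np

def online_gradient_descent(grads, dim, radius=1.0):
    """Projected online gradient descent with step size radius/sqrt(t);
    for convex Lipschitz losses this guarantees O(sqrt(T)) regret."""
    x = np.zeros(dim)
    iterates = []
    for t, grad in enumerate(grads, start=1):
        iterates.append(x.copy())
        x = x - (radius / np.sqrt(t)) * grad(x)
        norm = np.linalg.norm(x)
        if norm > radius:  # project back onto the L2 ball
            x *= radius / norm
    return iterates

# Illustrative stream of quadratic losses f_t(x) = 0.5 * (x - z_t)^2.
rng = np.random.default_rng(1)
targets = rng.normal(0.3, 0.1, size=200)
iterates = online_gradient_descent(
    [lambda x, z=z: x - z for z in targets], dim=1)
```

The iterates drift toward the minimiser of the average loss (here, near 0.3), even though each round only reveals one loss function.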
The research on neural networks has attracted significant interest from mathematicians since the early days of connectionism. For instance, functional analysis in the late 1980s and early 1990s was instrumental in proving the universal approximation theorems. Roughly speaking, these theorems state that the space of functions generated by neural networks is dense in the space of Borel-measurable functions, and as such, a neural network can approximate any such function up to arbitrary precision.
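The density statement can be made tangible with a toy experiment: a single hidden layer with random weights, where only the output layer is fitted by least squares, already approximates a smooth target closely. All parameter choices below are illustrative, not taken from the approximation theorems themselves.

```python
import numpy as np

rng = np.random.default_rng(0)

# One hidden layer of 200 random tanh units; only the output
# weights are fitted, by ordinary least squares.
w = rng.normal(0.0, 2.0, size=200)     # input-to-hidden weights
b = rng.uniform(-3.0, 3.0, size=200)   # hidden biases

def features(x):
    return np.tanh(np.outer(x, w) + b)

x_train = np.linspace(-np.pi, np.pi, 400)
coef, *_ = np.linalg.lstsq(features(x_train), np.sin(x_train), rcond=None)

# Measure the worst-case error on a fresh grid.
x_test = np.linspace(-np.pi, np.pi, 101)
err = np.max(np.abs(features(x_test) @ coef - np.sin(x_test)))
```

Even without training the hidden layer, the maximum error over the interval is small, illustrating how rich the function class of shallow networks already is.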
Recently, mathematicians studying neural networks have focused on their generalisation bounds. Insightful results have been derived by taking the infinite-width limit JHG18. Remarkably, in this limit, the training of a neural network can be described as a convex kernel gradient descent in function space ZettaBytes18.
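The function-space picture can be sketched in a few lines: on the training inputs, gradient descent on the squared loss with a fixed kernel K updates the predictions as f ← f − ηK(f − y). In the infinite-width regime, K would be the neural tangent kernel; here an RBF kernel stands in purely for illustration.

```python
import numpy as np

# Toy kernel gradient descent in function space on the training points.
x = np.linspace(-1.0, 1.0, 50)
y = np.sin(3.0 * x)

K = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.1)  # RBF Gram matrix
f = np.zeros_like(y)   # predictions on the training inputs
eta = 0.01
for _ in range(2000):
    f -= eta * K @ (f - y)   # linear (hence convex) dynamics in f
```

Because the dynamics are linear in f with a positive semi-definite K, convergence to the training targets is a convex problem, which is what makes the infinite-width description so tractable.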
There may also be results worth investigating in the application of algebraic topology to distributed computing HKRBook13. This is relevant because today's algorithms rely more and more on distributed learning.
Relatedly, there are interesting questions around generalisations of Bayesian agreement. For instance, are there upper and lower bounds on agreement in Byzantine Bayesian agreement?
Contributions from game theory, and in particular from social choice theory, could inform the better collective decision-making needed for robustly beneficial algorithms. A classic example is the aggregation of moral preferences.
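As a minimal sketch of preference aggregation, the Borda count scores each option by its rank across voters. The ballots below (hypothetical options for an autonomous vehicle dilemma) are purely illustrative.

```python
def borda(rankings):
    """Borda count: with n options, a voter's top choice gets n-1
    points, the next gets n-2, and so on; highest total wins."""
    n = len(rankings[0])
    scores = {}
    for ranking in rankings:
        for points, option in zip(range(n - 1, -1, -1), ranking):
            scores[option] = scores.get(option, 0) + points
    return max(scores, key=scores.get), scores

# Three hypothetical voters ranking three options.
ballots = [
    ["swerve", "brake", "continue"],
    ["brake", "swerve", "continue"],
    ["brake", "continue", "swerve"],
]
winner, scores = borda(ballots)  # "brake" wins with 5 points
```

Social choice theory studies precisely which desirable properties such rules can and cannot satisfy simultaneously, which is why it matters for aggregating moral preferences.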
Other relevant topics include tensor calculus, SVD, linear-algebraic formulations of ML…
Optimisation and multi-variate calculus
The training mechanisms involved in machine learning require the development of new tools in (continuous) optimisation and multi-variate calculus. Examples of developments in these fields driven by machine learning research include coordinate descent, 1-bit gradient descent, and gradient descent with momentum.
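The last of these can be sketched in a few lines: the heavy-ball update keeps a velocity term that accumulates past gradients. The test problem below, an ill-conditioned quadratic, is an illustrative choice, not from any particular source.

```python
import numpy as np

def gd_momentum(grad, x0, lr=0.02, beta=0.9, steps=500):
    """Heavy-ball update: the velocity v accumulates past gradients,
    which speeds progress along shallow directions of the loss."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(steps):
        v = beta * v - lr * grad(x)
        x = x + v
    return x

# Ill-conditioned quadratic f(x) = 0.5 * (x1^2 + 25 * x2^2).
x_min = gd_momentum(lambda x: np.array([1.0, 25.0]) * x, [1.0, 1.0])
```

On such quadratics, momentum damps oscillations along the steep axis while accelerating along the shallow one, so the iterate converges to the minimiser at the origin much faster than plain gradient descent.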