How mathematicians can contribute

Robustly beneficial decision-making raises numerous technical problems, especially in terms of provable guarantees. Such questions may be of interest to mathematicians.

Statistics

The area of mathematics most relevant to AI ethics is probably statistics. In particular, it seems critical to better understand Goodhart's law and robust statistics. Recent work has yielded remarkable provable guarantees for robust estimation DiakonikolasKane19, while empirical findings suggest the need for a new theory of overfitting ZBHRV17.
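
As a toy illustration of the robustness problem this line of work addresses (a one-dimensional sketch only; the hard case treated in DiakonikolasKane19 is high-dimensional and algorithmic), the following Python snippet shows how a 5% adversarial contamination ruins the empirical mean while the median barely moves. The contamination model and numbers are illustrative assumptions, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

# 950 inliers from a standard Gaussian (true mean 0) plus 50 adversarial
# outliers placed at 100, i.e. a 5% contamination rate (illustrative choice).
inliers = rng.normal(loc=0.0, scale=1.0, size=950)
outliers = np.full(50, 100.0)
sample = np.concatenate([inliers, outliers])

# The empirical mean is dragged to roughly 0.05 * 100 = 5,
# while the median, a robust estimator, stays near the true mean.
print("empirical mean:", sample.mean())      # ~5
print("median        :", np.median(sample))  # ~0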

The reinforcement learning framework provides numerous mathematical challenges. One area already widely investigated is (convex) online learning; a minimal sketch is given below. The study of AIXI and its variants is also relevant and comes with many nice unsolved mathematical conjectures.
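
To make the online learning setting concrete, here is a minimal sketch (in Python, with an illustrative loss sequence, domain, and step size that are not from any specific source) of projected online gradient descent on a stream of quadratic losses. The point is that the regret against the best fixed decision in hindsight grows only as O(sqrt(T)).

import numpy as np

rng = np.random.default_rng(0)

# Online convex optimization toy: at each round t the learner picks x_t in [-1, 1],
# then the environment reveals a convex loss f_t(x) = (x - z_t)^2.
T = 1000
targets = rng.uniform(-1.0, 1.0, size=T)

x = 0.0
cumulative_loss = 0.0
for t in range(1, T + 1):
    z = targets[t - 1]
    cumulative_loss += (x - z) ** 2
    eta = 1.0 / np.sqrt(t)                                  # standard O(1/sqrt(t)) step size
    x = float(np.clip(x - eta * 2.0 * (x - z), -1.0, 1.0))  # projected gradient step

# Regret = learner's cumulative loss minus that of the best fixed decision in
# hindsight; projected online gradient descent guarantees O(sqrt(T)) regret here.
best_fixed = targets.mean()
best_loss = float(((best_fixed - targets) ** 2).sum())
print("regret:", cumulative_loss - best_loss)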

Neural networks

Recently, insightful mathematical descriptions of neural networks have been derived by taking the infinite-width limit JHG18. Remarkably, in this limit the training of a neural network can be described as a convex kernel gradient descent in function space ZettaBytes18.
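
The central object of JHG18 is the neural tangent kernel. As a sketch (notation simplified, scalar output and gradient flow on a training set x_1, ..., x_n assumed), the kernel and the induced function-space dynamics read:

\Theta_\theta(x, x') = \big\langle \nabla_\theta f_\theta(x), \nabla_\theta f_\theta(x') \big\rangle,
\qquad
\frac{\mathrm{d} f_{\theta_t}(x)}{\mathrm{d} t} = -\sum_{i=1}^{n} \Theta_{\theta_t}(x, x_i)\, \frac{\partial L}{\partial f(x_i)}\bigg|_{f = f_{\theta_t}}.

In the infinite-width limit, the kernel converges to a deterministic limit that stays constant during training, so for the quadratic loss these dynamics reduce to a linear ODE, i.e. convex kernel gradient descent in function space.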

Algebraic topology

There may be results worth investigating on the application of algebraic topology to distributed computing HKRBook13. This is relevant because today's algorithms rely more and more on distributed learning.

Related to this, there are interesting questions around generalizations of Bayesian agreement. For instance, are there upper and lower bounds on agreement in Byzantine Bayesian agreement?
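
As background for this question, here is a minimal sketch (in Python, on a hypothetical toy instance not taken from any source) of classical, non-Byzantine Bayesian agreement in the style of Aumann and Geanakoplos-Polemarchakis: two agents with a common prior repeatedly announce their posteriors for an event and condition on each other's announcements, until the posteriors coincide.

from fractions import Fraction

# Two agents share a uniform common prior over 9 states but hold different
# private information partitions; they repeatedly announce their posterior
# for the event and refine their information on each other's announcements.
states = set(range(1, 10))
prior = {w: Fraction(1, 9) for w in states}
event = {3, 4}

partition1 = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]
partition2 = [{1, 2, 3, 4}, {5, 6, 7, 8}, {9}]
true_state = 1

def posterior(partition, w):
    # Probability of the event given the partition cell containing state w.
    cell = next(c for c in partition if w in c)
    return sum(prior[x] for x in cell & event) / sum(prior[x] for x in cell)

def refine(partition, other):
    # Hearing the other agent's announcement splits each of my cells into
    # groups of states on which the other agent would announce the same value.
    refined = []
    for cell in partition:
        groups = {}
        for w in cell:
            groups.setdefault(posterior(other, w), set()).add(w)
        refined.extend(groups.values())
    return refined

for step in range(10):
    p1, p2 = posterior(partition1, true_state), posterior(partition2, true_state)
    print(f"round {step}: agent 1 says {p1}, agent 2 says {p2}")
    if p1 == p2:  # the announced posteriors agree: no more disagreement
        break
    partition1, partition2 = refine(partition1, partition2), refine(partition2, partition1)

On this toy instance the announcements start at 1/3 versus 1/2 and coincide at 1/3 after two rounds of communication.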