How systems developers can contribute

[[Robustly beneficial]] decision-making algorithms require robust systems and software that reliably implement the specifications they were designed to meet.
 
== Reliable ML frameworks ==
 
Machine learning software such as TensorFlow and PyTorch mostly implements the vanilla versions of ML algorithms, which rarely include robustness and safety properties, given the effect these properties could have (or are believed to have) on performance.
 
An example is that all the available frameworks aggregate gradients by averaging them, which is not robust: a single faulty or adversarial worker can make the average, and hence the resulting model update, arbitrarily bad.
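
The fragility of plain averaging can be seen in a minimal numerical sketch (an illustration, not code from any of these frameworks; the worker count, true gradient and attack value below are made up): one arbitrarily bad gradient moves the mean anywhere, whereas a robust rule such as the coordinate-wise median stays close to the honest gradients.

<syntaxhighlight lang="python">
# Illustration only: 9 honest workers plus 1 faulty ("Byzantine") worker.
import numpy as np

rng = np.random.default_rng(0)

# Honest gradients are noisy copies of the true gradient [1.0, -2.0].
honest = rng.normal(loc=[1.0, -2.0], scale=0.1, size=(9, 2))

# A single faulty worker sends an arbitrary, huge gradient.
byzantine = np.array([[1e6, 1e6]])

gradients = np.vstack([honest, byzantine])

mean_agg = gradients.mean(axis=0)          # dragged far away by the one outlier
median_agg = np.median(gradients, axis=0)  # stays near the honest gradients

print("mean   :", mean_agg)
print("median :", median_agg)
</syntaxhighlight>

The coordinate-wise median is only one possible robust aggregation rule; the point is merely that averaging alone offers no protection against even a single faulty contribution.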
 
== Distributed systems for AI ==
 
== Sandboxing AI ==
 
Several proposals (e.g. in Russell2019 and Tegmark2016) have been made to ensure that intelligent decision-making software is properly sandboxed, in order to confine it and better control its actions. It is however argued that the focus should instead be on ensuring alignment a priori, since sandboxing could be hopeless given how distributed modern AI systems are by design (Hoang&Elmhamdi19).
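
As a purely illustrative sketch of what such confinement can mean at the level of a single machine (assuming a POSIX host; none of the cited works prescribe this code), the snippet below caps the CPU time and memory of one child process. Confinement of this kind acts on one process on one machine, which is exactly why it offers little leverage over systems that are distributed by design.

<syntaxhighlight lang="python">
# Toy per-process sandbox: resource limits on a single child process (POSIX only).
import resource
import subprocess
import sys

def limit_resources():
    # Runs in the child just before exec: cap CPU time at 5 seconds
    # and the address space at 512 MiB.
    resource.setrlimit(resource.RLIMIT_CPU, (5, 5))
    resource.setrlimit(resource.RLIMIT_AS, (512 * 1024 ** 2, 512 * 1024 ** 2))

# Stand-in for an untrusted decision-making component (here, just a print).
completed = subprocess.run(
    [sys.executable, "-c", "print('acting inside the sandbox')"],
    preexec_fn=limit_resources,
    capture_output=True,
    timeout=10,
)
print(completed.stdout.decode().strip())
</syntaxhighlight>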
