How newcomers can contribute

This page suggests ways for newcomers to contribute to AI ethics. Our main suggestion is to become familiar with the most important problems in the domain. We also propose some more actionable steps.

Find out more!

If you are new to AI ethics, we suggest you first read our Robustly Beneficial page, AI risks page and YouTube page. Please also keep in mind that these pages barely scratch the surface of what ought to be understood to undertake robustly beneficial actions. AI ethics is very complicated and full of counterintuitive aspects (see backfire effect). Caution is advised before advocating radical actions or policies.

To contribute to AI ethics in a robustly beneficial manner, we recommend that you inform yourself about the topic on a regular basis, for instance by following this Wiki, by listening to our Robustly Beneficial Podcast, or by reading Lê and Mahdi's book HoangElmhamdi19FR (an English version is pending).

In addition, we strongly recommend the books Tegmark17, Russell19, ONeil16, Lee18 and Bostrom14, the YouTube channels Robert Miles and Two Minute Papers, and the podcasts 80,000 Hours, Your Undivided Attention, Future of Life's AI Alignment, Lex Fridman's MIT AGI Podcast and Flashforward.

Join/Build a community

We believe that motivation is critical to contributing effectively to AI ethics. Unfortunately, motivation is a scarce resource that gets depleted easily. This is why it seems crucial to take care of it.

To do so, we encourage you to first form or join a community of people willing to ponder AI ethics on a regular basis. As a newcomer, you should probably first focus on getting to know the other members and making sure you get along with them. Community building is critical to motivation and to sustainable contribution to any endeavor.

Because of the high risk of a backfire effect, it seems crucial to promote intellectual honesty Galef19 and to encourage addressing our own cognitive biases when discussing AI ethics. Promoting quality information and thinking seems like a very effective way to contribute indirectly to making algorithms robustly beneficial.

If you somehow lose interest in the technical aspects of AI ethics, note that you can still have a huge impact by participating in team building and helping others network. Sometimes, just being there can go a long way toward fostering a community and sustaining individuals' motivation.