How newcomers can contribute

This page suggests ways for newcomers to contribute to AI ethics. Our main suggestion is to familiarize oneself with the most important problems in the domain. We also propose more actionable plans.

Find out more!

If you are new to AI ethics, we suggest you first read our Robustly Beneficial page, AI risks page and YouTube page. Please also keep in mind that these pages barely scratch the surface of what ought to be understood to undertake robustly beneficial actions. AI ethics is very complicated and full of counterintuitive aspects (see backfire effect). Caution is advised before advocating radical actions or policies.

To contribute to AI ethics in a robustly beneficial manner, we recommend that you keep yourself informed on the topic on a regular basis, for instance by following this Wiki, by listening to our Robustly Beneficial Podcast, or by reading Lê and Mahdi's book HoangElmhamdi19FR (an English version is pending).

In addition, we strongly recommend the books Tegmark17, Russell19, ONeil16, Lee18 and Bostrom14, the YouTube channels of Robert Miles and Two Minute Papers, and the podcasts 80,000 Hours, Your Undivided Attention, the Future of Life Institute's AI Alignment Podcast, Lex Fridman's MIT AGI Podcast and Flashforward.

Join/Build a community

We believe that motivation is critical to contributing effectively to AI ethics. Unfortunately, motivation is a scarce resource that gets depleted easily. This is why it seems crucial to take care of our motivation.

To do so, we encourage you to first form or join a community of people willing to ponder AI ethics on a regular basis. As a newcomer, you should probably focus first on getting to know the other members and making sure you get along with them. Community building is critical to motivation and to sustainable contribution to any endeavor.

Because of the high risk of backfire effects, it seems crucial to promote intellectual honesty Galef19 and to encourage addressing our own cognitive biases when discussing AI ethics. Promoting quality information and quality thinking seems like a very effective way to contribute indirectly to making algorithms robustly beneficial.

If you somehow lose interest in the technical aspects of AI ethics, note that you can still have a huge impact by participating in team building and by helping others network. Sometimes, just being there can go a long way toward fostering a community and sustaining individuals' motivation.