<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://robustlybeneficial.org/wiki/index.php?action=history&amp;feed=atom&amp;title=How_newcomers_can_contribute</id>
	<title>How newcomers can contribute - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://robustlybeneficial.org/wiki/index.php?action=history&amp;feed=atom&amp;title=How_newcomers_can_contribute"/>
	<link rel="alternate" type="text/html" href="https://robustlybeneficial.org/wiki/index.php?title=How_newcomers_can_contribute&amp;action=history"/>
	<updated>2026-04-28T13:33:08Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.34.0</generator>
	<entry>
		<id>https://robustlybeneficial.org/wiki/index.php?title=How_newcomers_can_contribute&amp;diff=79&amp;oldid=prev</id>
		<title>Lê Nguyên Hoang at 16:32, 22 January 2020</title>
		<link rel="alternate" type="text/html" href="https://robustlybeneficial.org/wiki/index.php?title=How_newcomers_can_contribute&amp;diff=79&amp;oldid=prev"/>
		<updated>2020-01-22T16:32:02Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;Revision as of 16:32, 22 January 2020&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l14&quot; &gt;Line 14:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 14:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;To do so, we encourage you first to form or join a community of people willing to ponder AI ethics on a regular basis. As a newcomer, you should probably first focus on getting to know the other people and on making sure you get along with them. Community building is critical to motivation and sustainable contribution to any endeavor.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;To do so, we encourage you first to form or join a community of people willing to ponder AI ethics on a regular basis. As a newcomer, you should probably first focus on getting to know the other people and on making sure you get along with them. Community building is critical to motivation and sustainable contribution to any endeavor.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Because of the high risk of [[backfire effect]], it seems crucial to promote [[intellectual honesty]] [https://www.youtube.com/watch?v=V_E9-7t8QMI Galef19] and to encourage addressing our own [[cognitive bias|cognitive biases]] when discussing AI ethics. Promoting quality information and thinking seems like a very effective way to contribute indirectly to making algorithms [[robustly beneficial]].&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;If you somehow lose interest in the technical aspects of AI ethics, note that you can still have a huge impact on AI ethics by participating in team building and helping others network. Sometimes, just being there can go a long way toward fostering a community and sustaining individuals' motivation.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;If you somehow lose interest in the technical aspects of AI ethics, note that you can still have a huge impact on AI ethics by participating in team building and helping others network. Sometimes, just being there can go a long way toward fostering a community and sustaining individuals' motivation.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Lê Nguyên Hoang</name></author>
		
	</entry>
	<entry>
		<id>https://robustlybeneficial.org/wiki/index.php?title=How_newcomers_can_contribute&amp;diff=75&amp;oldid=prev</id>
		<title>Lê Nguyên Hoang at 13:58, 22 January 2020</title>
		<link rel="alternate" type="text/html" href="https://robustlybeneficial.org/wiki/index.php?title=How_newcomers_can_contribute&amp;diff=75&amp;oldid=prev"/>
		<updated>2020-01-22T13:58:56Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;Revision as of 13:58, 22 January 2020&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l3&quot; &gt;Line 3:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 3:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Find out more! ==&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Find out more! ==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;If you are new to AI ethics, we suggest you first read our [[&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;AI risks&lt;/del&gt;]] page, &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;which gives general AI ethics challenges raised by algorithms, our &lt;/del&gt;[[&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;YouTube&lt;/del&gt;]] page&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;, which discusses the specific case of the YouTube algorithm, &lt;/del&gt;and &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;our &lt;/del&gt;[[&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;robustly beneficial|Robustly Beneficial&lt;/del&gt;]] page &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;which stresses the importance of robustness for AI ethics&lt;/del&gt;.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;If you are new to AI ethics, we suggest you first read our [[&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;robustly beneficial|Robustly Beneficial&lt;/ins&gt;]] page, [[&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;AI risks&lt;/ins&gt;]] page and [[&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;YouTube&lt;/ins&gt;]] page. 
&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;Please also &lt;/ins&gt;keep in mind that &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;these &lt;/ins&gt;pages barely scratch the surface of what ought to be understood to undertake &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;[[&lt;/ins&gt;robustly beneficial&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;]] &lt;/ins&gt;actions. AI ethics is very complicated and full of counterintuitive aspects &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;(see [[backfire effect]])&lt;/ins&gt;. Caution is advised before advocating radical actions or policies.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt; &lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;While these may be interesting introductions to AI ethics, it is important to &lt;/del&gt;keep in mind that &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;such &lt;/del&gt;pages barely scratch the surface of what ought to be understood to undertake robustly beneficial actions. AI ethics is very complicated and full of counterintuitive aspects. Caution is advised before advocating radical actions or policies.&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;To contribute to AI ethics in a robustly beneficial manner, we recommend that you inform yourself on the topic on a regular basis, for instance by following this Wiki, by listening to our [https://www.youtube.com/channel/UCgl_MmjatQif8juz3Lt6iPw Robustly Beneficial Podcast] or by reading Lê and Mahdi's book [https://laboutique.edpsciences.fr/produit/1107/9782759824304/Le%20fabuleux%20chantier HoangElmhamdi][https://scholar.google.ch/scholar?hl=en&amp;amp;as_sdt=0%2C5&amp;amp;q=Le+fabuleux+chantier%3A+Rendre+l%27intelligence+artificielle+robustement+b%C3%A9n%C3%A9fique&amp;amp;btnG= 19&amp;lt;sup&amp;gt;FR&amp;lt;/sup&amp;gt;] (English version is pending).&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;To contribute to AI ethics in a robustly beneficial manner, we recommend that you inform yourself on the topic on a regular basis, for instance by following this Wiki, by listening to our [https://www.youtube.com/channel/UCgl_MmjatQif8juz3Lt6iPw Robustly Beneficial Podcast] or by reading Lê and Mahdi's book [https://laboutique.edpsciences.fr/produit/1107/9782759824304/Le%20fabuleux%20chantier HoangElmhamdi][https://scholar.google.ch/scholar?hl=en&amp;amp;as_sdt=0%2C5&amp;amp;q=Le+fabuleux+chantier%3A+Rendre+l%27intelligence+artificielle+robustement+b%C3%A9n%C3%A9fique&amp;amp;btnG= 19&amp;lt;sup&amp;gt;FR&amp;lt;/sup&amp;gt;] (English version is pending).&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Lê Nguyên Hoang</name></author>
		
	</entry>
	<entry>
		<id>https://robustlybeneficial.org/wiki/index.php?title=How_newcomers_can_contribute&amp;diff=64&amp;oldid=prev</id>
		<title>Lê Nguyên Hoang at 09:34, 22 January 2020</title>
		<link rel="alternate" type="text/html" href="https://robustlybeneficial.org/wiki/index.php?title=How_newcomers_can_contribute&amp;diff=64&amp;oldid=prev"/>
		<updated>2020-01-22T09:34:31Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;Revision as of 09:34, 22 January 2020&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l3&quot; &gt;Line 3:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 3:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Find out more! ==&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Find out more! ==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;If you are new to AI ethics, we suggest you first read our [[AI risks]] page, which gives general AI ethics challenges raised by algorithms, &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;or &lt;/del&gt;our [[YouTube]] page, which discusses the specific case of the YouTube algorithm.  &lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;If you are new to AI ethics, we suggest you first read our [[AI risks]] page, which gives general AI ethics challenges raised by algorithms, our [[YouTube]] page, which discusses the specific case of the YouTube algorithm&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;, and our [[robustly beneficial|Robustly Beneficial]] page which stresses the importance of robustness for AI ethics&lt;/ins&gt;.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;While these may be interesting introductions to AI ethics, it is important to keep in mind that such pages barely scratch the surface of what ought to be understood to undertake robustly beneficial actions. AI ethics is very complicated and full of counterintuitive aspects. Caution is advised before advocating radical actions or policies.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;While these may be interesting introductions to AI ethics, it is important to keep in mind that such pages barely scratch the surface of what ought to be understood to undertake robustly beneficial actions. AI ethics is very complicated and full of counterintuitive aspects. Caution is advised before advocating radical actions or policies.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Lê Nguyên Hoang</name></author>
		
	</entry>
	<entry>
		<id>https://robustlybeneficial.org/wiki/index.php?title=How_newcomers_can_contribute&amp;diff=56&amp;oldid=prev</id>
		<title>Lê Nguyên Hoang: Created page with &quot;This page suggests ideas for newcomers to contribute to AI ethics. Our main suggestion is to familiarize oneself with the most important problems with the domain. We also prop...&quot;</title>
		<link rel="alternate" type="text/html" href="https://robustlybeneficial.org/wiki/index.php?title=How_newcomers_can_contribute&amp;diff=56&amp;oldid=prev"/>
		<updated>2020-01-21T20:21:50Z</updated>

		<summary type="html">&lt;p&gt;Created page with &amp;quot;This page suggests ideas for newcomers to contribute to AI ethics. Our main suggestion is to familiarize oneself with the most important problems with the domain. We also prop...&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;This page suggests ideas for newcomers to contribute to AI ethics. Our main suggestion is to familiarize oneself with the most important problems with the domain. We also propose more actionable plans.&lt;br /&gt;
&lt;br /&gt;
== Find out more! ==&lt;br /&gt;
&lt;br /&gt;
If you are new to AI ethics, we suggest you first read our [[AI risks]] page, which gives general AI ethics challenges raised by algorithms, or our [[YouTube]] page, which discusses the specific case of the YouTube algorithm. &lt;br /&gt;
&lt;br /&gt;
While these may be interesting introductions to AI ethics, it is important to keep in mind that such pages barely scratch the surface of what ought to be understood to undertake robustly beneficial actions. AI ethics is very complicated and full of counterintuitive aspects. Caution is advised before advocating radical actions or policies.&lt;br /&gt;
&lt;br /&gt;
To contribute to AI ethics in a robustly beneficial manner, we recommend that you inform yourself on the topic on a regular basis, for instance by following this Wiki, by listening to our [https://www.youtube.com/channel/UCgl_MmjatQif8juz3Lt6iPw Robustly Beneficial Podcast] or by reading Lê and Mahdi's book [https://laboutique.edpsciences.fr/produit/1107/9782759824304/Le%20fabuleux%20chantier HoangElmhamdi][https://scholar.google.ch/scholar?hl=en&amp;amp;as_sdt=0%2C5&amp;amp;q=Le+fabuleux+chantier%3A+Rendre+l%27intelligence+artificielle+robustement+b%C3%A9n%C3%A9fique&amp;amp;btnG= 19&amp;lt;sup&amp;gt;FR&amp;lt;/sup&amp;gt;] (English version is pending).&lt;br /&gt;
&lt;br /&gt;
In addition, we strongly recommend the books by [https://www.amazon.com/Life-3-0-Being-Artificial-Intelligence/dp/1101946598 Tegmark17] [https://www.amazon.co.uk/Human-Compatible-AI-Problem-Control/dp/0241335205 Russell19] [https://www.amazon.com/Weapons-Math-Destruction-Increases-Inequality/dp/0553418815 ONeil16] [https://www.amazon.com/AI-Superpowers-China-Silicon-Valley/dp/132854639X Lee18] [https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111/ Bostrom14], the YouTube channels of [https://www.youtube.com/channel/UCLB7AzTwc6VFZrBsO2ucBMg Robert Miles] and [https://www.youtube.com/user/keeroyz Two Minute Papers], and the podcasts [https://80000hours.org/podcast/ 80,000 Hours], [https://humanetech.com/podcast/ Your Undivided Attention], [https://futureoflife.org/ai-alignment-podcast/ Future of Life's AI Alignment], [https://lexfridman.com/ai/ Lex Fridman's MIT AGI Podcast] and [https://www.flashforwardpod.com/ Flashforward].&lt;br /&gt;
&lt;br /&gt;
== Join/Build a community ==&lt;br /&gt;
&lt;br /&gt;
We believe that motivation is critical to contribute effectively to AI ethics. Unfortunately, motivation is a scarce resource that gets depleted easily. This is why it seems crucial to take care of our motivation.&lt;br /&gt;
&lt;br /&gt;
To do so, we encourage you first to form or join a community of people willing to ponder AI ethics on a regular basis. As a newcomer, you should probably first focus on getting to know the other people and on making sure you get along with them. Community building is critical to motivation and sustainable contribution to any endeavor.&lt;br /&gt;
&lt;br /&gt;
If you somehow lose interest in the technical aspects of AI ethics, note that you can still have a huge impact on AI ethics by participating in team building and helping others network. Sometimes, just being there can go a long way toward fostering a community and sustaining individuals' motivation.&lt;/div&gt;</summary>
		<author><name>Lê Nguyên Hoang</name></author>
		
	</entry>
</feed>