<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://robustlybeneficial.org/wiki/index.php?action=history&amp;feed=atom&amp;title=Robustly_beneficial</id>
	<title>Robustly beneficial - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://robustlybeneficial.org/wiki/index.php?action=history&amp;feed=atom&amp;title=Robustly_beneficial"/>
	<link rel="alternate" type="text/html" href="https://robustlybeneficial.org/wiki/index.php?title=Robustly_beneficial&amp;action=history"/>
	<updated>2026-04-29T10:42:23Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.34.0</generator>
	<entry>
		<id>https://robustlybeneficial.org/wiki/index.php?title=Robustly_beneficial&amp;diff=103&amp;oldid=prev</id>
		<title>Lê Nguyên Hoang: /* Robustly beneficial to distributional shift */</title>
		<link rel="alternate" type="text/html" href="https://robustlybeneficial.org/wiki/index.php?title=Robustly_beneficial&amp;diff=103&amp;oldid=prev"/>
		<updated>2020-01-26T09:03:50Z</updated>

		<summary type="html">&lt;p&gt;&lt;span dir=&quot;auto&quot;&gt;&lt;span class=&quot;autocomment&quot;&gt;Robustly beneficial to distributional shift&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;Revision as of 09:03, 26 January 2020&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l27&quot; &gt;Line 27:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 27:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Robustly beneficial to distributional shift ==&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Robustly beneficial to distributional shift ==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Another difficulty for both humans and algorithms is that we are often trained in a given environment. Yet, the environment we live in may be different. Worse, our environment is always changing, arguably more so these days than ever in human history. This phenomenon is known as [[distributional shift]] [https://papers.nips.cc/paper/9547-can-you-trust-your-models-uncertainty-evaluating-predictive-uncertainty-under-dataset-shift.pdf OFRNS+][https://&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;scholar&lt;/del&gt;.&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;google.ch&lt;/del&gt;/&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;scholar?hl=en&amp;amp;as_sdt=0%2C5&amp;amp;as_ylo=2019&amp;amp;q=Can+You+Trust+Your+Model%E2%80%99s+Uncertainty%3F+Evaluating+Predictive+Uncertainty+Under+Dataset+Shift&amp;amp;btnG= &lt;/del&gt;19].&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Another difficulty for both humans and algorithms is that we are often trained in a given environment. Yet, the environment we live in may be different. Worse, our environment is always changing, arguably more so these days than ever in human history. 
This phenomenon is known as [[distributional shift]] [https://papers.nips.cc/paper/9547-can-you-trust-your-models-uncertainty-evaluating-predictive-uncertainty-under-dataset-shift.pdf OFRNS+][https://&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;dblp&lt;/ins&gt;.&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;org/rec/bibtex/conf/nips&lt;/ins&gt;/&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;SnoekOFLNSDRN19 &lt;/ins&gt;19].&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Unfortunately, today's algorithms are hardly robust to [[distributional shift]], as they are often trained in limited environments which are very different from, say, the YouTube ecosystem. There may be some hope though that, as algorithms become more and more sophisticated and train with larger and larger datasets, they may learn to fit deeper patterns rather than [https://tylervigen.com/spurious-correlations spurious correlations].  &lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Unfortunately, today's algorithms are hardly robust to [[distributional shift]], as they are often trained in limited environments which are very different from, say, the YouTube ecosystem. There may be some hope though that, as algorithms become more and more sophisticated and train with larger and larger datasets, they may learn to fit deeper patterns rather than [https://tylervigen.com/spurious-correlations spurious correlations].&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Robustly beneficial to malicious entities ==&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Robustly beneficial to malicious entities ==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Lê Nguyên Hoang</name></author>
		
	</entry>
	<entry>
		<id>https://robustlybeneficial.org/wiki/index.php?title=Robustly_beneficial&amp;diff=102&amp;oldid=prev</id>
		<title>Lê Nguyên Hoang at 09:02, 26 January 2020</title>
		<link rel="alternate" type="text/html" href="https://robustlybeneficial.org/wiki/index.php?title=Robustly_beneficial&amp;diff=102&amp;oldid=prev"/>
		<updated>2020-01-26T09:02:47Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;Revision as of 09:02, 26 January 2020&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l9&quot; &gt;Line 9:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 9:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Robustly beneficial to biased data ==&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Robustly beneficial to biased data ==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Both humans and algorithms strongly rely on data for learning and decision-making. Unfortunately, such collected data must be assumed to be [[algorithmic bias|biased]]. Typically, data that are easier to collect will often be over-represented in humans' and algorithms' learning datasets. Additional biases may be due to the fact that the data that humans and algorithms learn from has been pre-processed by other humans or algorithms, whose data processing is inevitably itself biased (though hopefully biased only towards being more informative).&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Both humans and algorithms strongly rely on data for learning and decision-making. Unfortunately, such collected data must be assumed to be [[algorithmic bias|biased]&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;] [https://arxiv.org/pdf/1908.09635 MMSLG][https://scholar.google.ch/scholar?hl=en&amp;amp;as_sdt=0%2C5&amp;amp;as_ylo=2019&amp;amp;q=A+survey+on+bias+and+fairness+in+machine+learning&amp;amp;btnG= 19&lt;/ins&gt;]. Typically, data that are easier to collect will often be over-represented in humans' and algorithms' learning datasets. Additional biases may be due to the fact that the data that humans and algorithms learn from has been pre-processed by other humans or algorithms, whose data processing is inevitably itself biased (though hopefully biased only towards being more informative).&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[Meta-data]] such as [[data certification]] are probably going to be critical to design algorithms that are robust to biased data. Perhaps more importantly, [[alignment]] could be the only way to make sure that the decision-making of algorithms does not repeat undesirable historical patterns.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[Meta-data]] such as [[data certification]] are probably going to be critical to design algorithms that are robust to biased data. Perhaps more importantly, [[alignment]] could be the only way to make sure that the decision-making of algorithms does not repeat undesirable historical patterns.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l27&quot; &gt;Line 27:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 27:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Robustly beneficial to distributional shift ==&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Robustly beneficial to distributional shift ==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Another difficulty for both humans and algorithms is that we are often trained in a given environment. Yet, the environment we live in may be different. Worse, our environment is always changing, arguably more so these days than ever in human history. This phenomenon is known as [[distributional shift]].&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Another difficulty for both humans and algorithms is that we are often trained in a given environment. Yet, the environment we live in may be different. Worse, our environment is always changing, arguably more so these days than ever in human history. This phenomenon is known as [[distributional shift]&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;] [https://papers.nips.cc/paper/9547-can-you-trust-your-models-uncertainty-evaluating-predictive-uncertainty-under-dataset-shift.pdf OFRNS+][https://scholar.google.ch/scholar?hl=en&amp;amp;as_sdt=0%2C5&amp;amp;as_ylo=2019&amp;amp;q=Can+You+Trust+Your+Model%E2%80%99s+Uncertainty%3F+Evaluating+Predictive+Uncertainty+Under+Dataset+Shift&amp;amp;btnG= 19&lt;/ins&gt;].&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Unfortunately, today's algorithms are hardly robust to [[distributional shift]], as they are often trained in limited environments which are very different from, say, the YouTube ecosystem. There may be some hope though that, as algorithms become more and more sophisticated and train with larger and larger datasets, they may learn to fit deeper patterns rather than [https://tylervigen.com/spurious-correlations spurious correlations].  &lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Unfortunately, today's algorithms are hardly robust to [[distributional shift]], as they are often trained in limited environments which are very different from, say, the YouTube ecosystem. There may be some hope though that, as algorithms become more and more sophisticated and train with larger and larger datasets, they may learn to fit deeper patterns rather than [https://tylervigen.com/spurious-correlations spurious correlations].  &lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Lê Nguyên Hoang</name></author>
		
	</entry>
	<entry>
		<id>https://robustlybeneficial.org/wiki/index.php?title=Robustly_beneficial&amp;diff=65&amp;oldid=prev</id>
		<title>Lê Nguyên Hoang at 09:36, 22 January 2020</title>
		<link rel="alternate" type="text/html" href="https://robustlybeneficial.org/wiki/index.php?title=Robustly_beneficial&amp;diff=65&amp;oldid=prev"/>
		<updated>2020-01-22T09:36:33Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;Revision as of 09:36, 22 January 2020&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l5&quot; &gt;Line 5:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 5:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Both humans and algorithms are victims of errors, sometimes called &amp;lt;em&amp;gt;bugs&amp;lt;/em&amp;gt;. A robustly beneficial algorithm must remain beneficial, even if there were errors in its implementation or in its execution.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Both humans and algorithms are victims of errors, sometimes called &amp;lt;em&amp;gt;bugs&amp;lt;/em&amp;gt;. A robustly beneficial algorithm must remain beneficial, even if there were errors in its implementation or in its execution.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Program [[verification]] and [[distributed learning|distributed crash tolerance]] are critical to guarantee such robustness, though they are extremely hard to implement for large-scale [[machine learning]] systems. Better understanding the tradeoff between robustness to errors and efficiency is a critical aspect of AI ethics.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Program [[verification]] and [[distributed learning|distributed crash tolerance]] are critical to guarantee such robustness, though they are extremely hard to implement for large-scale [[machine learning]] systems&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;. Moreover, learning raises other concerns like [[reward hacking]]&lt;/ins&gt;. Better understanding the tradeoff between robustness to errors and efficiency is a critical aspect of AI ethics.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Robustly beneficial to biased data ==&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Robustly beneficial to biased data ==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Lê Nguyên Hoang</name></author>
		
	</entry>
	<entry>
		<id>https://robustlybeneficial.org/wiki/index.php?title=Robustly_beneficial&amp;diff=63&amp;oldid=prev</id>
		<title>Lê Nguyên Hoang: Created page with &quot;This wiki argues that it is central to AI ethics to aim at being &lt;strong&gt;robustly beneficial&lt;/strong&gt;. In particular, robustness refers to taking into account all the subtleti...&quot;</title>
		<link rel="alternate" type="text/html" href="https://robustlybeneficial.org/wiki/index.php?title=Robustly_beneficial&amp;diff=63&amp;oldid=prev"/>
		<updated>2020-01-22T09:33:40Z</updated>

		<summary type="html">&lt;p&gt;Created page with &amp;quot;This wiki argues that it is central to AI ethics to aim at being &amp;lt;strong&amp;gt;robustly beneficial&amp;lt;/strong&amp;gt;. In particular, robustness refers to taking into account all the subtleti...&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;This wiki argues that it is central to AI ethics to aim at being &amp;lt;strong&amp;gt;robustly beneficial&amp;lt;/strong&amp;gt;. In particular, robustness refers to taking into account all the subtleties of decision-making that both humans and algorithms too often neglect.&lt;br /&gt;
&lt;br /&gt;
== Robustly beneficial to errors ==&lt;br /&gt;
&lt;br /&gt;
Both humans and algorithms are victims of errors, sometimes called &amp;lt;em&amp;gt;bugs&amp;lt;/em&amp;gt;. A robustly beneficial algorithm must remain beneficial, even if there were errors in its implementation or in its execution.&lt;br /&gt;
&lt;br /&gt;
Program [[verification]] and [[distributed learning|distributed crash tolerance]] are critical to guarantee such robustness, though they are extremely hard to implement for large-scale [[machine learning]] systems. Better understanding the tradeoff between robustness to errors and efficiency is a critical aspect of AI ethics.&lt;br /&gt;
&lt;br /&gt;
== Robustly beneficial to biased data ==&lt;br /&gt;
&lt;br /&gt;
Both humans and algorithms strongly rely on data for learning and decision-making. Unfortunately, such collected data must be assumed to be [[algorithmic bias|biased]]. Typically, data that are easier to collect will often be over-represented in humans' and algorithms' learning datasets. Additional biases may be due to the fact that the data that humans and algorithms learn from has been pre-processed by other humans or algorithms, whose data processing is inevitably itself biased (though hopefully biased only towards being more informative).&lt;br /&gt;
&lt;br /&gt;
[[Meta-data]] such as [[data certification]] are probably going to be critical to design algorithms that are robust to biased data. Perhaps more importantly, [[alignment]] could be the only way to make sure that the decision-making of algorithms does not repeat undesirable historical patterns.&lt;br /&gt;
&lt;br /&gt;
== Robustly beneficial to flawed world model ==&lt;br /&gt;
&lt;br /&gt;
Both humans and algorithms strongly rely on limited data to infer their world model. Given this, the world model cannot be complete. As an example, it is impossible for them to know the exact number of living humans at a given instant. All must acknowledge [[epistemic uncertainty]] on their world model.&lt;br /&gt;
&lt;br /&gt;
As a result, AI ethics must address decision-making under [[epistemic uncertainty]]. Most importantly, it needs to take this uncertainty into account to avoid making decisions based on a flawed world model, which may be greatly counter-productive, if not catastrophic, in the actual world. [[Bayesianism|Bayesian]] principles and [[second opinion querying]] will likely be critical to this.&lt;br /&gt;
&lt;br /&gt;
== Robustly beneficial to unforeseen side effects ==&lt;br /&gt;
&lt;br /&gt;
When making decisions, both humans and algorithms have to neglect features of their world model, because their world model may be too complex to be analyzed. This is particularly relevant for fast decision-making, say in days for important human decision-making, or in milliseconds in the case of recommendations by the [[YouTube]] algorithm. This constraint usually motivates us to neglect unforeseen side effects, which is a leading concern for [[AI risks]].&lt;br /&gt;
&lt;br /&gt;
This is particularly worrying in complex interacting environments such as social media, where tweaks of recommender algorithms may change users' beliefs and preferences in unforeseen ways (see [[backfire effect]]). This concern is highlighted by [[Goodhart's law]]. Being robustly beneficial to unforeseen side effects may be today's greatest challenge in AI ethics, and seems unfortunately very neglected.&lt;br /&gt;
&lt;br /&gt;
== Robustly beneficial to distributional shift ==&lt;br /&gt;
&lt;br /&gt;
Another difficulty for both humans and algorithms is that we are often trained in a given environment. Yet, the environment we live in may be different. Worse, our environment is always changing, arguably more so these days than ever in human history. This phenomenon is known as [[distributional shift]].&lt;br /&gt;
&lt;br /&gt;
Unfortunately, today's algorithms are hardly robust to [[distributional shift]], as they are often trained in limited environments which are very different from, say, the YouTube ecosystem. There may be some hope though that, as algorithms become more and more sophisticated and train with larger and larger datasets, they may learn to fit deeper patterns rather than [https://tylervigen.com/spurious-correlations spurious correlations]. &lt;br /&gt;
&lt;br /&gt;
== Robustly beneficial to malicious entities ==&lt;br /&gt;
&lt;br /&gt;
Still another difficulty occurs when humans and algorithms become more and more influential. At some point, we have to expect malicious entities to try to hack humans and algorithms for their benefit, or sometimes just because they want to destroy the most influential entities. This is definitely already occurring for the most influential algorithms, like [[YouTube]], Google or Facebook, through [https://www.wired.co.uk/article/youtube-pedophile-videos-advertising adversarial attacks on pedophile moderation algorithms], SEO optimization or [[misinformation]]-based targeted political campaigns.&lt;br /&gt;
&lt;br /&gt;
There has been recent progress in understanding [[adversarial attacks]], typically by designing [[robust statistics]]. But a lot more work in this direction is needed.&lt;br /&gt;
&lt;br /&gt;
== Robustly beneficial to moral uncertainty ==&lt;br /&gt;
&lt;br /&gt;
Finally, one last important robustness requirement is robustness to moral uncertainty. Given humans' disagreements on what is desirable, for instance in terms of hate speech moderation, it seems crucial to acknowledge that we don't really know what algorithms should aim to do. Having said this, it seems just as important to acknowledge that we do have a lot of common ground, say in terms of murder video moderation on YouTube. &lt;br /&gt;
&lt;br /&gt;
There have been exciting recent developments in [[inverse reinforcement learning]] and [[social choice]] to better learn what users prefer and to aggregate diverging views. But a lot more research on designing practical reliable algorithms and on the [[interpretability]] of such algorithms seems needed.&lt;br /&gt;
&lt;br /&gt;
Also, it is noteworthy that moral uncertainty is far from limited to interpersonal disagreements. Humans often have a hard time articulating their preferences. And even when they do, such revealed preferences are often full of inconsistencies because of [[cognitive bias]]. In fact, our future selves often disagree with our present selves' preferences. Robustness to moral uncertainty should also take this into account. The research on [[volition]] tries to address this issue, but it has barely begun. It seems urgent to kickstart it effectively.&lt;/div&gt;</summary>
		<author><name>Lê Nguyên Hoang</name></author>
		
	</entry>
</feed>