AI risks

From RB Wiki
Revision as of 10:34, 26 January 2020 by Lê Nguyên Hoang (talk | contribs) (→‎A list of risks)

Numerous AI risks have been listed in books such as Bostrom14, RDT15, ONeil16, Tegmark17, Lee18, Sharre18, Russell19 and HoangElmhamdi19FR.

A list of risks

They include cyberbullying HindujaPatchin10, fairness MMSLG19, privacy TCK19, increased inequalities Makridakis17, job displacement MartensTolan18, radicalization ROWAM20, political manipulation WoolleyHoward+18, misinformation WMCL19 VRA18, mute news (information drowned in the flood of information), information overload Roetzel19, anger BergerMilkman12, hate SchmidtWiegand17, geopolitical tensions, addiction TBB18 HawiSamaha16, inability to focus MICJS16, mental health ESMUC+18, loss of control OrseauArmstrong16, global (financial) crises through unexpected disruptions, resource depletion, autonomous weapons RHAV15, arms races Geist16, instrumental goals BensontilsenSoares16 and existential risk Yudkowsky08 Bostrom13.

While a lot of research already exists, even more research on AI risks seems desirable, especially in areas with huge stakes and great uncertainty that could be reduced by more data collection or better analysis (see optimal exploration).

Security mindset

It has been argued Yudkowsky17 that there is a lack of security mindset, especially regarding performant large-scale algorithms. The security mindset can be regarded as a focus on worst-case (or near-worst-case) scenarios, especially when they are not completely unlikely and pose huge risks. This is in sharp contrast with Facebook's old motto "move fast and break things" Taplin17, and more generally with today's agile software development. It is also in sharp contrast with current trial-and-error machine learning development, especially in light of Goodhart's law and the flaws of testing [work in progress].
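Goodhart's law ("when a measure becomes a target, it ceases to be a good measure") can be illustrated with a toy, purely hypothetical model: suppose a recommender tunes a "sensationalism" knob to maximize watch time (the proxy), while the quantity we actually care about also subtracts a hidden harm that grows with sensationalism. All functions and numbers below are made up for illustration.

```python
# Toy Goodhart's law illustration: optimizing a proxy metric
# (watch time) diverges from optimizing the true objective
# (watch time minus a hidden harm). All numbers are hypothetical.

def watch_time(s):
    # Proxy metric: grows with sensationalism s in [0, 1].
    return 1.0 + 2.0 * s

def harm(s):
    # Hidden cost, invisible to the proxy, growing faster at high s.
    return 3.0 * s ** 2

def true_value(s):
    return watch_time(s) - harm(s)

grid = [i / 100 for i in range(101)]
s_proxy = max(grid, key=watch_time)   # proxy optimization pushes s to 1.0
s_true = max(grid, key=true_value)    # true optimum of 1 + 2s - 3s^2 is s = 1/3

print(s_proxy, true_value(s_proxy))   # 1.0, true value 0.0
print(s_true, true_value(s_true))     # 0.33, true value about 1.33
```

The point of the sketch: the proxy-optimal setting (s = 1.0) achieves a strictly worse true value than the true optimum (s ≈ 0.33), even though the proxy was positively correlated with the objective at low s.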

More influential algorithms surely represent greater risks. Arguably, this is already evidenced by the case of YouTube. But crucially, we should not overfit on today's deployed algorithms when pondering the risks from tomorrow's algorithms. Recall that the rise of machine learning algorithms is quite recent, and very spectacular. The security mindset urges us to prepare for a similar, if not much faster, rate of progress in the coming years: not necessarily because this scenario is more likely, but because it is not completely unlikely and poses much greater risks.

It has been argued Bostrom18 that the security mindset is hugely neglected, especially given the current fast rate of discoveries and innovations. The paper also discusses global governance mechanisms that may greatly increase or reduce the risks caused by dangerous discoveries. This seems critical to AI governance.

Side effects

It is crucial to note that nearly all the above-mentioned AI risks are side effects of the large-scale deployment of algorithms (the main exception is autonomous weapons, which are explicitly designed to cause harm). In particular, AI risks mostly do not come from the malicious intent of some algorithm, software developer or company. They mostly seem to arise from the (sometimes intentional) tendency of algorithms, software developers or companies to neglect the side effects of their behaviors.

To better understand this problem, it is worth comparing it to climate change. Greenhouse gas emissions are not the intent of companies, car drivers or data centers. But they have become a problem because they were dismissed as being outside these actors' goals and responsibilities, sometimes intentionally, but often out of mere ignorance of the risks of such emissions.

The AI risk problem is similar. Unless we pay great attention to potential side effects and actively try to avoid them, the large-scale deployment of our algorithms will inevitably cause unintended, potentially deadly side effects, as illustrated by the empowerment of anti-vaccination propaganda (see YouTube).

Side effects of algorithms can also occur because of vulnerabilities of algorithms, typically to adversarial attacks GSS15 BMGS17 NeffNagi16.
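The vulnerability to adversarial perturbations can be sketched with a toy model in the spirit of GSS15's fast gradient sign method: nudge each input feature by a small amount in the direction of the sign of the loss gradient. The "classifier" below is a made-up logistic model with hand-picked weights, and the perturbation size is exaggerated for visibility; none of it comes from the cited papers.

```python
import math

# Hypothetical logistic-regression "classifier" with fixed weights.
w = [2.0, -1.0, 0.5]
b = 0.0

def predict(x):
    # Probability of class 1 under the logistic model.
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y, eps):
    # Fast-gradient-sign-style attack: the gradient of the
    # cross-entropy loss w.r.t. the input x is (p - y) * w, so each
    # feature is shifted by eps in the direction of its sign.
    p = predict(x)
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(g) for xi, g in zip(x, grad)]

x = [1.0, 1.0, 1.0]            # clean input, true label y = 1
print(predict(x))              # about 0.82: confidently class 1
x_adv = fgsm(x, y=1, eps=0.8)  # eps exaggerated for this toy model
print(predict(x_adv))          # about 0.21: the prediction flips
```

Even this linear toy shows the mechanism: because the perturbation is aligned with the loss gradient, a bounded change to every feature compounds into a large change in the model's output.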

Side effects were listed as the main concrete problems in AI safety by AOSCSM16. They are especially concerning for large-scale algorithms like social media recommender systems. And they may become even more so as such algorithms acquire increasingly performant planning capabilities, built upon much better models of their environments (see YouTube, human-level AI, AIXI).

In particular, algorithms with performant long-term planning capabilities pose risks of instrumental convergence: the pursuit of instrumental goals such as resource acquisition, which may endanger human activities or survival.

There seems to be a consensus that the only robust solution to AI safety and AI ethics is alignment, typically based on volition learning Yudkowsky04 and social choice aggregation NGAD+18. A lot more research in this direction is needed.