Robustly Beneficial group

Revision as of 10:30, 2 February 2020

The Robustly Beneficial group is an AI ethics reading group in Lausanne, Switzerland, started by Louis Faucon and Sergei Volodin. The group is now managed by Louis Faucon, El Mahdi El Mhamdi and Lê Nguyên Hoang. Every week, we discuss a paper relevant to AI ethics. Please feel free to ask to join.

Past papers

  • Experimental evidence of massive-scale emotional contagion through social networks. KGH14 RB3.
  • Recent Advances in Algorithmic High-Dimensional Robust Statistics. DiakonikolasKane19 RB2.
  • Algorithmic Accountability Reporting: On the Investigation of Black Boxes. Diakopoulos14 RB1.
  • Efficient and Thrifty Voting by Any Means Necessary, NeurIPS. MPSW19.
  • The Vulnerable World Hypothesis, Global Policy. Bostrom19.
  • Occam's razor is insufficient to infer the preferences of irrational agents, NeurIPS. ArmstrongMindermann18.
  • Supervising strong learners by amplifying weak experts. CSA18.
  • Embedded Agency. DemskiGarrabrant19.
  • Concrete Problems in AI Safety. AOSCSM16.
  • The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents, Minds and Machines. Bostrom12.
  • On the Limits of Recursively Self-Improving AGI, AGI. Yampolskiy15.
  • Can Intelligence Explode? Hutter12.
  • Risks from Learned Optimization in Advanced Machine Learning Systems. HMMSG19.
  • The Value Learning Problem, IJCAI. Soares16.

Candidate future papers

  • Why Philosophers Should Care About Computational Complexity, ECCC. Aaronson11.
  • Facebook language predicts depression in medical records, PNAS. ESMUC+18.
  • WeBuildAI: Participatory Framework for Algorithmic Governance, PACMHCI. LKKKY+19.
  • Exposure to opposing views on social media can increase political polarization, PNAS. BABBC+18.
  • Multi-armed Bandit Models for the Optimal Design of Clinical Trials: Benefits and Challenges, Statistical Science. VBW15.
  • The complexity of agreement, STOC. Aaronson05.
  • Reward Tampering Problems and Solutions in Reinforcement Learning. EverittHutter19.
  • AGI safety literature review, IJCAI. ELH18.
  • The global landscape of AI ethics guidelines, Nature. JIV19.
  • Tackling climate change with machine learning. RDKKL+19.