Robustly Beneficial group

From RB Wiki

Revision as of 10:29, 2 February 2020

The Robustly Beneficial group is an AI ethics group in Lausanne, Switzerland, started by Louis Faucon and Sergei Volodin. The group is now managed by Louis Faucon, El Mahdi El Mhamdi and Lê Nguyên Hoang. Every week, we discuss a paper relevant to AI ethics. Please feel free to ask to join.

== Past papers ==

* Experimental evidence of massive-scale emotional contagion through social networks. [https://www.pnas.org/content/pnas/111/24/8788.full.pdf KGH][https://scholar.google.ch/scholar?hl=en&as_sdt=0%2C5&q=Experimental+evidence+of+massive-scale+emotional+contagion+through+social+networks&btnG= 14] [https://www.youtube.com/watch?v=gQHvTow91FY RB3].
* Recent Advances in Algorithmic High-Dimensional Robust Statistics. [https://arxiv.org/pdf/1911.05911 DiakonikolasKane][https://dblp.org/rec/bibtex/journals/corr/abs-1911-05911 19] [https://www.youtube.com/watch?v=QguWgfGsG-k RB2].
* Algorithmic Accountability Reporting: On the Investigation of Black Boxes. [https://academiccommons.columbia.edu/doi/10.7916/D8ZK5TW2 Diakopoulos][https://scholar.google.ch/scholar?hl=en&as_sdt=0%2C5&q=Algorithmic+Accountability+Reporting%3A+On+the+Investigation+of+Black+Boxes&btnG= 14] [https://www.youtube.com/watch?v=WWbw4cla2jw RB1].
* Efficient and Thrifty Voting by Any Means Necessary, NeurIPS. [http://papers.nips.cc/paper/8939-efficient-and-thrifty-voting-by-any-means-necessary.pdf MPSW][https://dblp.org/rec/bibtex/conf/nips/MandalP0W19 19].
* The Vulnerable World Hypothesis, Global Policy. [https://nickbostrom.com/papers/vulnerable.pdf Bostrom][https://scholar.google.ch/scholar?hl=en&as_sdt=0%2C5&q=The+Vulnerable+World+Hypothesis&btnG= 19].
* Occam's razor is insufficient to infer the preferences of irrational agents, NeurIPS. [https://arxiv.org/pdf/1712.05812 ArmstrongMindermann][https://dblp.org/rec/bibtex/conf/nips/ArmstrongM18 18].
* Supervising strong learners by amplifying weak experts. [https://arxiv.org/pdf/1810.08575 CSA][https://dblp.org/rec/bibtex/journals/corr/abs-1810-08575 18].
* Embedded Agency. [https://arxiv.org/pdf/1902.09469.pdf DemskiGarrabrant][https://dblp.org/rec/bibtex/journals/corr/abs-1902-09469 19].
* Concrete Problems in AI Safety. [https://arxiv.org/pdf/1606.06565 AOSCSM][https://dblp.org/rec/bibtex/journals/corr/AmodeiOSCSM16 16].
* The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents, Minds and Machines. [https://www.nickbostrom.com/superintelligentwill.pdf Bostrom][https://scholar.google.ch/scholar?hl=en&as_sdt=0%2C5&q=THE+SUPERINTELLIGENT+WILL%3A+MOTIVATION+AND+INSTRUMENTAL+RATIONALITY+IN+ADVANCED+ARTIFICIAL+AGENTS&btnG= 12].
* On the Limits of Recursively Self-Improving AGI, AGI. [https://link.springer.com/content/pdf/10.1007%2F978-3-319-21365-1.pdf Yampolskiy][https://dblp.org/rec/bibtex/conf/agi/Yampolskiy15b 15].
* Can Intelligence Explode? [http://www.hutter1.net/publ/singularity.pdf Hutter][https://dblp.org/rec/bibtex/journals/corr/abs-1202-6177 12].
* Risks from Learned Optimization in Advanced Machine Learning Systems. [https://arxiv.org/pdf/1906.01820.pdf HMMSG][https://dblp.org/rec/bibtex/journals/corr/abs-1906-01820 19].
* The Value Learning Problem, IJCAI. [https://intelligence.org/files/ValueLearningProblem.pdf Soares][https://scholar.google.ch/scholar?hl=en&as_sdt=0%2C5&q=The+Value+Learning+Problem+soares&btnG= 16].


== Candidate future papers ==

* Why Philosophers Should Care About Computational Complexity, ECCC. Aaronson11.
* Facebook language predicts depression in medical records, PNAS. ESMUC+18.
* WeBuildAI: Participatory Framework for Algorithmic Governance, PACMHCI. LKKKY+19.
* Exposure to opposing views on social media can increase political polarization, PNAS. BABBC+18.
* Multi-armed Bandit Models for the Optimal Design of Clinical Trials: Benefits and Challenges, Statistical Science. VBW15.
* The complexity of agreement, STOC. [1]05.
* Reward Tampering Problems and Solutions in Reinforcement Learning. EverittHutter19.
* AGI safety literature review, IJCAI. ELH[2].
* The global landscape of AI ethics guidelines, Nature. JIV19.
* Tackling climate change with machine learning. RDKKL+19.