Robustly Beneficial group

The Robustly Beneficial group is an AI ethics group, started by Louis Faucon and Sergei Volodin, in Lausanne, Switzerland. The group is now managed by Louis Faucon, El Mahdi El Mhamdi and Lê Nguyên Hoang. Every week, we discuss a paper relevant to AI ethics. Please feel free to ask to join.

== Past papers ==

Experimental evidence of massive-scale emotional contagion through social networks. [https://www.pnas.org/content/pnas/111/24/8788.full.pdf KGH][https://scholar.google.ch/scholar?hl=en&as_sdt=0%2C5&q=Experimental+evidence+of+massive-scale+emotional+contagion+through+social+networks&btnG= 14] [https://www.youtube.com/watch?v=gQHvTow91FY RB3].
 
Recent Advances in Algorithmic High-Dimensional Robust Statistics. [https://arxiv.org/pdf/1911.05911 DiakonikolasKane][https://dblp.org/rec/bibtex/journals/corr/abs-1911-05911 19] [https://www.youtube.com/watch?v=QguWgfGsG-k RB2].
 
Algorithmic Accountability Reporting: On the Investigation of Black Boxes. [https://academiccommons.columbia.edu/doi/10.7916/D8ZK5TW2 Diakopoulos][https://scholar.google.ch/scholar?hl=en&as_sdt=0%2C5&q=Algorithmic+Accountability+Reporting%3A+On+the+Investigation+of+Black+Boxes&btnG= 14] [https://www.youtube.com/watch?v=WWbw4cla2jw RB1].
 
Efficient and Thrifty Voting by Any Means Necessary, NeurIPS. [http://papers.nips.cc/paper/8939-efficient-and-thrifty-voting-by-any-means-necessary.pdf MPSW][https://dblp.org/rec/bibtex/conf/nips/MandalP0W19 19].
 
The Vulnerable World Hypothesis, Global Policy. [https://nickbostrom.com/papers/vulnerable.pdf Bostrom][https://scholar.google.ch/scholar?hl=en&as_sdt=0%2C5&q=The+Vulnerable+World+Hypothesis&btnG= 19].
 
Occam's razor is insufficient to infer the preferences of irrational agents, NeurIPS. [https://arxiv.org/pdf/1712.05812 ArmstrongMindermann][https://dblp.org/rec/bibtex/conf/nips/ArmstrongM18 18].
 
Supervising strong learners by amplifying weak experts. [https://arxiv.org/pdf/1810.08575 CSA][https://dblp.org/rec/bibtex/journals/corr/abs-1810-08575 18].
 
Embedded Agency. [https://arxiv.org/pdf/1902.09469.pdf DemskiGarrabrant][https://dblp.org/rec/bibtex/journals/corr/abs-1902-09469 19].
 
Concrete Problems in AI Safety. [https://arxiv.org/pdf/1606.06565 AOSCSM][https://dblp.org/rec/bibtex/journals/corr/AmodeiOSCSM16 16].
 
The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents, Minds and Machines. [https://www.nickbostrom.com/superintelligentwill.pdf Bostrom][https://scholar.google.ch/scholar?hl=en&as_sdt=0%2C5&q=THE+SUPERINTELLIGENT+WILL%3A+MOTIVATION+AND+INSTRUMENTAL+RATIONALITY+IN+ADVANCED+ARTIFICIAL+AGENTS&btnG= 12].
 
On the Limits of Recursively Self-Improving AGI, AGI. [https://link.springer.com/content/pdf/10.1007%2F978-3-319-21365-1.pdf Yampolskiy][https://dblp.org/rec/bibtex/conf/agi/Yampolskiy15b 15].
 
Can Intelligence Explode? [http://www.hutter1.net/publ/singularity.pdf Hutter][https://dblp.org/rec/bibtex/journals/corr/abs-1202-6177 12].
 
Risks from Learned Optimization in Advanced Machine Learning Systems. [https://arxiv.org/pdf/1906.01820.pdf HMMSG][https://dblp.org/rec/bibtex/journals/corr/abs-1906-01820 19].

The Value Learning Problem, IJCAI. [https://intelligence.org/files/ValueLearningProblem.pdf Soares][https://scholar.google.ch/scholar?hl=en&as_sdt=0%2C5&q=The+Value+Learning+Problem+soares&btnG= 16].

== Candidate future papers ==

Why Philosophers Should Care About Computational Complexity, ECCC. [https://www.scottaaronson.com/papers/philos.pdf Aaronson][https://dblp.org/rec/bibtex/journals/eccc/Aaronson11b 11].

Facebook language predicts depression in medical records, PNAS. [https://www.pnas.org/content/pnas/115/44/11203.full.pdf ESMUC+][https://scholar.google.ch/scholar?hl=en&as_sdt=0%2C5&q=Facebook+language+predicts+depression+in+medical+records&btnG= 18].

WeBuildAI: Participatory Framework for Algorithmic Governance, PACMHCI. [https://www.cs.cmu.edu/~akahng/papers/webuildai.pdf LKKKY+][https://dblp.org/rec/bibtex/journals/pacmhci/LeeKKKYCSNLPP19 19].

Exposure to opposing views on social media can increase political polarization, PNAS. [https://www.pnas.org/content/pnas/115/37/9216.full.pdf BABBC+][https://scholar.google.ch/scholar?hl=en&as_sdt=0%2C5&q=Exposure+to+opposing+views+on+social+media+can+increase+political+polarization&btnG= 18].

Multi-armed Bandit Models for the Optimal Design of Clinical Trials: Benefits and Challenges, Statistical Science. [https://arxiv.org/pdf/1507.08025.pdf VBW][https://scholar.google.ch/scholar?hl=en&as_sdt=0%2C5&q=Multi-armed+Bandit+Models+for+the+Optimal+Design+of+Clinical+Trials%3A+Benefits+and+Challenges&btnG= 15].

The complexity of agreement, STOC. [https://dl.acm.org/doi/pdf/10.1145/1060590.1060686 Aaronson][https://dblp.org/rec/bibtex/conf/stoc/Aaronson05 05].

Reward Tampering Problems and Solutions in Reinforcement Learning. [https://arxiv.org/pdf/1908.04734.pdf EverittHutter][https://scholar.google.ch/scholar?hl=en&as_sdt=0%2C5&q=Reward+Tampering+Problems+and+Solutions+in+Reinforcement+Learning&btnG= 19].

AGI safety literature review, IJCAI. [https://arxiv.org/pdf/1805.01109 ELH][https://dblp.org/rec/bibtex/conf/ijcai/EverittLH18 18].

The global landscape of AI ethics guidelines, Nature Machine Intelligence. [https://www.nature.com/articles/s42256-019-0088-2 JIV][https://scholar.google.ch/scholar?hl=en&as_sdt=0%2C5&q=The+global+landscape+of+AI+ethics+guidelines&btnG= 19].

Tackling climate change with machine learning. [https://arxiv.org/pdf/1906.05433.pdf RDKKL+][https://scholar.google.ch/scholar?hl=en&as_sdt=0%2C5&q=Tackling+climate+change+with+machine+learning&btnG= 19].
