User contributions
From RB Wiki
- 20:52, 10 March 2020 diff hist +248 m Robustly Beneficial group →Past papers current
- 20:46, 10 March 2020 diff hist 0 m Robustly Beneficial group
- 10:58, 4 March 2020 diff hist +24 m ABCDE roadmap →Motivation and justification current
- 11:44, 2 March 2020 diff hist +421 m Transformer current
- 11:41, 2 March 2020 diff hist +101 N Transformer Created page with "Transformers are learning models that exhibit impressive performance at natural language processing."
- 11:40, 2 March 2020 diff hist +12 m Knowledge representation →Transformers current
- 11:40, 2 March 2020 diff hist +353 N Knowledge representation Created page with "Knowledge representation is the problem of algorithmically encoding global and common-sense knowledge about the world. == Knowledge graph == == Transformers == [https://www..."
- 11:37, 2 March 2020 diff hist +32 m Welcome to the Robustly Beneficial Wiki →How to solve AI ethics (hopefully) current
- 19:09, 26 February 2020 diff hist +165 m Overfitting →Details current
- 19:07, 26 February 2020 diff hist +17 m Overfitting →Details
- 19:03, 26 February 2020 diff hist +309 m Overfitting →Details
- 18:58, 26 February 2020 diff hist +848 m Overfitting →Details
- 18:56, 26 February 2020 diff hist -676 m Overfitting →Double descent
- 18:55, 26 February 2020 diff hist +835 m Overfitting →Double descent
- 09:00, 26 February 2020 diff hist +16 m Robust statistics →Robustness to additive poisoning current
- 08:59, 26 February 2020 diff hist +1 m Robust statistics →Robustness to additive poisoning
- 08:58, 26 February 2020 diff hist +647 m Robust statistics
- 20:38, 25 February 2020 diff hist +228 N Reinforcement learning Created page with "Reinforcement learning is a general framework for sequential decision-making. == MuZero == [https://arxiv.org/pdf/1911.08265.pdf SAHSS+][https://dblp.org/rec/bibtex/journal..." current
- 11:54, 24 February 2020 diff hist +780 N Distributional shift Created page with "Distributional shift is the problem of achieving good performance despite a change in the data distribution. Typically if an algorithm learns from a distribution <math>\mathc..." current
- 11:54, 24 February 2020 diff hist +389 m Overfitting
- 11:34, 23 February 2020 diff hist -8 m Social choice →Harsanyi's Utilitarian Theorem current
- 11:34, 23 February 2020 diff hist -1 m Social choice →Harsanyi's Utilitarian Theorem
- 11:32, 23 February 2020 diff hist -20 m Social choice →Harsanyi's Utilitarian Theorem
- 00:49, 23 February 2020 diff hist 0 m Social choice →Harsanyi's Utilitarian Theorem
- 00:48, 23 February 2020 diff hist +764 m Social choice
- 00:36, 23 February 2020 diff hist +33 m Von Neumann-Morgenstern preferences →Formal theorem current
- 00:36, 23 February 2020 diff hist +2,310 N Von Neumann-Morgenstern preferences Created page with "A Von Neumann-Morgenstern preference [https://pdfs.semanticscholar.org/0375/379194a6f34b818962ea947bff153adf621c.pdf VonneumannMorgensternBook][https://scholar.google.ch/schol..."
- 00:34, 23 February 2020 diff hist +50 m Preference learning from comparisons current
- 00:33, 23 February 2020 diff hist +4 m Welcome to the Robustly Beneficial Wiki →Why AI safety and ethics is harder than meets the eye
- 00:22, 23 February 2020 diff hist +1,278 m Social choice
- 15:23, 21 February 2020 diff hist +35 m Robustly Beneficial group →Past papers
- 15:21, 21 February 2020 diff hist +255 m Robustly Beneficial group →Past papers
- 19:32, 20 February 2020 diff hist +210 m Robustly Beneficial group →Candidate future papers
- 23:10, 17 February 2020 diff hist +71 m Impressive advances in AI →Image processing current
- 23:07, 17 February 2020 diff hist +79 m Impressive advances in AI →Image processing
- 10:05, 17 February 2020 diff hist +269 m Robustly Beneficial group
- 18:08, 15 February 2020 diff hist +262 m Impressive advances in AI
- 12:20, 15 February 2020 diff hist +428 m Robustly Beneficial group
- 08:54, 14 February 2020 diff hist +51 m Robustly Beneficial group →Past papers
- 08:54, 14 February 2020 diff hist +218 m Robustly Beneficial group
- 09:27, 13 February 2020 diff hist +194 m Welcome to the Robustly Beneficial Wiki →Why AI ethics is becoming critical
- 18:55, 11 February 2020 diff hist +2 m Social choice →Applications to AI Ethics
- 17:50, 11 February 2020 diff hist +173 m Convexity →Neural networks and convexity current
- 10:50, 11 February 2020 diff hist +206 m Robust statistics →Poisoning models
- 10:10, 11 February 2020 diff hist +2,014 N Convexity Created page with "A convex set is one such that the segment between any two points of the set still belongs to the set. In other words, if <math>x,y \in S</math> and <math>\lambda \in [0,1]</ma..."
- 20:25, 6 February 2020 diff hist +550 m Mental health →Impact current
- 13:42, 5 February 2020 diff hist +334 m Adversarial attacks →Evasion attacks current
- 18:03, 4 February 2020 diff hist +58 m Church-Turing thesis current
- 16:21, 4 February 2020 diff hist +401 m Preference learning from comparisons →Gaussian process
- 15:02, 4 February 2020 diff hist +193 m Preference learning from comparisons →Classical models
- 09:44, 4 February 2020 diff hist -24 m Bayesianism current
- 09:44, 4 February 2020 diff hist +93 m User:Lê Nguyên Hoang current
- 09:42, 4 February 2020 diff hist +2,889 m Laplace 1814 →Laplace's philosophy of probability current
- 21:37, 3 February 2020 diff hist 0 m Laplace 1814 →Laplace's philosophy of probability
- 18:03, 3 February 2020 diff hist +3 m Laplace 1814 →Laplace's philosophy of probability
- 16:32, 3 February 2020 diff hist +13 m Bayesianism
- 15:53, 3 February 2020 diff hist +99 m Laplace 1814
- 15:51, 3 February 2020 diff hist -5 m Bayesianism
- 15:50, 3 February 2020 diff hist +45 m Bayesianism
- 15:49, 3 February 2020 diff hist +16 m Turing 1950 →Final remarks current
- 15:32, 3 February 2020 diff hist -2 m Laplace 1814 →Laplace's philosophy of probability
- 15:32, 3 February 2020 diff hist +7,679 N Laplace 1814 Created page with "In 1814, Pierre-Simon Laplace published "A Philosophical Essay on Probabilities" [https://play.google.com/books/reader?id=rDUJAAAAIAAJ&hl=en&pg=GBS.PA3.w.1.0.0 Laplace1814] [..."
- 14:07, 3 February 2020 diff hist +18 m Welcome to the Robustly Beneficial Wiki →How today's (and probably tomorrow's) AIs work
- 10:25, 3 February 2020 diff hist +116 m Preference learning from comparisons →Gaussian process
- 10:03, 3 February 2020 diff hist +1,484 m Preference learning from comparisons →Gaussian process
- 09:46, 3 February 2020 diff hist +17 m Preference learning from comparisons →Classical models
- 09:45, 3 February 2020 diff hist +41 m Preference learning from comparisons →Classical models
- 09:45, 3 February 2020 diff hist +560 m Preference learning from comparisons →Classical models
- 09:45, 3 February 2020 diff hist +181 m Preference learning from comparisons →Markov chain for preference learning
- 09:32, 3 February 2020 diff hist +879 m Preference learning from comparisons →Markov chain for preference learning
- 09:22, 3 February 2020 diff hist +2 m Preference learning from comparisons →Classical models
- 09:08, 3 February 2020 diff hist +10 m Preference learning from comparisons →Classical models
- 09:07, 3 February 2020 diff hist +28 m Preference learning from comparisons →Classical models
- 09:06, 3 February 2020 diff hist +104 m Preference learning from comparisons →Classical models
- 09:05, 3 February 2020 diff hist +4,566 N Preference learning from comparisons Created page with "It has been argued that we humans are much more effective at comparing alternatives than at scoring them [https://infoscience.epfl.ch/record/255399/files/EPFL_TH8637.pdf Mayst..."
- 08:18, 3 February 2020 diff hist +66 m Welcome to the Robustly Beneficial Wiki →How to solve AI ethics (hopefully)
- 22:29, 2 February 2020 diff hist +6 m Robustly Beneficial group →Past papers
- 10:59, 2 February 2020 diff hist -1 m Robustly Beneficial group
- 10:40, 2 February 2020 diff hist +225 m Robustly Beneficial group →Candidate future papers
- 10:37, 2 February 2020 diff hist +458 m Robustly Beneficial group →Candidate future papers
- 10:36, 2 February 2020 diff hist +50 m YouTube →Effects current
- 10:35, 2 February 2020 diff hist +99 m AI opportunities →Healthcare current
- 10:34, 2 February 2020 diff hist +143 m Mental health →Impact
- 10:33, 2 February 2020 diff hist +85 m Robust statistics
- 10:32, 2 February 2020 diff hist +50 m Adversarial attacks →Poisoning attacks
- 10:32, 2 February 2020 diff hist +50 m Interpretability →Black box current
- 10:30, 2 February 2020 diff hist +2 m Robustly Beneficial group →Candidate future papers
- 10:30, 2 February 2020 diff hist +20 m Robustly Beneficial group →Candidate future papers
- 10:29, 2 February 2020 diff hist +28 m Robustly Beneficial group →Past papers
- 10:28, 2 February 2020 diff hist +2,291 m Robustly Beneficial group →Past papers
- 10:19, 2 February 2020 diff hist +3,494 N Robustly Beneficial group Created page with "The Robustly Beneficial group is an AI ethics group, started by Louis Faucon and Sergei Volodin, in Lausanne, Switzerland. The group is now managed by ..."
- 09:46, 2 February 2020 diff hist -56 m Welcome to the Robustly Beneficial Wiki →About the authors
- 13:38, 1 February 2020 diff hist -29 m Welcome to the Robustly Beneficial Wiki
- 21:30, 30 January 2020 diff hist +266 m Mental health →Impact
- 08:50, 30 January 2020 diff hist +179 m Welcome to the Robustly Beneficial Wiki
- 15:01, 29 January 2020 diff hist +383 N AI governance Created page with "AI governance is the problem of understanding what forces act on the development of AIs, and what ought to be done to align such efforts with ethical values. == Guidelines =..." current
- 17:58, 28 January 2020 diff hist +98 m Stochastic gradient descent current
- 17:56, 28 January 2020 diff hist +2,179 N Stochastic gradient descent Created page with "Stochastic gradient descent (SGD) is the most widely used learning algorithm. For a very general perspective, SGD consists of iterating (1) draw some data point and (2) slight..."
- 15:53, 28 January 2020 diff hist +46 m Human liabilities current
- 15:52, 28 January 2020 diff hist +172 N Human liabilities Created page with "The safety of algorithms can be limited by the liabilities of humans in charge of the algorithms. Below we list such possible liabilities. == Phishing == == Ransomware =="