Interpretability
(Created page with "As algorithms become more influential and complex, their interpretability becomes a major challenge. Unfortunately, it is not always clear what is meant by "interpretability"....") |
m (→Black box) |
||
(4 intermediate revisions by the same user not shown) | |||
Line 5: | Line 5: | ||
Arguably, interpretability should not be regarded as a binary concept. Algorithms are more or less interpretable. | Arguably, interpretability should not be regarded as a binary concept. Algorithms are more or less interpretable. | ||
− | In practice, it seems that the more critical question is to which extent we can predict the behavior of an algorithm in different relevant contexts. In particular, what can we guarantee about the algorithm's behavior? What can we be reasonably confident about? What should we be suspicious of? | + | In practice, it seems that the more critical question is to which extent we can predict the behavior of an algorithm in different relevant contexts. In particular, what can we guarantee about the algorithm's behavior? What can we be reasonably confident about? How to identify what ought to be changed about the algorithm? What should we be suspicious of? |
To answer these questions as best as possible, it is often useful to arm ourselves with <em>probing algorithms</em>, that is, algorithms designed to probe the algorithms that need to be interpreted. | To answer these questions as best as possible, it is often useful to arm ourselves with <em>probing algorithms</em>, that is, algorithms designed to probe the algorithms that need to be interpreted. | ||
Line 19: | Line 19: | ||
== Black box == | == Black box == | ||
− | [https://academiccommons.columbia.edu/doi/10.7916/D8TT536K/download Diakopoulos][https://scholar.google.ch/scholar?hl=en&as_sdt=0%2C5&q=ALGORITHMIC+ACCOUNTABILITY+REPORTING%3A+ON+THE+INVESTIGATION+OF+BLACK+BOXES&btnG= 14] provides two very interesting definitions of a "black box", which we shall call <em>weak black box</em>, <em>strong black box</em> and <em>total black box</em>. A <em>weak black box</em> is an algorithm that you can only probe through input/output queries. In the case of a <em>strong black box</em>, you cannot choose the input, though you can observe it, as well as the output. Finally, a <em>total black box</em> would be an algorithm whose inputs are unknown; only outputs are observable. Evidently, this classification of black boxes is a simplification; rather, algorithms are more or less black, because inputs are more or less controllable and observable. | + | [https://academiccommons.columbia.edu/doi/10.7916/D8TT536K/download Diakopoulos][https://scholar.google.ch/scholar?hl=en&as_sdt=0%2C5&q=ALGORITHMIC+ACCOUNTABILITY+REPORTING%3A+ON+THE+INVESTIGATION+OF+BLACK+BOXES&btnG= 14] [https://www.youtube.com/watch?v=WWbw4cla2jw RB1] provides two very interesting definitions of a "black box", which we shall call <em>weak black box</em>, <em>strong black box</em> and <em>total black box</em>. A <em>weak black box</em> is an algorithm that you can only probe through input/output queries. In the case of a <em>strong black box</em>, you cannot choose the input, though you can observe it, as well as the output. Finally, a <em>total black box</em> would be an algorithm whose inputs are unknown; only outputs are observable. Evidently, this classification of black boxes is a simplification; rather, algorithms are more or less black, because inputs are more or less controllable and observable. |
Importantly, in this sense, a "black box" is not an "unknowable algorithm". Rather, it is an algorithm whose properties can essentially only be probed in an input-output manner. The reason why the distinctions above seem useful is that the "blackness" of an algorithm limits what <em>probing algorithms</em> can or should be used. | Importantly, in this sense, a "black box" is not an "unknowable algorithm". Rather, it is an algorithm whose properties can essentially only be probed in an input-output manner. The reason why the distinctions above seem useful is that the "blackness" of an algorithm limits what <em>probing algorithms</em> can or should be used. | ||
Line 29: | Line 29: | ||
Interestingly, [https://academiccommons.columbia.edu/doi/10.7916/D8TT536K/download Diakopoulos][https://scholar.google.ch/scholar?hl=en&as_sdt=0%2C5&q=ALGORITHMIC+ACCOUNTABILITY+REPORTING%3A+ON+THE+INVESTIGATION+OF+BLACK+BOXES&btnG= 14] shows that, in most cases, algorithms need to be treated as black boxes, either because their code is private or because it is too complex (especially if it involves neural networks). Nevertheless, Diakopoulos shows that by interacting with the black box through queries and analyzing outputs, some partial interpretability can sometimes be achieved. Note that, in the end, this is very similar to how we interpret humans' brains. | Interestingly, [https://academiccommons.columbia.edu/doi/10.7916/D8TT536K/download Diakopoulos][https://scholar.google.ch/scholar?hl=en&as_sdt=0%2C5&q=ALGORITHMIC+ACCOUNTABILITY+REPORTING%3A+ON+THE+INVESTIGATION+OF+BLACK+BOXES&btnG= 14] shows that, in most cases, algorithms need to be treated as black boxes, either because their code is private or because it is too complex (especially if it involves neural networks). Nevertheless, Diakopoulos shows that by interacting with the black box through queries and analyzing outputs, some partial interpretability can sometimes be achieved. Note that, in the end, this is very similar to how we interpret humans' brains. | ||
− | [https://arxiv.org/pdf/1908.08313 ROWAM][https://dblp.org/rec/bibtex/ | + | [https://arxiv.org/pdf/1908.08313 ROWAM][https://dblp.org/rec/bibtex/conf/fat/RibeiroO0AM20 20] show that even strong black boxes can be probed through indirect means, for instance by analyzing data from users who interacted with the black box. In particular, a probing algorithm may not need to analyze the outputs |
== Interpreting the YouTube algorithm == | == Interpreting the YouTube algorithm == | ||
− | A few papers tried to probe the YouTube recommendation algorithms with probing algorithms. Former Google employee Guillaume Chaslot launched [https://www.algotransparency.org Algotransparency], which creates YouTube accounts run by bots that randomly click on suggested videos. [https://arxiv.org/pdf/1602.02692 Allgaier][https://dblp.org/rec/bibtex/journals/corr/Allgaier16 16] used similar techniques. Both found worrying pipelines towards conspiracy or climate denialist videos. This was indirectly confirmed by [https://arxiv.org/pdf/1908.08313 ROWAM][https://dblp.org/rec/bibtex/ | + | A few papers tried to probe the YouTube recommendation algorithms with probing algorithms. Former Google employee Guillaume Chaslot launched [https://www.algotransparency.org Algotransparency], which creates YouTube accounts run by bots that randomly click on suggested videos. [https://arxiv.org/pdf/1602.02692 Allgaier][https://dblp.org/rec/bibtex/journals/corr/Allgaier16 16] used similar techniques. Both found worrying pipelines towards conspiracy or climate denialist videos. This was indirectly confirmed by [https://arxiv.org/pdf/1908.08313 ROWAM][https://dblp.org/rec/bibtex/conf/fat/RibeiroO0AM20 20]. |
== Neural network activations == | == Neural network activations == | ||
If we have access to the code of a neural network, additional interpretability can be achieved by analyzing the activations of the neural network for different stimuli, as is done through magnetic resonance imaging for humans. | If we have access to the code of a neural network, additional interpretability can be achieved by analyzing the activations of the neural network for different stimuli, as is done through magnetic resonance imaging for humans. | ||
+ | |||
+ | [https://dl.acm.org/doi/pdf/10.1145/2939672.2939778 ROG][https://dblp.org/rec/bibtex/conf/kdd/Ribeiro0G16 16] | ||
+ | |||
+ | [http://papers.nips.cc/paper/8307-adversarial-examples-are-not-bugs-they-are-features.pdf ISTETM][https://dblp.org/rec/bibtex/conf/nips/IlyasSTETM19 19] |
As algorithms become more influential and complex, their interpretability becomes a major challenge. Unfortunately, it is not always clear what is meant by "interpretability". Amusingly, the concept of interpretability needs interpretation.
== What to expect from interpretability? ==
Arguably, interpretability should not be regarded as a binary concept. Algorithms are more or less interpretable.
In practice, the more critical question seems to be to what extent we can predict the behavior of an algorithm in the relevant contexts. In particular, what can we guarantee about the algorithm's behavior? What can we be reasonably confident about? How can we identify what ought to be changed about the algorithm? What should we be suspicious of?
To answer these questions as well as possible, it is often useful to arm ourselves with probing algorithms, that is, algorithms designed to probe the algorithms that need to be interpreted.
== Provable guarantees ==
Evidently, the strongest form of interpretability is a proved theoretical guarantee. This is essentially the heart of theoretical computer science. An example is the Gale-Shapley algorithm, which provably outputs a stable matching and is incentive-compatible for the proposing side.
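To make the example concrete, here is a minimal sketch of the Gale-Shapley deferred-acceptance procedure (a standard textbook version, with made-up preference lists); the point is that its guarantee, a stable matching, is proved once and for all rather than merely observed empirically.

<syntaxhighlight lang="python">
def gale_shapley(proposer_prefs, reviewer_prefs):
    """Deferred acceptance: proposers propose in order of preference,
    reviewers tentatively keep their best proposal so far.
    Returns a dict mapping each proposer to their matched reviewer."""
    # rank[r][p] = how much reviewer r likes proposer p (lower is better)
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    free = list(proposer_prefs)                    # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}   # next reviewer each proposer will try
    current = {}                                   # reviewer -> tentatively accepted proposer

    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in current:
            current[r] = p                         # reviewer was free: tentatively accept
        elif rank[r][p] < rank[r][current[r]]:
            free.append(current[r])                # reviewer upgrades: old proposer is freed
            current[r] = p
        else:
            free.append(p)                         # rejected: p will try the next reviewer
    return {p: r for r, p in current.items()}

# Toy instance: the output matches a with x and b with y, and no pair
# prefers each other to their assigned partners (stability).
proposers = {"a": ["x", "y"], "b": ["y", "x"]}
reviewers = {"x": ["b", "a"], "y": ["a", "b"]}
print(gale_shapley(proposers, reviewers))
</syntaxhighlight>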
The search for theoretical guarantees can be automated by program verification. Some research aims to apply similar techniques to neural network verification, typically by encoding the network's behavior as a mixed integer program TXT19.
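To illustrate the mixed-integer-programming idea (a generic toy sketch, not the encoding of TXT19; the weights, bounds and the choice of the PuLP library are assumptions made here purely for illustration), a single ReLU unit can be encoded with one binary variable and big-M constraints, and a solver can then bound its output over an entire input range.

<syntaxhighlight lang="python">
from pulp import LpProblem, LpVariable, LpMaximize, LpBinary, LpStatus, value

# Verify a one-neuron "network" y = max(0, w*x + b) over the input box -1 <= x <= 1.
w, b, M = 2.0, -0.5, 10.0      # toy weights and a big-M constant assumed large enough

prob = LpProblem("relu_output_range", LpMaximize)
x = LpVariable("x", lowBound=-1, upBound=1)
z = LpVariable("z", lowBound=-M, upBound=M)    # pre-activation
y = LpVariable("y", lowBound=0, upBound=M)     # post-activation
a = LpVariable("a", cat=LpBinary)              # a = 1 iff the ReLU is active

prob += z == w * x + b
# Standard big-M encoding of y = max(0, z):
prob += y >= z
prob += y <= z + M * (1 - a)
prob += y <= M * a

prob += y                                      # objective: maximize the output
prob.solve()
print(LpStatus[prob.status], "max output =", value(y))
# If the maximum stays below a safety threshold, that bound is a proof
# over the whole input box, not just an empirical observation.
</syntaxhighlight>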
Theoretical guarantees also play an important role in the study of how an algorithm learns. Typically, one can prove the convergence of stochastic gradient descent under the assumption that the loss function is convex. Another example is the study of robustness to adversarial attacks BMGS17.
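For instance, a standard result of this kind (stated here in textbook form, independently of the references above) guarantees that, for a convex objective whose stochastic (sub)gradients are bounded by G, projected stochastic gradient descent over a domain of diameter D, with a suitable step size and iterate averaging, satisfies

<math>\mathbb{E}\left[f(\bar{x}_T)\right] - f(x^*) = O\!\left(\frac{DG}{\sqrt{T}}\right),</math>

so the expected suboptimality provably vanishes as the number of iterations T grows.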
== Black box ==
Diakopoulos14 RB1 provides very interesting definitions of a "black box", from which we shall distinguish three variants: the weak black box, the strong black box and the total black box. A weak black box is an algorithm that you can only probe through input/output queries. In the case of a strong black box, you cannot choose the input, though you can observe it, as well as the output. Finally, a total black box would be an algorithm whose inputs are unknown; only its outputs are observable. Evidently, this classification of black boxes is a simplification: algorithms are more or less black, since their inputs are more or less controllable and observable.
Importantly, in this sense, a "black box" is not an "unknowable algorithm". Rather, it is an algorithm whose properties can essentially only be probed in an input-output manner. The reason why the distinctions above seem useful is that the "blackness" of an algorithm limits which probing algorithms can or should be used.
Lê conjectures here that weak black boxes can be "fully probed" with O(K) queries, assuming that the algorithm has Solomonoff-Kolmogorov complexity K. Total black boxes, however, are harder to probe, especially if the black box receives inputs whose complexity is not bounded. Exciting research direction!
Interestingly, in this sense, humans can be argued to be mostly, but not completely, black boxes. Indeed, you can also probe the brain's algorithms by scanning its activity with MRI. The same holds for artificial neural networks, for which additional data can be collected by studying neural activity. In fact, artificial neural networks are "less black" than human brains, not only because their neural activity can be probed more precisely, but also because they can be probed in other ways, for instance by computing the network's gradients on some data (and it is precisely this gradient access that makes white-box adversarial attacks so effective).
In practice, Diakopoulos14 shows that, in most cases, algorithms need to be treated as black boxes, either because their code is private or because it is too complex (especially when it involves neural networks). Nevertheless, Diakopoulos shows that by interacting with the black box through queries and analyzing its outputs, some partial interpretability can sometimes be achieved. Note that, in the end, this is very similar to how we interpret human brains.
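As a minimal sketch of such input/output probing (the black box predict_fn below is a made-up stand-in, and fitting an interpretable surrogate is only one of many possible probing strategies), one can query the black box on chosen inputs and summarize its answers with a small, human-readable model.

<syntaxhighlight lang="python">
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

def predict_fn(X):
    """Stand-in for a weak black box: we can only query it, not read its code."""
    return (X[:, 0] + 0.5 * X[:, 1] > 1.0).astype(int)

rng = np.random.default_rng(0)
X_probe = rng.uniform(-2, 2, size=(5000, 2))   # inputs we choose to submit
y_probe = predict_fn(X_probe)                  # outputs we observe

# Fit a shallow, human-readable surrogate to the black box's observed behavior.
surrogate = DecisionTreeClassifier(max_depth=3).fit(X_probe, y_probe)
print(export_text(surrogate, feature_names=["x0", "x1"]))
print("agreement with the black box:", surrogate.score(X_probe, y_probe))
</syntaxhighlight>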
ROWAM20 show that even strong black boxes can be probed through indirect means, for instance by analyzing data from users who interacted with the black box. In particular, a probing algorithm may not need to analyze the black box's outputs directly; the traces left by users' interactions with it can be enough.
== Interpreting the YouTube algorithm ==
A few papers have tried to probe YouTube's recommendation algorithm with probing algorithms. Former Google employee Guillaume Chaslot launched Algotransparency, which creates YouTube accounts run by bots that randomly click on suggested videos. Allgaier16 used similar techniques. Both found worrying pipelines towards conspiracy and climate-denialist videos. This was indirectly confirmed by ROWAM20.
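In the same spirit, here is a very rough sketch of what such a bot does (the recommendation graph below is entirely made up, and this is not the actual code of Algotransparency or of any paper cited here): start from seed videos, repeatedly click a random recommendation, and count how often the walk reaches flagged channels.

<syntaxhighlight lang="python">
import random
from collections import Counter

# A tiny made-up recommendation graph standing in for the real platform;
# an actual probe would scrape or query YouTube instead.
FAKE_GRAPH = {
    "news_1": ["news_2", "fringe_1"],
    "news_2": ["news_1", "fringe_1"],
    "fringe_1": ["fringe_2", "news_1"],
    "fringe_2": ["fringe_1"],
}
CHANNEL = {"news_1": "news", "news_2": "news",
           "fringe_1": "fringe", "fringe_2": "fringe"}

def get_recommendations(video_id):
    return FAKE_GRAPH.get(video_id, [])

def random_walk_probe(seed_videos, flagged_channels, steps=20, walks=200):
    """Follow random recommendation chains and count visits to flagged channels."""
    hits = Counter()
    for _ in range(walks):
        video = random.choice(seed_videos)
        for _ in range(steps):
            recs = get_recommendations(video)
            if not recs:
                break
            video = random.choice(recs)          # the bot "clicks" at random
            if CHANNEL[video] in flagged_channels:
                hits[CHANNEL[video]] += 1
    return hits

print(random_walk_probe(["news_1"], {"fringe"}))
</syntaxhighlight>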
== Neural network activations ==
If we have access to the code of a neural network, additional interpretability can be achieved by analyzing the activations of the neural network for different stimuli, much as magnetic resonance imaging does for human brains.

See also ROG16 and ISTETM19.
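A minimal PyTorch sketch of this idea (the tiny untrained network and the random stimulus are arbitrary placeholders; forward hooks are just one common way of recording activations):

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

# A tiny arbitrary network; any trained model could be probed the same way.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()   # record what this layer produced
    return hook

# Attach a forward hook to every layer we want to observe.
for name, module in model.named_modules():
    if isinstance(module, (nn.Linear, nn.ReLU)):
        module.register_forward_hook(save_activation(name))

# Present a "stimulus" and inspect the recorded activations;
# comparing activations across different stimuli is the MRI-like analysis.
stimulus = torch.randn(1, 4)
model(stimulus)
for name, act in activations.items():
    print(name, act.numpy().round(2))
</syntaxhighlight>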