


# Resources


**References**

1. Adadi, Amina and Mohammed Berrada. 2018. “Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI).” *IEEE Access 6*: 52138–52160.

1. Ancona, Marco, Enea Ceolini, Cengiz Oztireli, and Markus Gross. 2018. “Towards better understanding of gradient-based attribution methods for Deep Neural Networks.” *Proceedings of the International Conference on Learning Representations (ICLR)*. [arXiv:1711.06104](https://arxiv.org/pdf/1711.06104.pdf).

1. Dhamdhere, Kedar, Mukund Sundararajan, and Qiqi Yan. 2018. “How Important Is a Neuron?” *Proceedings of the Thirty-sixth International Conference on Machine Learning (ICML)*. [arXiv:1805.12233](https://arxiv.org/pdf/1805.12233.pdf).

1. Dua, Dheeru and Casey Graff. 2019. UCI Machine Learning Repository [[http://archive.ics.uci.edu/ml](http://archive.ics.uci.edu/ml)]. Irvine, CA: University of California, School of Information and Computer Science.

1. Kapishnikov, Andrei, Tolga Bolukbasi, Fernanda Viegas, and Michael Terry. 2019. “XRAI: Better Attributions Through Regions.” *Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)*: 4948–4957. [arXiv:1906.02825](https://arxiv.org/pdf/1906.02825.pdf).

1. Kim, Been, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viegas, and Rory Sayres. 2018. “Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV).” [arXiv:1711.11279](https://arxiv.org/pdf/1711.11279.pdf).

1. Lundberg, Scott M., Gabriel G. Erion, and Su-In Lee. 2019. “Consistent Individualized Feature Attribution for Tree Ensembles.” [arXiv:1802.03888](https://arxiv.org/pdf/1802.03888.pdf).

1. Lundberg, Scott M. and Su-In Lee. 2017. “A Unified Approach to Interpreting Model Predictions.” *Advances in Neural Information Processing Systems (NIPS) 30*. [arXiv:1705.07874](https://arxiv.org/pdf/1705.07874.pdf).

1. Rajpurkar, Pranav, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. “SQuAD: 100,000+ Questions for Machine Comprehension of Text.” [arXiv:1606.05250](https://arxiv.org/pdf/1606.05250.pdf).

1. Ribeiro, Marco T., Sameer Singh, and Carlos Guestrin. 2016. “‘Why Should I Trust You?’: Explaining the Predictions of Any Classifier.” *KDD '16: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*: 1135–1144. [arXiv:1602.04938](https://arxiv.org/abs/1602.04938).

1. Sundararajan, Mukund, Ankur Taly, and Qiqi Yan. 2017. “Axiomatic Attribution for Deep Networks.” *Proceedings of the 34th International Conference on Machine Learning 70*: 3319–3328. [arXiv:1703.01365](https://arxiv.org/pdf/1703.01365.pdf).

**External software packages**
+ [SHAP](https://github.com/slundberg/shap) (GitHub)
+ [Captum](https://captum.ai/)

**Additional reading**
+ [Amazon SageMaker AI Clarify Model Explainability](https://docs.aws.amazon.com/sagemaker/latest/dg/clarify-model-explainability.html) (SageMaker AI documentation)
+ [Amazon SageMaker AI Clarify repository](https://github.com/aws/amazon-sagemaker-clarify) (GitHub)
+ Molnar, Christoph. 2019. [*Interpretable Machine Learning: A Guide for Making Black Box Models Explainable*](https://christophm.github.io/interpretable-ml-book/).