Explainability fact sheets: a framework for systematic assessment of explainable approaches. K Sokol, P Flach. Proceedings of the 2020 Conference on Fairness, Accountability, and …, 2020. Cited by 141.
FACE: Feasible and actionable counterfactual explanations. R Poyiadzi, K Sokol, R Santos-Rodriguez, T De Bie, P Flach. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 344-350, 2020. Cited by 129.
One Explanation Does Not Fit All. K Sokol, P Flach. KI-Künstliche Intelligenz, 1-16, 2020. Cited by 53.
Glass-Box: Explaining AI Decisions With Counterfactual Statements Through Conversation With a Voice-enabled Virtual Assistant. K Sokol, PA Flach. IJCAI, 5868-5870, 2018. Cited by 41.
Counterfactual Explanations of Machine Learning Predictions: Opportunities and Challenges for AI Safety. K Sokol, PA Flach. SafeAI 2019: AAAI Workshop on Artificial Intelligence Safety 2301 (urn:nbn …, 2019. Cited by 34.
Conversational Explanations of Machine Learning Predictions Through Class-contrastive Counterfactual Statements. K Sokol, PA Flach. IJCAI, 5785-5786, 2018. Cited by 26.
bLIMEy: Surrogate Prediction Explanations Beyond LIME. K Sokol, A Hepburn, R Santos-Rodriguez, P Flach. 2019 Workshop on Human-Centric Machine Learning (HCML 2019) at the 33rd …, 2019. Cited by 19*.
FAT Forensics: A Python Toolbox for Algorithmic Fairness, Accountability and Transparency. K Sokol, R Santos-Rodriguez, P Flach. arXiv preprint arXiv:1909.05167, 2019. Cited by 16.
FAT Forensics: A Python Toolbox for Implementing and Deploying Fairness, Accountability and Transparency Algorithms in Predictive Systems. K Sokol, A Hepburn, R Poyiadzi, M Clifford, R Santos-Rodriguez, P Flach. Journal of Open Source Software 5 (49), 1904, 2020. Cited by 15.
Releasing eHealth analytics into the wild: Lessons learnt from the SPHERE project. T Diethe, M Holmes, M Kull, M Perello Nieto, K Sokol, H Song, E Tonkin, ... Proceedings of the 24th ACM SIGKDD International Conference on Knowledge …, 2018. Cited by 15.
LIMEtree: Interactively Customisable Explanations Based on Local Surrogate Multi-output Regression Trees. K Sokol, P Flach. arXiv preprint arXiv:2005.01427, 2020. Cited by 12.
Desiderata for Interpretability: Explaining Decision Tree Predictions with Counterfactuals. K Sokol, P Flach. Proceedings of the AAAI Conference on Artificial Intelligence 33 (01), 10035 …, 2019. Cited by 12.
Fairness, Accountability and Transparency in Artificial Intelligence: A Case Study of Logical Predictive Models. K Sokol. Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, 541-542, 2019. Cited by 2.
Explainability Is in the Mind of the Beholder: Establishing the Foundations of Explainable Artificial Intelligence. K Sokol, P Flach. arXiv preprint arXiv:2112.14466, 2021. Cited by 1.
Towards Faithful and Meaningful Interpretable Representations. K Sokol, P Flach. arXiv preprint arXiv:2008.07007, 2020. Cited by 1.
The Role of Textualisation and Argumentation in Understanding the Machine Learning Process. K Sokol, PA Flach. IJCAI, 5211-5212, 2017. Cited by 1.
Ethical and Fairness Implications of Model Multiplicity. K Sokol, M Kull, J Chan, FD Salim. arXiv preprint arXiv:2203.07139, 2022.
You Only Write Thrice: Creating Documents, Computational Notebooks and Presentations From a Single Source. K Sokol, P Flach. Beyond static papers: Rethinking how we share scientific understanding in ML …, 2021.
Towards intelligible and robust surrogate explainers: a decision tree perspective. K Sokol. University of Bristol, 2021.
What and How of Machine Learning Transparency: Building Bespoke Explainability Tools with Interoperable Algorithmic Components. K Sokol, A Hepburn, R Santos-Rodriguez, P Flach. https://zenodo.org/record/4035128, 2020.