Please note that this newsitem has been archived, and may contain outdated information or links.
14 January 2025, Computational Linguistics Seminar, Ana Lucic
Model explainability has become an important problem in artificial intelligence (AI) due to the growing impact of algorithmic predictions on people. Explanations can help users understand not only why AI models make certain predictions, but also how these predictions can be changed via counterfactual explanations. Given a data point and a trained model, we want to find the minimal perturbation to the input such that the prediction changes. We frame the search for counterfactual explanations as a gradient-based optimization task and first focus on tree ensembles. We then extend our method to accommodate graph neural networks (GNNs), given the increasing promise of GNNs in real-world applications such as fake news detection and molecular simulation.
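To make the framing concrete, here is a minimal sketch of a gradient-based counterfactual search, not the talk's actual method: it assumes a differentiable classifier (tree ensembles are not differentiable, so the talk's approach presumably works with a smooth approximation, and the GNN variant is not shown). All names (`find_counterfactual`, `dist_weight`, the choice of a target class) are illustrative assumptions.

```python
# Illustrative gradient-based counterfactual search. Assumes a
# differentiable classifier `model` mapping a feature vector to
# class logits.
import torch

def find_counterfactual(model, x, target_class,
                        steps=500, lr=0.05, dist_weight=0.1):
    """Perturb x until the model predicts target_class, while an L2
    penalty keeps the counterfactual close to the original input."""
    x_cf = x.clone().detach().requires_grad_(True)
    optimizer = torch.optim.Adam([x_cf], lr=lr)
    target = torch.tensor([target_class])
    for _ in range(steps):
        logits = model(x_cf)
        if logits.argmax().item() == target_class:
            break  # prediction flipped: x_cf is a counterfactual
        optimizer.zero_grad()
        # Trade off flipping the prediction against perturbation size.
        pred_loss = torch.nn.functional.cross_entropy(
            logits.unsqueeze(0), target)
        dist_loss = torch.norm(x_cf - x, p=2)
        loss = pred_loss + dist_weight * dist_loss
        loss.backward()
        optimizer.step()
    return x_cf.detach()
```

The L2 penalty mirrors the "minimal perturbation" framing in the abstract; in practice, the choice of distance function and the handling of non-differentiable models are where methods like those presented in the talk differ from this sketch.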