Towards Deep Interpretable Predictions for Multi-Scope Temporal Events

Project Summary

Many human events, such as personal visits to hospitals, flu outbreaks, or protests, are recorded in temporal sequences and exhibit recurring patterns. For instance, in hospital admission records, patients who have been diagnosed with hypertension often later visit the hospital for heart disease. Predicting human events from past event patterns is key for many stakeholders in AI-assisted decision making, and interpretable predictive models will significantly improve the transparency of these decision-making processes. Interpretable machine learning has recently been drawing increasing attention. However, most state-of-the-art work in this domain focuses on static analysis, such as identifying salient pixels for object detection in an image; little has been developed for temporal event prediction over dynamic, heterogeneous, and multi-source data sequences. To address this gap, this project will support the design of transformative interpretable paradigms for temporal event sequences of different scopes with heterogeneous and multi-source features. Predictive tools that capture hierarchical, relational, and complex evidence will enable robust forecasting. This work will involve educational activities such as developing new courses on interpretable machine learning; training graduate, undergraduate, and high-school students in interdisciplinary studies; and increasing the participation of women and minority groups in academic research. Core outcomes of this project, such as software, datasets, and publications, will be made available to the general public.

This project will create a new set of interpretable mechanisms that provide dynamic, heterogeneous, and multi-source explanations in temporal event prediction. Although a variety of explainable approaches have been developed for traditional machine learning tasks, several unique challenges remain unexplored: (1) Given the wide adoption of attention mechanisms in deep learning, regulating attention-based models so that they can be audited is an urgent need. (2) Most current approaches select important input features based on correlations, which often lack causal evidence. (3) Reciprocal relations and dependencies among heterogeneous data sources are largely ignored in current research. This project will address these challenges in the following ways: (i) It will investigate new collaborative attention regulation strategies that use domain knowledge for calibration. (ii) It will integrate dynamic causal discovery into temporal event prediction with hidden confounder representation learning. (iii) It will provide multi-faceted explanations by distilling semantic knowledge from unstructured text and incorporating this knowledge in a co-learning framework with multi-source temporal data. The specific research aims will be complemented by an extensive set of evaluation plans, including standard retrospective evaluation on multi-scope real-world event records as well as multiple user studies to assess the interpretability of the developed models. The project outcomes, including observational data, interpretable prediction tools, and open-source software for stakeholders, will be shared with the computer science research community and with practitioners in healthcare, political science, and epidemiology.
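To give a flavor of how attention regulation with domain knowledge can work in practice, the sketch below shows one common pattern: adding a penalty that pulls learned attention weights toward a prior distribution derived from domain knowledge. This is only an illustrative example with assumed names (`attention`, `regulated_attention_loss`, the KL-based penalty, and the `lam` weight are all hypothetical choices), not the project's actual method.

```python
import numpy as np

def attention(scores):
    """Turn raw attention scores into weights via a numerically stable softmax."""
    e = np.exp(scores - scores.max())
    return e / e.sum()

def regulated_attention_loss(scores, prior, lam=0.1):
    """Illustrative attention regulation (hypothetical formulation).

    Penalizes the divergence KL(prior || attention) between the model's
    attention weights and a domain-knowledge prior, so that training
    nudges attention mass toward events the prior marks as important.
    Returns (penalty, attention_weights).
    """
    attn = attention(scores)
    eps = 1e-12  # avoid log(0)
    kl = np.sum(prior * np.log((prior + eps) / (attn + eps)))
    return lam * kl, attn
```

In a full model this penalty would be added to the prediction loss, so the attention weights stay both predictive and consistent with the domain prior, which is what makes them auditable.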


Award Information

This website is based upon work supported by the National Science Foundation under Grant No. 2047843. Disclaimer: Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

Award number: NSF IIS 2047843


Principal Investigator

Dr. Yue Ning

Students

Xiaoxue Han, PhD Student.