Sheng Li delivers plenary address on causality, trustworthy AI at ICSTA 2025
Sheng Li, Quantitative Foundation Associate Professor of Data Science, recently delivered a plenary address at the 7th International Conference on Statistics: Theory and Applications (ICSTA 2025). His presentation, “Causality for Trustworthy Artificial Intelligence,” highlighted his research at the intersection of causal inference and artificial intelligence security.
AI systems have reshaped industries and research fields alike, but their vulnerability to backdoor attacks presents serious challenges for reliability and trustworthiness. In his presentation, Li introduced published research from his team on new causal methods for analyzing prediction behaviors, which help AI systems resist these threats. He explained how these techniques can identify when data has been maliciously altered and can even train cleaner models from compromised datasets — important steps toward ensuring AI systems remain trustworthy in real-world applications.
These innovations demonstrate how causal reasoning not only advances the theoretical understanding of AI but also strengthens its trustworthiness in practice. Li’s plenary underscored the growing importance of causal methods in developing safe, robust, and ethical AI systems.
Li's research focuses on trustworthy AI, causal inference, large foundation models, and vision-language modeling; his work in these areas includes the 2023 book Machine Learning for Causal Inference.
