Data Science Faculty Awarded Fellowships to Advance Ethical AI
The LaCross AI Institute organizes the Fellowships in AI Research (FAIR) program, which supports scholars, practitioners, and UVA students whose research relates to ethical AI. The Institute awarded the 2026 fellowships to eight University of Virginia faculty members, including David Danks, Nur Yildirim, and Tianhao Wang of the School of Data Science. This year's fellows bring expertise across business, data science, health, commerce, ethics, and more.
Examining AI Agency and Managerial Moral Responsibility
David Danks, distinguished university professor of philosophy, AI, and data science, conducts research spanning philosophy, cognitive science, and machine learning. He and his research team received the fellowship for their project, “AI and Managerial Moral Responsibility,” which explores the nature of AI agency, particularly as it relates to responsibility in corporate settings.
“AI systems are increasingly managing people, not just being managed by us,” Danks said. “This fellowship will bring together faculty from across Grounds to dive deeper into how to ensure these interactions ultimately help people, rather than harm them.” Danks and his team will be asking the following questions:
- When should a human manager be held responsible for an AI system’s actions?
- When should an AI be “blamed” (and what would that involve)?
- How should changes in AI-enabled capabilities in the workplace change our moral expectations and norms about human behavior?
Designing Radiology AI to Strengthen Human-AI Collaboration
Nur Yildirim, an assistant professor of data science, is interested in human-centered, participatory AI that benefits society. She and her team were selected for the FAIR program for their proposal, “Designing Radiology AI for Enhanced Human & AI Performance,” which aims to design and evaluate AI-assisted radiology reporting systems that reliably enhance human and AI performance.
In the project’s initial phase, Yildirim and her team will fine-tune a multimodal foundation model on a UVA Health dataset of X-rays. The second phase consists of testing how visual grounding and workflow timing affect accuracy, error detection, and reliance calibration. In the third phase, she and her team will synthesize findings into validated metrics for human-AI collaboration quality.
Addressing LLM-Enabled Abuse in Digital Advertising
Tianhao Wang, an assistant professor of data science by courtesy, researches data privacy and security. Wang and his team were selected for their proposal, “LLM-Enabled Abuse in Digital Advertising,” which examines how large language models may enable new forms of abuse in digital advertising, such as automated scam generation, deceptive targeting, and large-scale manipulation. “We aim to design measurement tools and mitigation strategies that help platforms and regulators identify harmful uses early, while supporting responsible innovation in AI-driven advertising systems,” Wang said.
The LaCross AI Institute provides up to $100,000 in funding for each fellowship. Launched in 2024 as part of the Darden School of Business, the Institute partners with the School of Data Science, focusing on ethical leadership, interdisciplinary collaboration, and global impact.