
DENTON (UNT), Texas — As companies continue to adopt artificial intelligence in areas such as self-driving cars, healthcare and virtual assistants, a researcher at UNT wants to make sure that AI is accountable.
Supreeth Shastri, an assistant professor in the Department of Computer Science and Engineering at UNT, is teaming up with a law scholar to lay the groundwork for a benchmark that companies building AI can use to evaluate and demonstrate their accountability.
Shastri will collaborate with Mihailis Diamantis, a law professor at the University of Iowa, on a National Science Foundation EAGER (Early-concept Grants for Exploratory Research) grant, which funds early-stage projects that test bold research ideas with the potential for major impact.
For this project, they were inspired to create an AI accountability benchmark centered on American ethos and practices after the passage of the European Union’s AI Act. “Europe is approaching AI regulation by assuming companies are held responsible unless they can prove otherwise,” Shastri said. “Such a framework not only makes it difficult to innovate in AI, but also doesn’t align with how U.S. companies or laws typically operate.”
Shastri and Diamantis will focus on three areas for their framework: self-driving vehicles, healthcare, and cybersecurity.
Research on self-driving vehicles and cybersecurity will be led by Shastri, who will draw on UNT’s research infrastructure, including the Center for Integrated and Intelligent Mobility Systems (CIIMS) and the Center for Information and Cybersecurity (CICS).
“We can bring in faculty from electrical engineering, mechanical engineering and computer science to really make this a truly interdisciplinary project,” Shastri said.
Diamantis will focus on the healthcare aspect and collaborate with colleagues at the University of Iowa Health Care.
Their framework focuses on reasonableness, a standard similar to the one used in civil lawsuits. If an AI is involved in an incident, judges would decide whether it acted in a “reasonable” way. If it did, neither the AI nor the company that created it would be held responsible. To set this standard, the researchers defined an AI Negligence Standard: an AI is considered reasonable if it poses a risk similar to or lower than that of the average human or AI performing the same task.
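One rough way to picture that threshold is as a simple comparison of risk rates. The sketch below is purely illustrative and not part of the researchers’ benchmark; the function name and the crash-rate figures are invented for the example.

```python
# Illustrative sketch only: the comparison implied by the proposed
# AI Negligence Standard, using hypothetical risk figures.

def is_reasonable(ai_risk: float, baseline_risk: float) -> bool:
    """Return True if the AI's measured risk is at or below the risk
    posed by the average human or AI performing the same task."""
    return ai_risk <= baseline_risk

# Hypothetical example: crashes per million miles for a self-driving
# system versus an average human driver (figures invented).
ai_crash_rate = 1.2
human_crash_rate = 1.5
print(is_reasonable(ai_crash_rate, human_crash_rate))  # True: meets the standard
```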
“When you design a benchmark, companies have something to aim for, but they can do it in their own way and still innovate,” Shastri said.
As they develop their prototype framework, Shastri and Diamantis strive to instill accountability in AI development as a core requirement rather than an afterthought. By bridging the gap between technology and law, their work could help shape how AI systems are judged and regulated in the U.S., ensuring that accountability and innovation can go hand in hand.
“I think the most exciting aspect is that we’re building something engineers, lawyers and judges can all understand,” Shastri said. “It connects the two worlds and empowers people on both sides.”