Artificial intelligence has the potential to benefit society in myriad ways, from health and the economy to education and public safety. But it can be just as harmful, especially in areas such as hiring and criminal justice, where the cost of a wrong decision can be immense.
That is one of the messages that Aron Culotta, an associate professor of computer science at Tulane University School of Science and Engineering, is sharing with community organizations through the Tulane Center for Community-Engaged Artificial Intelligence.
The center, which had its kickoff meeting in April, brings together a diverse group of scholars, including economists, sociologists and public health experts, to investigate the role of artificial intelligence in society.
It is part of the Vice President for Research’s program to create Centers of Excellence that mobilize investigators from different fields of study across the university to focus on complex research challenges.
The thrust of the AI center will be to identify common issues in AI that span domains and to develop novel human-centered, community-engaged approaches with broad applications. For its first event, the center invited community stakeholders to a discussion titled “Artificial Intelligence: Risks and Benefits for Local Communities.”
“This was an opportunity for academics and community members to discuss emerging issues in artificial intelligence and the potential benefits and harms it may have on our society and local communities,” said Culotta, the center’s director.
Among the nonprofit groups represented were CourtWatch NOLA, Thrive NOLA, Guardians Institute, UNOLA (formerly the Mardi Gras Indian Hall of Fame), the National Coalition of 100 Black Women, Women with a Vision and The Data Center. From Tulane, experts in computer science, public health, sociology, Africana studies and economics were in attendance.
“In addition to networking, we brainstormed ways AI can be helpful and/or harmful in our communities,” Culotta said. “Our next steps are to summarize the results, share back with participants and schedule one-on-one follow-ups to identify partners for projects to begin in the summer or fall.”
Nicholas Mattei, assistant professor of computer science at Tulane and an expert in AI ethics, said that despite its benefits, there is a pervasive public mistrust of AI technology, and for good reason.
“Across the country, there is a growing tech backlash concerned that AI may exacerbate existing disparities, widen the digital divide or otherwise result in a less just society,” he said. “Often these concerns grow out of a sense that these technologies are applied to people and communities, without their input.”
He cited the example of recidivism software analyzed by ProPublica in 2016. The software attempted to predict whether defendants would reoffend if released on bail.
“It was much more likely to predict that White defendants would not reoffend, so they were released more frequently and at lower bail amounts,” Mattei said. “The upshot was that more Black people were being held on high bail or not released on bail at all.”
Tulane sociologist Andrea Boyles, a race and gender scholar and associate professor in the School of Liberal Arts, said the software illustrates how AI can expand racialized surveillance, stigmatization and criminalization, often without the knowledge of, and to the detriment of, Black people and other marginalized groups. As part of the center’s team of experts, she plans to work with such groups “to further understand and counter everyday harms that may be exacerbated through computer technology.”
Other faculty members working with the center include Alessandra Bazzano, who conducts research in maternal and child health as an associate professor in the School of Public Health and Tropical Medicine, and Patrick Button, a Tulane economist in the School of Liberal Arts with expertise in discrimination, particularly in employment and mortgage access.
Bazzano is especially interested in youth mental health and behavior change and plans to explore the development of AI-enabled parenting resources. “With input from community members, we’ll be able to build tools that meet people’s needs and are designed for them from the ground up,” she said.
Button, too, is working with the center to create AI systems that are inclusive, effective, fair and transparent. “I am using AI tools, such as natural language processing, to study ‘subtle’ discrimination in markets,” Button said. “For example, do mortgage loan officers use less helpful, enthusiastic or polite language when working with same-sex couples or Black prospective borrowers?”
Culotta said the center’s community partners expressed numerous concerns. For instance, they don’t want biased algorithms to lead to discriminatory behaviors, nor do they want AI systems developed without knowledge of the communities in which they are used.
“We can work with local organizations to audit existing AI tools and develop ways to monitor them over time,” Culotta said. “And we can build pilot AI projects that are driven by community members from the design phase through deployment. Our hope is to discover reproducible processes that can serve as best practices for other applications.”
In addition to the AI center, there are four other Centers of Excellence created as part of the Vice President for Research’s initiative. They are the Tulane Center of Excellence in Sex-Based Biology and Medicine, the Tulane Personalized Health Institute, the Center of Excellence for Emerging and Re-emerging Infectious Disease Research and the Institute for Integrated Data and Health Sciences.