in the development of AI.
In the U.S. and internationally, many organizations aim to encourage trustworthy artificial intelligence systems—AI that users, developers, and deployers regard as accountable, responsible, and unbiased. However, the researchers at TRAILS believe that there can be no trust or accountability in AI systems without the participation of diverse stakeholders.
TRAILS researchers will work to ensure that future AI systems enhance human capacity, respect human dignity, and protect human rights.
- Developing new methods that promote AI trustworthiness.
- Empowering users to make sense of AI systems.
- Analyzing and promoting inclusive governance strategies that build trust and accountability in AI systems.
- Training the next generation of multidisciplinary talent.
- Centering voices that have been marginalized in mainstream AI.