2229885
Cooperative Agreement
Overview
Grant Description
Institute for Trustworthy AI in Law and Society (TRAILS)
Artificial Intelligence (AI) systems have the potential to enhance human capacity, increase productivity, catalyze innovation, and mitigate complex problems. However, current AI systems are not created transparently, and the opaque processes used to produce results that are not well understood undermine public trust. Additionally, AI systems can cause harm, particularly to communities that are excluded from participating in their development. This lack of trustworthiness will slow the adoption of these AI technologies.
It is critical to AI innovation to include groups affected by the benefits and harms of these AI systems. The TRAILS (Trustworthy AI in Law and Society) Institute is a partnership of the University of Maryland, the George Washington University, Morgan State University, and Cornell University. The institute encourages community participation in the development of AI techniques, tools, and scientific theories. The design and policy recommendations produced by TRAILS will promote the trustworthiness of AI systems.
A first goal of the TRAILS Institute is to discover ways to change the design and development of AI systems. This will help communities make informed choices about AI technology adoption. A second goal is the development of best practices for industry and government, fostering AI innovation while keeping communities safe, engaged, and informed.
The TRAILS Institute has explicit plans for increasing the participation of affected communities, including K-12 education and congressional staff. These plans aim to elicit concerns and expectations from the affected communities and provide an improved understanding of the risks and benefits of AI-enabled systems.
The TRAILS Institute's research program identifies four key thrusts, targeting key aspects of the AI system development lifecycle. The first thrust is social values, involving increasing participation throughout all aspects of AI development to ensure that the values produced by AI systems reflect community and interested parties' values. This includes participatory design with diverse communities, resulting in community-based interventions and adaptations for the AI development lifecycle.
The second thrust is technical design, which includes the development of algorithms to promote transparency and trust in AI. This also involves the development of tools that increase the robustness of AI systems and promote user and developer understanding of how AI systems operate.
The third thrust is socio-technical perceptions, which involves the development of novel measures, including psychometric techniques and experimental paradigms, to assess the interpretability and explainability of AI systems. This will enable a deeper understanding of how existing metrics and algorithms are perceived, providing insight into the values held by the included community members.
The fourth thrust is governance, which includes documentation and analysis of governance regimes for both data and technologies. These provide the underpinning for the development of platform and technology regulation. Ethnographers will analyze the institute itself and partner organizations to document ways in which technical choices translate to governance impacts.
The research focus of the TRAILS Institute is in two use-inspired areas: information dissemination systems (e.g., social media platforms) and energy-intensive systems (e.g., autonomous systems).
The institute's education and workforce development efforts in AI include new educational offerings catering to various markets, ranging from secondary education to executive education. The TRAILS Institute is especially focused on expanding access to foundational education for historically marginalized and minoritized groups of learners and users. The institute will work with these communities to learn from, educate, and recruit participants, with a focus on retaining, supporting, and empowering those marginalized in mainstream AI. The integration of these communities into the AI research program broadens participation in AI development and governance.
The National Institute of Standards and Technology (NIST) is partnering with NSF to provide funding for this institute. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the foundation's intellectual merit and broader impacts review criteria.
Funding Goals
The goal of this funding opportunity, "National Artificial Intelligence (AI) Research Institutes," is identified at: https://www.nsf.gov/publications/pub_summ.jsp?ods_key=nsf22502
Grant Program (CFDA)
Awarding / Funding Agency
Place of Performance
College Park,
Maryland
20742-5100
United States
Geographic Scope
Single Zip Code
Related Opportunity
Analysis Notes
Amendment: Since the initial award, total obligations have increased 664%, from $2,250,000 to $17,190,049.
The University of Maryland, College Park was awarded Cooperative Agreement 2229885, "TRAILS: Trustworthy AI in Law & Society," worth $17,190,049, from the Division of Information and Intelligent Systems in June 2023, with work to be completed primarily in College Park, Maryland, United States. The grant has a duration of 5 years and was awarded through assistance program 47.070, Computer and Information Science and Engineering. The Cooperative Agreement was awarded through the grant opportunity National Artificial Intelligence (AI) Research Institutes.
Status
(Ongoing)
Last Modified 9/10/25
Period of Performance
6/1/23
Start Date
5/31/28
End Date
Funding Split
$17.2M
Federal Obligation
$0.0
Non-Federal Obligation
$17.2M
Total Obligated
Activity Timeline
Subgrant Awards
Disclosed subgrants for 2229885
Transaction History
Modifications to 2229885
Additional Detail
Award ID FAIN
2229885
SAI Number
None
Award ID URI
SAI EXEMPT
Awardee Classifications
Public/State Controlled Institution Of Higher Education
Awarding Office
490502 DIV OF INFOR INTELLIGENT SYSTEMS
Funding Office
490510 CISE INFORMATION TECH RESEARCH
Awardee UEI
NPU8ULVAAS23
Awardee CAGE
0UB92
Performance District
MD-04
Senators
Benjamin Cardin
Chris Van Hollen
Budget Funding
| Federal Account | Budget Subfunction | Object Class | Total | Percentage |
|---|---|---|---|---|
| Research and Related Activities, National Science Foundation (049-0100) | General science and basic research | Grants, subsidies, and contributions (41.0) | $7,626,273 | 100% |
Modified: 9/10/25