2505865
Cooperative Agreement
Overview
Grant Description
Institute for Foundations of Machine Learning
The primary goal of this project is the development of broadly applicable foundational tools and new mathematical theories to advance the state of the art in generative artificial intelligence (AI).
Although AI systems are now pervasive across disparate domains, core algorithmic challenges for building and deploying large models remain.
It is critical that training algorithms make the most of available computational resources and that resulting models are accurate, robust, and interpretable during inference.
Data sets must be curated and network architectures tuned depending on the modality of the task at hand.
This research will focus on new frameworks for formally modeling these problems in order to create efficient solutions.
In addition, this project will help thousands of students and working professionals acquire AI expertise through a large-scale online master's initiative and through activities targeting high-school students.
The project's technical research is divided into four foundational thrusts.
The first, algorithms and optimization for generative models, focuses on better training and inference for large models and looks beyond first-order methods.
The second, a mathematical theory of foundation models, aims to understand how to specialize foundation models for new domains using as little additional data and compute as possible.
The third thrust is on diffusion, now a cornerstone of generative AI, and studies how to learn distributions without memorization and solve associated inverse problems.
The last thrust looks at improving the robustness and safety of generative models through the lens of distribution shift.
All of these thrusts are coupled with use-inspired projects in medical imaging, generative biology, and AI for mathematical theorem-proving.
A particular emphasis is on open-sourcing AI in order to provide transparent models for use in multiple domains.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
Subawards are planned for this award.
Awardee
University Of Texas At Austin
Funding Goals
NOT APPLICABLE
Grant Program (CFDA)
47.070 Computer and Information Science and Engineering
Awarding Agency
National Science Foundation (NSF)
Funding Agency
National Science Foundation (NSF)
Place of Performance
Austin, Texas 78712-1139, United States
Geographic Scope
Single Zip Code
Related Opportunity
NOT APPLICABLE
Analysis Notes
Amendment: Since the initial award, total obligations have increased 86%, from $3,500,000 to $6,500,000.
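As a check on that figure, ($6,500,000 − $3,500,000) / $3,500,000 ≈ 0.857, i.e., an increase of roughly 86%.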
University Of Texas At Austin was awarded Foundational Tools for Generative AI Advancement, Cooperative Agreement 2505865, worth $6,500,000, from the Division of Information and Intelligent Systems in October 2025, with work to be completed primarily in Austin, Texas, United States. The grant has a duration of 5 years and was awarded through assistance program 47.070, Computer and Information Science and Engineering.
Status
(Ongoing)
Last Modified 9/10/25
Period of Performance
Start Date
10/1/25
End Date
9/30/30
Funding Split
Federal Obligation
$6.5M
Non-Federal Obligation
$0.0
Total Obligated
$6.5M
Activity Timeline
Transaction History
Modifications to 2505865
Additional Detail
Award ID FAIN
2505865
SAI Number
None
Award ID URI
SAI EXEMPT
Awardee Classifications
Public/State Controlled Institution Of Higher Education
Awarding Office
490501 DIV OF COMPUTER COMM FOUNDATIONS
Funding Office
490510 CISE INFORMATION TECH RESEARCH
Awardee UEI
V6AFQPN18437
Awardee CAGE
9B981
Performance District
TX-25
Senators
John Cornyn
Ted Cruz