2321786
Project Grant
Overview
Grant Description
CISE: Large: Causal Foundations for Decision Making and Learning

Artificial Intelligence (AI) has become ubiquitous in our daily lives, and the importance of decision-making as a scientific challenge has increased dramatically. Decisions that were once made by humans are increasingly delegated to automated systems or made with their assistance.
However, despite substantial recent progress, the current generation of AI technology lacks explainability, robustness, and adaptability, which hinders trust in AI. There is growing recognition that robust decision-making requires an understanding of the often complex and dynamic causal mechanisms underlying the environment, yet most current formalisms in AI lack an explicit treatment of causal mechanisms.
This project brings together the power of causal modeling and AI decision-making and learning to produce AI systems that rely on less data, can better justify and explain their decisions to people, better react to new circumstances, and consequently are safer and more trustworthy. The project produces new foundations - principles, methods, and tools - for causal decision-making systems. It enriches the traditional AI formalisms with causal ingredients for more efficient, robust, generalizable, and explainable decision-making with the potential to fundamentally transform the AI decision-making field.
The theory will be evaluated through real-world use-cases in robotics and public health. The researchers will make extensive educational efforts and develop training content with a focus on mentorship and broadening the participation of underrepresented groups. The team will engage in knowledge transfer activities including authoring an introductory book on causal decision-making and organizing events to discuss AI and decision-making topics.
This project integrates the framework of structural causal models with the leading approaches for decision-making in AI, including model-based planning with Markov decision processes and their extensions, reinforcement learning, and graphical models such as influence diagrams. The outcomes revolutionize traditional AI decision-making with causal modeling toward developing more efficient, robust, generalizable, and explainable decision-making systems.
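As a purely illustrative sketch (not drawn from the project itself), the central idea of a structural causal model is the distinction between observing a variable and intervening on it with the do-operator: under confounding, the observational quantity E[Y | X=1] differs from the interventional quantity E[Y | do(X=1)]. All variable names and structural equations below are assumptions for exposition only.

```python
import random

# Toy SCM with an unobserved confounder U influencing both X and Y:
#   X := 1[U > 0]      (structural equation for the treatment)
#   Y := 2X + U        (structural equation for the outcome)
random.seed(0)

def draw():
    u = random.gauss(0.0, 1.0)   # unobserved confounder U
    x = 1 if u > 0 else 0        # X := 1[U > 0]
    y = 2.0 * x + u              # Y := 2X + U
    return u, x, y

N = 200_000

# Observational: conditioning on X = 1 implicitly selects samples with U > 0.
obs = [y for _, x, y in (draw() for _ in range(N)) if x == 1]
obs_mean = sum(obs) / len(obs)

# Interventional: do(X = 1) severs the U -> X edge; U is drawn independently.
do = [2.0 * 1 + random.gauss(0.0, 1.0) for _ in range(N)]
do_mean = sum(do) / len(do)

print(f"E[Y | X=1]     ≈ {obs_mean:.2f}")  # biased upward by the confounder U
print(f"E[Y | do(X=1)] ≈ {do_mean:.2f}")   # close to the true causal effect, 2.0
```

The gap between the two estimates is exactly what causal formalisms like structural causal models make explicit, and what purely correlational decision-making formalisms miss.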
In three thrusts, the project develops new foundations (i.e., principles, theory, and algorithms) and provides a common unified framework for causal-empowered decision-making that generalizes the leading decision-making approaches. Thrust 1 studies essential aspects of causal decision-making to guarantee that the decisions of autonomous agents and decision-support systems are robust, sample-efficient, and precise. These goals are realized by developing methods for causality-integrated online and offline policy learning, interventional planning, imitation learning, curriculum learning, knowledge transfer, and adaptation.
Thrust 2 studies additional aspects of causal decision-making that are especially important for decision-support systems where humans are in the loop, including how to exploit causality for constructing explanations, decide when to involve humans, and endow the systems with competence awareness and the ability to make fair decisions that align with the values of their users.
Thrust 3 enhances the scalability of the resulting tools and their ability to reason efficiently, to trade off both between multiple objectives and between explainability and decision quality, and to learn a causal model of the world. Together, these thrusts will contribute to a new generation of powerful AI tools for developing autonomous agents and decision-support systems.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria. Subawards are planned for this award.
Funding Goals
The goal of this funding opportunity, "Computer and Information Science and Engineering (CISE): Core Programs, Large Projects", is identified at: https://www.nsf.gov/publications/pub_summ.jsp?ods_key=nsf23524
Grant Program (CFDA)
Awarding Agency
Funding Agency
Place of Performance
New York, New York 10027, United States
Geographic Scope
Single Zip Code
Related Opportunity
Analysis Notes
Amendment Since initial award the End Date has been extended from 09/30/27 to 09/30/28 and the total obligations have increased 225% from $1,000,000 to $3,250,000.
The Trustees of Columbia University in the City of New York was awarded Project Grant 2321786, Causal Foundations for Trustworthy AI Decision-Making, worth $3,250,000, from the Division of Information and Intelligent Systems in October 2023, with work to be completed primarily in New York, New York, United States. The grant has a duration of 5 years and was awarded through assistance program 47.070, Computer and Information Science and Engineering. The Project Grant was awarded through grant opportunity Computer and Information Science and Engineering (CISE): Core Programs, Large Projects.
Status
(Ongoing)
Last Modified 11/17/25
Period of Performance
10/1/23
Start Date
9/30/28
End Date
Funding Split
$3.2M
Federal Obligation
$0.0
Non-Federal Obligation
$3.2M
Total Obligated
Activity Timeline
Transaction History
Modifications to 2321786
Additional Detail
Award ID FAIN
2321786
SAI Number
None
Award ID URI
SAI EXEMPT
Awardee Classifications
Private Institution Of Higher Education
Awarding Office
490505 DIV OF COMPUTER NETWORK SYSTEMS
Funding Office
490510 CISE INFORMATION TECH RESEARCH
Awardee UEI
F4N1QNPB95M4
Awardee CAGE
1B053
Performance District
NY-13
Senators
Kirsten Gillibrand
Charles Schumer
Budget Funding
| Federal Account | Budget Subfunction | Object Class | Total | Percentage |
|---|---|---|---|---|
| Research and Related Activities, National Science Foundation (049-0100) | General science and basic research | Grants, subsidies, and contributions (41.0) | $1,000,000 | 100% |