2432702
Project Grant
Overview
Grant Description
SBIR Phase I: RELAI: Enhancing reliability of large language models
The broader/commercial impact of this Small Business Innovation Research (SBIR) Phase I project is significant and multifaceted.
By improving the reliability of advanced artificial intelligence (AI) models such as large language models (LLMs), this project will contribute to safer and more effective AI implementations across various industries, thereby enhancing economic competitiveness and fostering public trust in AI technologies.
This project not only contributes to the foundational understanding of LLM reliability but also provides practical solutions that can be widely adopted in the industry.
Additionally, the enhancements in AI reliability will have far-reaching impacts on sectors like healthcare, finance, and customer service, where accurate and unbiased decision-making is crucial.
For instance, in healthcare, reliable LLMs can lead to better diagnostic tools and personalized treatment plans, ultimately improving patient outcomes and reducing healthcare costs.
In finance, these models can enhance risk assessment and fraud detection, providing more secure and efficient financial services.
The project's integration with academic and industry partners ensures that the developed technologies are not only cutting-edge but also grounded in real-world applicability.
Furthermore, the project includes a strong educational component aimed at training the next generation of AI practitioners in ethical AI practices.
This Small Business Innovation Research (SBIR) Phase I project introduces innovative methodologies to enhance the reliability of large language models (LLMs).
In particular, the project presents methodologies to inspect and mitigate jailbreak issues of LLMs, where adversarial prompts can circumvent their alignment; methodologies to inspect and mitigate LLM hallucinations, where models can generate non-factual responses; and methodologies to inspect and mitigate LLM biases.
Notably, the development of methodologies to inspect and mitigate jailbreaks in LLMs represents a significant advancement in adversarial machine learning.
This work not only identifies the vulnerabilities in current LLMs but also proposes robust countermeasures to fortify these models against sophisticated attacks.
Moreover, the methodologies to inspect and mitigate hallucinations in LLMs involve sophisticated analysis of model outputs to identify when and why hallucinations occur, providing deeper insights into the internal workings of LLMs.
Finally, addressing biases in LLMs is a critical component of ensuring ethical and fair artificial intelligence (AI).
By integrating these advanced tools into a comprehensive, user-friendly, and unified platform, this project establishes a new benchmark for the development and deployment of reliable AI applications.
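The hallucination-inspection direction described above can be illustrated with a minimal self-consistency check. This is a generic technique sketched for illustration only, not necessarily the project's actual methodology: sample the model several times on the same prompt and flag answers whose agreement falls below a threshold. The `model` callables below are hypothetical stand-ins, not real LLM calls.

```python
from collections import Counter

def consistency_score(answers):
    """Fraction of samples that agree with the most common answer.
    Low agreement across repeated samples is a rough hallucination signal."""
    counts = Counter(answers)
    most_common_count = counts.most_common(1)[0][1]
    return most_common_count / len(answers)

def flag_hallucination(model, prompt, n_samples=5, threshold=0.6):
    """Query the model n_samples times; flag if agreement is below threshold.
    `model` is any callable prompt -> answer (a hypothetical stand-in here)."""
    answers = [model(prompt) for _ in range(n_samples)]
    score = consistency_score(answers)
    return score < threshold, score

# Toy stand-in "models" for demonstration (not real LLM calls):
consistent_model = lambda prompt: "Paris"
_samples = iter(["Paris", "Lyon", "Paris", "Nice", "Rome"])
inconsistent_model = lambda prompt: next(_samples)

print(flag_hallucination(consistent_model, "Capital of France?"))    # (False, 1.0)
print(flag_hallucination(inconsistent_model, "Capital of France?"))  # (True, 0.4)
```

In practice such a check would be one signal among several; production systems typically combine sampling-based agreement with retrieval-grounded fact verification.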
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the foundation's intellectual merit and broader impacts review criteria.
Subawards are not planned for this award.
Awardee
Funding Goals
The goal of this funding opportunity, "NSF Small Business Innovation Research (SBIR)/Small Business Technology Transfer (STTR) Programs Phase I", is identified at: https://www.nsf.gov/publications/pub_summ.jsp?ods_key=nsf23515
Grant Program (CFDA)
Awarding / Funding Agency
Place of Performance
Bethesda, Maryland 20817-6624, United States
Geographic Scope
Single Zip Code
Relai was awarded Project Grant 2432702, worth $275,000, from the National Science Foundation in September 2024, with work to be completed primarily in Bethesda, Maryland, United States.
The grant has a duration of 1 year and was awarded through assistance program 47.084, NSF Technology, Innovation, and Partnerships.
The Project Grant was awarded through grant opportunity NSF Small Business Innovation Research / Small Business Technology Transfer Phase I Programs.
SBIR Details
Research Type
SBIR Phase I
Title
SBIR Phase I: RELAI: Enhancing Reliability of Large Language Models
Abstract
The broader/commercial impact of this Small Business Innovation Research (SBIR) Phase I project is significant and multifaceted. By improving the reliability of advanced artificial intelligence (AI) models such as Large Language Models (LLMs), this project will contribute to safer and more effective AI implementations across various industries, thereby enhancing economic competitiveness and fostering public trust in AI technologies. This project not only contributes to the foundational understanding of LLM reliability but also provides practical solutions that can be widely adopted in the industry. Additionally, the enhancements in AI reliability will have far-reaching impacts on sectors like healthcare, finance, and customer service, where accurate and unbiased decision-making is crucial. For instance, in healthcare, reliable LLMs can lead to better diagnostic tools and personalized treatment plans, ultimately improving patient outcomes and reducing healthcare costs. In finance, these models can enhance risk assessment and fraud detection, providing more secure and efficient financial services. The project's integration with academic and industry partners ensures that the developed technologies are not only cutting-edge but also grounded in real-world applicability. Furthermore, the project includes a strong educational component aimed at training the next generation of AI practitioners in ethical AI practices.
This Small Business Innovation Research (SBIR) Phase I project introduces innovative methodologies to enhance the reliability of large language models (LLMs). In particular, the project presents methodologies to inspect and mitigate jailbreaking issues of LLMs, where adversarial prompts can circumvent their alignment; methodologies to inspect and mitigate LLM hallucinations, where models can generate non-factual responses; and methodologies to inspect and mitigate LLM biases. Notably, the development of methodologies to inspect and mitigate jailbreaking in LLMs represents a significant advancement in adversarial machine learning. This work not only identifies the vulnerabilities in current LLMs but also proposes robust countermeasures to fortify these models against sophisticated attacks. Moreover, the methodologies to inspect and mitigate hallucinations in LLMs involve sophisticated analysis of model outputs to identify when and why hallucinations occur, providing deeper insights into the internal workings of LLMs. Finally, addressing biases in LLMs is a critical component of ensuring ethical and fair artificial intelligence (AI). By integrating these advanced tools into a comprehensive, user-friendly, and unified platform, this project establishes a new benchmark for the development and deployment of reliable AI applications.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
Topic Code
AI
Solicitation Number
NSF 23-515
Status
(Complete)
Last Modified 9/17/24
Period of Performance
Start Date
9/1/24
End Date
8/31/25
Funding Split
Federal Obligation
$275.0K
Non-Federal Obligation
$0.0
Total Obligated
$275.0K
Activity Timeline
Additional Detail
Award ID FAIN
2432702
SAI Number
None
Award ID URI
SAI EXEMPT
Awardee Classifications
Small Business
Awarding Office
491503 TRANSLATIONAL IMPACTS
Funding Office
491503 TRANSLATIONAL IMPACTS
Awardee UEI
RQY1RHCGQBV7
Awardee CAGE
9XJ17
Performance District
MD-08
Senators
Benjamin Cardin
Chris Van Hollen