2436881
Project Grant
Overview
Grant Description
Collaborative Research: HCC: Small: Modeling learning engagement using AI and gaze data to improve virtual reality immersive training.
A strong U.S. workforce is vital to maintain a standard of excellence and global leadership in science, technology, engineering and mathematics (STEM).
However, the pace at which Americans are entering the STEM workforce is too slow to meet the pressing needs of industries such as healthcare, the military, manufacturing, and construction.
New training methods are needed to improve how learners acquire STEM knowledge and skills to enter the U.S. STEM workforce.
Immersive virtual reality (VR) is a training platform that can provide individualized training and education for all Americans.
VR experiences with high engagement are particularly effective as mechanisms for learning new and complex ideas.
However, without real-time instructor feedback, many learners struggle to stay engaged.
This project explores how eye tracking and artificial intelligence (AI) can be used to measure and guide learner engagement in VR training.
By automatically detecting when learners are focused or distracted, the system can provide visual cues to support attention and understanding.
The outcomes of this work will help more Americans gain the knowledge and skills needed for high-demand STEM careers.
The project will also open novel opportunities for improving training approaches for people with attention or cognitive challenges.
Overall, the knowledge generated from this project will enhance STEM training outcomes when using VR.
This work will empower more Americans to advance our national welfare, prosperity and security.
This project investigates how gaze-based models can be used for estimating engagement and reactively guiding user attention within immersive VR training experiences.
Specifically, the VR training will focus on construction safety, a domain whose workforce is growing as rapidly as those of other STEM disciplines.
Currently, the speed at which construction workers and professionals gain the knowledge and skills needed to complete job tasks safely remains inadequate. To perform construction work safely, and to reduce workplace fatalities and injuries, trainees must stay highly engaged during short, demanding, and complex training interventions.
The project is structured around three thrusts: (1) data collection and modeling approaches, (2) reactive designs to support engagement, and (3) validation with construction professionals.
First, the project will develop real-time and post-hoc annotation methods to label engagement states.
This process will use gaze, engagement, and training performance data from VR training sessions.
These data will be used to reduce labeling bias and improve model accuracy.
Then, machine learning techniques (e.g., deep learning, spiking neural networks, liquid networks) will be used to identify the key fixation, saccade, and pupillary features that relate to engagement and performance, and to build real-time engagement models from them.
Various time scales, features, and models will be explored to produce optimal real-time estimations of learner engagement.
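While the abstract does not specify an implementation, the kind of pipeline it describes, deriving fixation, saccade, and pupillary features from windows of gaze samples and classifying engagement in real time, can be sketched briefly. Everything in the sketch below (the I-VT velocity threshold, the feature set, the random-forest classifier, and the placeholder data) is an assumption for illustration only, not the project's actual design.

```python
# Illustrative sketch only: windowed gaze features -> engagement estimate.
# Thresholds, feature set, and model choice are assumptions for exposition.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def gaze_features(t, gx, gy, pupil, velocity_thresh=30.0):
    """Summarize one window of gaze samples.

    t: timestamps (s); gx, gy: gaze angles (deg); pupil: diameter (mm).
    velocity_thresh: I-VT threshold (deg/s) separating fixations from saccades.
    """
    v = np.hypot(np.diff(gx), np.diff(gy)) / np.diff(t)  # angular velocity
    fixating = v < velocity_thresh                        # I-VT classification
    return np.array([
        fixating.mean(),                                    # fixation time fraction
        v[~fixating].mean() if (~fixating).any() else 0.0,  # mean saccade velocity
        pupil.mean(),                                       # mean pupil diameter
        pupil.std(),                                        # pupil variability
    ])

# Train on labeled windows; labels would come from the annotation step
# described above (random placeholders here).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))             # placeholder feature windows
y = rng.integers(0, 3, size=200)          # 0=low, 1=medium, 2=high engagement
model = RandomForestClassifier().fit(X, y)

# Real-time use: slide a 1 s window (120 Hz here) over the gaze stream.
t = np.linspace(0, 1, 120)
gx, gy = rng.normal(size=120), rng.normal(size=120)
pupil = 3.5 + 0.2 * rng.normal(size=120)
level = model.predict(gaze_features(t, gx, gy, pupil).reshape(1, -1))[0]
print("estimated engagement level:", level)
```

In the project itself, spiking and liquid neural networks are among the model families to be explored, and the choice of window length and feature set is itself a research question within this thrust.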
Second, the research team will investigate how different activation functions can be used to trigger visual attention cues in response to user engagement levels (low, medium, high).
Human-centered studies will assess how these functions influence learners' ability to initiate, sustain, or regain engagement.
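The abstract leaves the form of these activation functions open. Purely as an illustration, the sketch below shows two hypothetical mappings from a normalized engagement estimate in [0, 1] to the intensity of a visual attention cue (for example, the opacity of a highlight): a hard step that switches the cue on in the low-engagement band, and a sigmoid that ramps the cue up smoothly as engagement falls. The function shapes and parameters are assumptions, not the project's design.

```python
# Illustrative sketch only: "activation functions" mapping an engagement
# estimate in [0, 1] to visual cue intensity. Parameters are assumptions.
import numpy as np

def step_cue(engagement, low=0.33):
    """Binary cue: fully on whenever engagement drops into the low band."""
    return 1.0 if engagement < low else 0.0

def sigmoid_cue(engagement, midpoint=0.5, steepness=10.0):
    """Graded cue that ramps up smoothly as engagement falls."""
    return 1.0 / (1.0 + np.exp(steepness * (engagement - midpoint)))

for e in (0.1, 0.5, 0.9):  # low, medium, high engagement
    print(f"engagement={e:.1f}  step={step_cue(e):.2f}  sigmoid={sigmoid_cue(e):.2f}")
```

Comparing such alternatives, for instance whether an abrupt cue regains attention faster than a gradual one without disrupting learning, is the kind of question the human-centered studies in this thrust could address.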
Third, the project will validate the resulting models in VR construction safety training sessions with industry professionals.
The effectiveness and transferability of real-time attention guidance for improving VR training will be evaluated.
Ultimately, these thrusts will advance scientific understanding of gaze-based engagement modeling for learning construction safety material, of using AI to guide visual attention, and of improving VR tools for workforce training outcomes.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the foundation's intellectual merit and broader impacts review criteria.
Subawards are not planned for this award.
Funding Goals
The goal of this funding opportunity, "Computer and Information Science and Engineering (CISE): Core Programs", is identified at the link: https://www.nsf.gov/publications/pub_summ.jsp?ods_key=nsf23561
Place of Performance
Tempe, Arizona 85287-6011, United States
Geographic Scope
Single Zip Code
Arizona State University was awarded Project Grant 2436881, worth $260,000, from the Division of Information and Intelligent Systems in August 2025, with work to be completed primarily in Tempe, Arizona, United States. The grant has a duration of 2 years 4 months and was awarded through assistance program 47.070, Computer and Information Science and Engineering. The Project Grant was awarded through the grant opportunity Computer and Information Science and Engineering (CISE): Core Programs.
Status
(Ongoing)
Last Modified 8/12/25
Period of Performance
Start Date
8/1/25
End Date
12/31/27
Funding Split
Federal Obligation
$260.0K
Non-Federal Obligation
$0.0
Total Obligated
$260.0K
Additional Detail
Award ID FAIN
2436881
SAI Number
None
Award ID URI
SAI EXEMPT
Awardee Classifications
Public/State Controlled Institution Of Higher Education
Awarding Office
490502 DIV OF INFOR INTELLIGENT SYSTEMS
Funding Office
490502 DIV OF INFOR INTELLIGENT SYSTEMS
Awardee UEI
NTLHJXM55KZ6
Awardee CAGE
4B293
Performance District
AZ-04
Senators
Kyrsten Sinema
Mark Kelly