R21CA274717
Project Grant
Overview
Grant Description
Mobile phone-based deep learning algorithm for oral lesion screening in low-resource settings - Two-thirds of oral and oropharyngeal squamous cell carcinomas (OSCCs) occur in low- and middle-income countries (LMICs), where 5-year survival rates are only 10-40%. The poor survival in LMICs is driven by late diagnosis and delayed treatment, so it is imperative to detect potentially malignant lesions early.
To meet the need for oral cancer screening in low-resource settings (LRS), we will develop and validate a low-cost mobile phone-based imaging device powered by computer vision and deep learning image classification algorithms to guide patient triage. We are a multi-institutional team comprising optical imaging and machine learning engineers and oral/head and neck oncologists at the University of Arizona, Memorial Sloan Kettering Cancer Center, and Tata Memorial Hospital (TMH, Mumbai, serving as the LMIC setting).
In preliminary studies, our team developed and tested the hardware: a dual-mode polarized white light imaging (PWLI) and autofluorescence imaging (AFI) mobile device. Non-expert field healthcare workers read its images with a low sensitivity of 60%. A preliminary deep learning classification algorithm, implemented on a cloud-based server, improved on this with 79% sensitivity and 82% specificity.
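The sensitivity and specificity quoted here follow the standard confusion-matrix definitions. A minimal sketch, using illustrative counts only (not data from the study):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts only: 79 of 100 malignant lesions flagged,
# 82 of 100 benign lesions correctly cleared.
sens, spec = sensitivity_specificity(tp=79, fn=21, tn=82, fp=18)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}")  # 79%, 82%
```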
Our proposal addresses the key remaining hurdle - augmenting the image-reading skills of non-expert field healthcare workers - locally, in LRS in LMICs that lack internet and cloud connectivity. We will develop and validate the required software: a deep learning image classification algorithm that runs on a mobile phone and guides field healthcare workers in triaging oral lesions as benign (patients can go home) versus suspicious (patients are referred to a clinician for follow-up care). The innovations lie in the design and integration of computer vision (image mosaicking) and deep learning classification algorithms on a mobile phone-based imaging device, to provide high accuracy and consistency for screening; a sketch of both stages follows.
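As a rough illustration of the two software stages named above, the sketch below mosaics overlapping frames with OpenCV and runs a compiled TensorFlow Lite classifier on the result. The frame file names, model path, 224x224 input size, and 0.5 decision threshold are assumptions for illustration, not details from the proposal:

```python
import cv2
import numpy as np
import tensorflow as tf

# Stage 1: mosaic overlapping views of the oral cavity (hypothetical frame files).
frames = [cv2.imread(p) for p in ("view1.jpg", "view2.jpg", "view3.jpg")]
stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)  # scan mode suits near-planar surfaces
status, mosaic = stitcher.stitch(frames)
assert status == cv2.Stitcher_OK, "stitching failed"

# Stage 2: on-device triage with a compiled TFLite model (hypothetical path).
interpreter = tf.lite.Interpreter(model_path="oral_triage.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Resize to the model's expected input (assumed 224x224 RGB, float32 in [0, 1]).
img = cv2.cvtColor(cv2.resize(mosaic, (224, 224)), cv2.COLOR_BGR2RGB)
x = img.astype(np.float32)[None, ...] / 255.0

interpreter.set_tensor(inp["index"], x)
interpreter.invoke()
p_suspicious = float(interpreter.get_tensor(out["index"])[0][0])

# Binary triage decision: refer if the model flags the lesion as suspicious.
print("refer to clinician" if p_suspicious > 0.5 else "routine follow-up")
```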
Novel aspects are (i) the deep learning approach to dual-mode image contrast - PWLI contrast capturing the color and texture of normal features (increasing specificity) and AFI contrast associated with malignancy (increasing sensitivity) - and (ii) engineering of the algorithm for use on mobile devices via teacher-student knowledge distillation (sketched below).
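A minimal PyTorch sketch of the teacher-student knowledge distillation named in (ii), assuming the standard formulation: a compact student is trained against the large teacher's temperature-softened outputs plus the hard labels. The temperature, loss weighting, toy student network, and six-channel stacked PWLI+AFI input are illustrative assumptions, not specifics from the proposal:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Weighted sum of a softened KL term (teacher -> student) and hard-label CE."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale so the soft term's gradients match the hard-label term
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy student: 6 input channels for stacked PWLI (RGB) + AFI (RGB) frames (assumed).
student = nn.Sequential(
    nn.Conv2d(6, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2),
)

x = torch.randn(8, 6, 224, 224)    # dummy dual-mode batch
labels = torch.randint(0, 2, (8,)) # 0 = benign, 1 = suspicious
teacher_logits = torch.randn(8, 2) # stand-in for a frozen teacher's outputs
loss = distillation_loss(student(x), teacher_logits, labels)
loss.backward()
```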
The clinical innovation will be first-in-human testing of improvements in sensitivity and specificity, relative to purely visual interpretation, for routine use by non-expert field healthcare workers in LRS. In the R21 phase, we will develop a mobile deep learning-based oral lesion screening and patient triage algorithm and demonstrate feasibility in a cancer care setting (TMH's main hospital in Mumbai). In the R33 phase, we will optimize the algorithm and test and validate it in a large field study at TMH's regional clinic in Varanasi.
Successful completion of this project will deliver urgently needed capabilities to field healthcare workers in LRS for early detection and triage of oral potentially malignant lesions, improving early oral cancer detection rates, enabling timely referral to specialists, and ultimately improving treatment outcomes and quality of life for patients in LMICs.
Funding Goals
NOT APPLICABLE
Grant Program (CFDA)
93.393 Cancer Cause and Prevention Research
Awarding / Funding Agency
National Cancer Institute
Place of Performance
New York, New York 10065-6007, United States
Geographic Scope
Single Zip Code
Related Opportunity
Mobile Health: Technology and Outcomes in Low and Middle Income Countries (R21/R33 - Clinical Trial Optional)
Analysis Notes
Amendment: Since the initial award, total obligations have increased 71%, from $205,492 to $351,680.
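As an arithmetic check: $351,680 − $205,492 = $146,188, and $146,188 / $205,492 ≈ 0.711, consistent with the quoted 71% increase.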
Sloan-Kettering Institute For Cancer Research was awarded Project Grant R21CA274717, "Mobile Phone-Based Oral Lesion Screening in Low-Resource Settings," worth $351,680, from the National Cancer Institute in June 2023, with work to be completed primarily in New York, New York, United States. The grant has a duration of 2 years and was awarded through assistance program 93.393, Cancer Cause and Prevention Research. The Project Grant was awarded through grant opportunity Mobile Health: Technology and Outcomes in Low and Middle Income Countries (R21/R33 - Clinical Trial Optional).
Status
Complete
Last Modified 6/5/24
Period of Performance
6/7/23
Start Date
5/31/25
End Date
Funding Split
Federal Obligation
$351.7K
Non-Federal Obligation
$0.0
Total Obligated
$351.7K
Subgrant Awards
Disclosed subgrants for R21CA274717
Transaction History
Modifications to R21CA274717
Additional Detail
Award ID FAIN
R21CA274717
SAI Number
R21CA274717-3243783747
Award ID URI
SAI UNAVAILABLE
Awardee Classifications
Public/State Controlled Institution Of Higher Education
Awarding Office
75NC00 NIH NATIONAL CANCER INSTITUTE
Funding Office
75NC00 NIH NATIONAL CANCER INSTITUTE
Awardee UEI
KUKXRCZ6NZC2
Awardee CAGE
6X133
Performance District
NY-12
Senators
Kirsten Gillibrand
Charles Schumer
Budget Funding
| Federal Account | Budget Subfunction | Object Class | Total | Percentage |
|---|---|---|---|---|
| National Cancer Institute, National Institutes of Health, Health and Human Services (075-0849) | Health research and training | Grants, subsidies, and contributions (41.0) | $204,492 | 100% |
Modified: 6/5/24