Search Contract Opportunities

Synthetic Aperture Radar (SAR) Image Generation Data Augmentation (SIGDA)

ID: DTRA21B-001 • Type: SBIR / STTR Topic • Match: 100%

Description

RT&L FOCUS AREA(S): Artificial Intelligence / Machine Learning
TECHNOLOGY AREA(S): Battlespace; Sensors

The technology within this topic is restricted under the International Traffic in Arms Regulations (ITAR), 22 CFR Parts 120-130, which control the export and import of defense-related materiel and services, including export of sensitive technical data, or the Export Administration Regulations (EAR), 15 CFR Parts 730-774, which control dual-use items. Offerors must disclose any proposed use of foreign nationals (FNs), their country(ies) of origin, the type of visa or work permit possessed, and the statement of work (SOW) tasks intended for accomplishment by the FN(s), in accordance with section 3.5 of the Announcement. Offerors are advised that foreign nationals proposed to perform on this topic may be restricted due to the technical data under U.S. export control laws.

OBJECTIVE: Develop a method to produce synthetic SAR data for augmentation into Artificial Intelligence (AI) Automatic Target Recognition (ATR) algorithms, and assess the improvement over current methods. Leverage existing radiative transfer models (RTMs) within the research community to create phase history data as well as radar images from which specific features can be exploited for use in current ATR algorithms. Explore the use of state-of-the-art AI methods, such as Generative Adversarial Networks (GANs), in conjunction with RTM results to produce realizable synthetic SAR data and further improve ATR training.

DESCRIPTION: Current Standard Operating Procedures (SOPs) for SAR image analysis consist of labor-intensive manual processes. SAR analysis currently requires a trained analyst with years of experience to accurately classify targets in a scene. Analysts cannot keep up with the amount of captured data that needs to be processed, which has spawned attempts to push human capabilities further [1].
The sheer volume of data from disparate systems produces a situation in which reviewing all collected imagery becomes an impossibility for the Intelligence Communities (ICs). Specifically for the Counter Weapons of Mass Destruction (CWMD) mission, foreign governments purposely take actions, such as moving locations and the use of remote sites, that make it difficult for analysts to identify objects of interest. AI automated solutions have been proposed as a force multiplier with the potential to significantly increase the amount of actionable intelligence an analyst can produce [2]. Despite the promise that AI presents for the SAR analysis problem, training data for ATR algorithms is scarce. AI algorithms must first be trained on existing data in order to process and make classifications on new data. Finding quality data that meets the end goal of the algorithm is often the Achilles' heel of ATR systems. Moreover, the training data must incorporate all possible aspects of the target, viewpoint, and scene, making the task of creating a training set difficult and cumbersome. Images are often translated, rotated, cropped, and have noise added in various ways to capture these possibilities. However, creating such a dataset for SAR imagery of desired military targets is even more difficult, cost prohibitive, and impractical given the very limited available data. Instead, the use of RTMs for the creation of synthetic data has shown promise for ATR algorithms on other sensor modalities and can be extended to SAR [3]. A number of RTMs with SAR capability already exist and should be further developed for the SAR synthetic data augmentation problem. Some of these models include RaySAR [4], CohRaS [5], SARViz [6], and DIRSIG [7]. These systems were originally created with engineering studies in mind, for instance, sensor specifications, target characteristics, environmental conditions, platform properties, and so forth.
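The augmentation operations described above (translation, rotation, cropping, added noise) can be sketched for a single image chip as follows. This is an illustrative NumPy toy, not part of the solicitation; the multiplicative gamma speckle model used for the noise step is a common convention for SAR intensity imagery, assumed here for illustration:

```python
import numpy as np

def augment_sar_chip(chip, rng=None):
    """Produce simple augmented variants of a single SAR image chip.

    Illustrative only: a real pipeline would operate on complex phase
    history data and use physically motivated transforms.
    """
    rng = np.random.default_rng(rng)
    h, w = chip.shape
    out = []
    # Rotation: 90-degree multiples avoid interpolation artifacts.
    for k in range(4):
        out.append(np.rot90(chip, k))
    # Translation: circular shift by a few pixels in each axis.
    dy, dx = rng.integers(-3, 4, size=2)
    out.append(np.roll(chip, (dy, dx), axis=(0, 1)))
    # Cropping: central crop, zero-padded back to the original size.
    m = min(h, w) // 8
    crop = chip[m : h - m, m : w - m]
    out.append(np.pad(crop, ((m, m), (m, m))))
    # Noise: multiplicative speckle (mean-1 gamma), a common SAR model.
    out.append(chip * rng.gamma(shape=4.0, scale=0.25, size=chip.shape))
    return out
```

Each call returns seven variants of the input chip; in practice these would be generated on the fly during ATR training rather than stored.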
Generally, RTMs are based on statistical ray-tracing techniques applied to a 3D scene description to predict at-sensor radiance contributions from scene components. Scene descriptions can contain detailed information such as surface Bidirectional Reflectance Distribution Functions (BRDFs), textures, and spectral dependencies. Environmental conditions such as atmospheric propagation are also often incorporated through models such as MODTRAN [8]. Sensor and antenna specifications such as power, frequency, and gain pattern are important parameters included for robust simulations. With the ability to create physically realizable SAR data, RTM outputs are well suited to solving the lack-of-training-data problem for SAR ATR algorithms. ATR algorithms are aimed at solving the classification problem of objects in a scene. Convolutional Neural Networks (CNNs) have become the most common method for difficult classification problems and have proven highly effective due to their ability to home in on local features. CNNs are composed of layered convolutions with learned filters that capture local spatial structure, making them an ideal choice for image classification. A number of CNNs have been developed for the SAR classification problem with promising accuracy but often lack sufficient datasets [2] [9] [10]. One of the most recent studies on the creation of synthetic SAR data for augmentation into ATR algorithms looked at processing RTM visible imagery into SAR-like imagery using a GAN [11]. Although the study showed that the GAN-produced synthetic imagery was missing important features required to improve ATR accuracy, the researchers proposed that RTMs instead produce the SAR data directly, with a GAN then used to improve the realism of the SAR image.
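As a toy illustration of why convolutional filters are effective at picking out local structure, the following sketches the valid-mode 2D cross-correlation that underlies CNN layers. The fixed edge kernel and synthetic "scene" are illustrative stand-ins for the learned filters and real SAR chips the solicitation has in mind:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Valid-mode 2D cross-correlation (the core operation in CNN layers).

    Unoptimized sketch; frameworks use learned kernels and fast
    implementations, but the locality principle is the same.
    """
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A fixed vertical-edge kernel responds strongly at the boundaries of a
# bright "target" patch and not at all over homogeneous background.
edge = np.array([[-1.0, 0.0, 1.0]] * 3)
scene = np.zeros((8, 8))
scene[2:6, 3:6] = 1.0          # bright rectangular "target"
response = conv2d_valid(scene, edge)
```

The response map peaks at the target's left edge and dips at its right edge, which is exactly the kind of local feature a trained CNN filter learns to detect.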
PHASE I: An in-depth literature review comparing current SAR radiative transfer models, data sets, and ATR algorithms is first required to understand the state of the art. The advantages and disadvantages of the different available RTMs, as well as their availability for use in this effort, will be determined. An RTM will then be chosen, acquired, and used to produce synthetic SAR data, both phase history data and imagery. SAR datasets containing objects of interest will also be researched, one example being MSTAR [12]. An ATR algorithm will be chosen based on the literature review results and availability. The ATR algorithm will be trained with the off-the-shelf data set and tested for accuracy. The training data will then be augmented with synthetically generated SAR data. Metrics such as precision and recall will be tracked to measure the increase in ATR performance with data augmentation. Deliver the model, all software, data, and reports on the effort.

PHASE II: Build upon lessons learned from Phase I, pursuing efforts that show promise in SAR data augmentation. Research AI methods to enhance synthetic imagery, such as the use of GAN algorithms. Implement AI and other synthetic imagery enhancements and test the resulting ATR improvements. Produce a TRL 6 system by incorporating models into operational analytical tools and performing a technology demonstration. Metrics such as precision and recall will be tracked to measure the increase in ATR performance with data augmentation. Deliver the system, model, all software, data, and reports on the effort.

PHASE III DUAL USE APPLICATIONS: Finalize and commercialize the software for use by customers (e.g., government, satellite companies). Although additional funding may be provided through DoD sources, the awardee should look to other public or private sector funding sources for assistance with transition and commercialization.

REFERENCES:
1. R. A. McKinley, L. McIntire, N. Bridges, C. Goodyear, N. B. Bangera and M. P. Weisend, "Acceleration of image analyst training with transcranial direct current stimulation," Behavioral Neuroscience, vol. 127, no. 6, p. 936, 2013.
2. C. Coman, "A deep learning SAR target classification experiment on MSTAR dataset," in International Radar Symposium, Bonn, Germany, 2018.
3. R. Kemker, C. Salvaggio and C. Kanan, "Algorithms for semantic segmentation of multispectral remote sensing imagery using deep learning," ISPRS Journal of Photogrammetry and Remote Sensing, no. 145, pp. 60-77, 2018.
4. S. Auer, R. Bamler and P. Reinartz, "RaySAR - 3D SAR simulator: Now open source," IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 2016.
5. H. Hammer, K. Hoffmann and K. Schulz, "On the classification of passenger cars in airborne SAR images using simulated training data and a convolutional neural network," Image and Signal Processing for Remote Sensing XXIV, International Society for Optics and Photonics, vol. 10789, 2018.
6. M. Gupta, V. Malhotra, B. Shah, S. Prakash, A. Sharma and B. Kartikeyan, "RISAT-1 SAR HRS Mode Data Quality Evaluation," IGARSS IEEE International Geoscience and Remote Sensing Symposium, 2019.
7. M. Gartley, A. Goodenough, S. Brown and R. Kauffman, "A comparison of spatial sampling techniques enabling first principles modeling of a synthetic aperture RADAR imaging platform," in SPIE Defense, Security, and Sensing, Orlando, FL, 2010.
8. A. Berk, L. Bernstein and D. C. Robertson, "MODTRAN: A moderate resolution model for LOWTRAN," Spectral Sciences Inc., Burlington, MA, 1987.
9. H. S. Pannu and A. Malhi, "Deep learning-based explainable target classification for synthetic aperture radar images," in 13th International Conference on Human System Interaction, IEEE, Tokyo, Japan, 2020.
10. E. G. John, "Convolutional Neural Networks for Feature Extraction and Automated Target Recognition in Synthetic Aperture Radar Images," Naval Postgraduate School, Monterey, CA, 2020.
11. J. Slover, "Synthetic Aperture Radar Simulation by Electro Optical to SAR Transformation Using Generative Adversarial Network," Rochester Institute of Technology, Rochester, NY, 2020.
12. T. D. Ross, W. Steven, V. J. Velten, J. C. Mossing and M. L. Bryant, "Standard SAR ATR evaluation experiments using the MSTAR public release data set," in Algorithms for Synthetic Aperture Radar Imagery V, Orlando, FL, 1998.
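Phases I and II both call for tracking precision and recall to measure the effect of data augmentation on ATR performance. A minimal, dependency-free sketch of those two metrics for a single target class (the labels below are hypothetical, not from the solicitation):

```python
def precision_recall(y_true, y_pred, positive=1):
    """Precision and recall for one class of an ATR confusion matrix.

    precision = TP / (TP + FP): of the detections claimed, how many
    were truly the target.
    recall    = TP / (TP + FN): of the true targets present, how many
    were found.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical labels: 1 = target class, 0 = clutter.
p, r = precision_recall([1, 1, 1, 0, 0, 0], [1, 1, 0, 1, 0, 0])
# p = 2/3 (two of three claimed detections are real targets)
# r = 2/3 (two of three true targets were found)
```

Tracking both metrics before and after augmentation, per target class, is what would reveal whether the synthetic data actually improves the ATR algorithm rather than merely inflating its training set.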

Overview

Response Deadline
June 17, 2021 (past due)
Posted
April 21, 2021
Open
May 19, 2021
Set Aside
Small Business (SBA)
Place of Performance
Not Provided

Program
STTR Phase I
Structure
Contract
Phase Detail
Phase I: Establish the technical merit, feasibility, and commercial potential of the proposed R/R&D efforts and determine the quality of performance of the small business awardee organization.
Duration
1 Year
Size Limit
500 Employees
Eligibility Note
Requires partnership between small businesses and nonprofit research institution
On 4/21/21 Defense Threat Reduction Agency issued SBIR / STTR Topic DTRA21B-001 for Synthetic Aperture Radar (SAR) Image Generation Data Augmentation (SIGDA) due 6/17/21.

Documents

Posted documents for SBIR / STTR Topic DTRA21B-001


Contract Awards

Prime contracts awarded through SBIR / STTR Topic DTRA21B-001

Incumbent or Similar Awards

Potential Bidders and Partners

Awardees that have won contracts similar to SBIR / STTR Topic DTRA21B-001

Similar Active Opportunities

Open contract opportunities similar to SBIR / STTR Topic DTRA21B-001