OUSD (R&E) MODERNIZATION PRIORITY: Autonomy; Artificial Intelligence/Machine Learning

TECHNOLOGY AREA(S): Sensors

DOES THIS NEED ITAR? NO

OBJECTIVE: Develop and demonstrate a solution for opportunistic position updates from an existing onboard camera turret mounted on group 2 or group 3 unmanned aerial systems (UAS) to enable operation in Global Positioning System (GPS) and Global Navigation Satellite System (GNSS) denied operating environments. Ideally, the solution should be accurate to within 50 m and should not require installation of additional sensors.

DESCRIPTION: The accuracy, availability, and integrity of Positioning, Navigation, and Timing (PNT) information from GPS and other GNSS constellations are under constant threat from denial and deception techniques. Concern over reliance on GPS/GNSS has spurred a surge in alternative Positioning, Navigation, and Timing (Alt-PNT) research, but many of these tools and techniques operate effectively only in certain missions or environments. Providing resilient PNT for small UAS (sUAS) is particularly difficult due to significant size, weight, and power (SWaP) constraints: any additional PNT payload forces the platform to trade off primary mission payload, reducing both capability and loiter time. This effort will therefore leverage existing sensors for whatever positioning information they can provide when GPS/GNSS-based navigation is denied. The vision sensors already carried by sUAS share the operating requirements of vision-based navigation systems, namely minimal cloud cover and visible, feature-rich terrain, so these conditions can be assumed for the majority of the mission. It is also assumed that the camera turret settings will not always be ideal for image-based navigation, so the navigation algorithm should notify the operator when the navigation solution is degraded, meaning the camera settings and aim need to be adjusted to provide an image useful for solving for a position.

PHASE I: This topic is intended for direct award into a Phase II, as the underlying technology has been proven out. The topic combines existing gimbaled cameras with image navigation to produce an image-based position estimate for navigation in GPS-denied or degraded environments. Gimbaled cameras have been proven on a variety of active-inventory UAS used in today's conflicts, including the MQ-1, the MQ-9, and a variety of smaller UAS used by SOCOM; this topic is geared toward small UAS that are both in development and operationally deployed. Image navigation has been proven in civilian academia as well as within the DoD and defense contractors, and there have been developments in using gimbaled sensors to produce position updates on the LITENING Advanced Targeting Pod and the F-35 electro-optical distributed aperture system. Both of those systems, however, have processing power and technological capabilities beyond what is found on smaller UAS. This topic will use existing gimbaled optical sensors on small UAS, combined with image navigation techniques, to produce navigation updates while adding no or very minimal hardware.
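To make the image-navigation step concrete, the following is a minimal sketch of one common technique: template-matching a roughly nadir, north-up, scale-matched camera frame against a georeferenced reference orthoimage to recover the ground coordinates in view. It uses OpenCV; the function and parameter names, and the simplifying assumptions about frame orientation and scale, are illustrative only and not a prescribed implementation. A fielded matcher would also have to handle rotation, perspective, and seasonal appearance differences.

    import cv2

    def match_to_reference(frame_gray, ref_gray, ref_origin_en, ref_gsd_m):
        """Template-match a roughly nadir, north-up, scale-matched camera
        frame against a georeferenced reference orthoimage and return the
        (east, north) ground coordinates at the center of the best match.

        ref_origin_en -- (east, north) of the reference image's top-left
                         pixel, in meters within a local map frame
        ref_gsd_m     -- reference ground sample distance, meters/pixel
        """
        result = cv2.matchTemplate(ref_gray, frame_gray, cv2.TM_CCOEFF_NORMED)
        _, score, _, top_left = cv2.minMaxLoc(result)   # best correlation peak
        h, w = frame_gray.shape
        col = top_left[0] + w / 2.0                     # match center, pixel x
        row = top_left[1] + h / 2.0                     # match center, pixel y
        east = ref_origin_en[0] + col * ref_gsd_m
        north = ref_origin_en[1] - row * ref_gsd_m      # image rows grow southward
        return (east, north), score                     # low score: no usable fix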
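Once a matched feature with known coordinates is in hand, the camera pointing geometry can be inverted to place the aircraft. The sketch below assumes the gimbal azimuth/elevation and the aircraft attitude estimate have already been composed into a line-of-sight unit vector in the local east/north/up frame, and that barometric altitude plus terrain elevation (e.g., from DTED) bound the slant range; the names and the degraded-geometry threshold are assumptions for illustration.

    import numpy as np

    def position_fix_from_landmark(landmark_enu, los_enu, alt_ac_m, alt_landmark_m):
        """Invert the camera line of sight to estimate aircraft position.

        landmark_enu   -- known (east, north, up) of the matched feature, m
        los_enu        -- unit line-of-sight vector, aircraft to feature, in
                          the local east/north/up frame (composed from gimbal
                          az/el and the aircraft attitude estimate)
        alt_ac_m       -- aircraft altitude, e.g. barometric, m
        alt_landmark_m -- terrain elevation at the feature, e.g. DTED, m
        """
        los = np.asarray(los_enu, dtype=float)
        if los[2] > -0.05:                  # near-level look angle: range is
            return None                     # ill-conditioned, flag degraded
        rng = (alt_ac_m - alt_landmark_m) / -los[2]     # slant range along LOS
        return np.asarray(landmark_enu, dtype=float) - rng * los

A near-level line of sight makes the range solution ill-conditioned; that is one of the conditions under which the algorithm should report a degraded solution and prompt the operator to re-point, as described above.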
PHASE II: Using only the existing gimbaled optical sensor and onboard mission computer on a small UAS, a position update accurate to within 50 meters should be achieved and provided to the UAS navigation system. Current commercial off-the-shelf (COTS) systems require the addition of external cameras and processing computers that will not fit on existing operational small UAS. This topic will instead leverage existing hardware on small UAS currently in development or operationally deployed. The 645th Aeronautical Systems Group (Big Safari), which supports SOCOM, has expressed great interest in the added navigation capability without having to modify the hardware on their small UAS. With little to no modification required to current group 2 and 3 UAS, this topic will transition readily to the warfighter through the 645th and the various remotely piloted aircraft (RPA) system program offices (SPOs).

For the proposal, the following vignette depicts the robustness and performance required: A group 2 UAS with an Intelligence, Surveillance, and Reconnaissance (ISR) mission is launched in a GPS/GNSS-denied environment. The UAS must approach a target of interest tens of kilometers away, with an expected total mission duration of 3 or more hours. The operation may occur in daylight or darkness. It is assumed an initial position and time are either entered by hand or transferred from a host platform. En route, the camera turret operator will be able to point the camera turret to look for interesting features based on feedback from the navigation algorithm. While performing the primary ISR mission, the sensor operator is notified when position accuracy is degraded and the camera turret should be used to obtain an additional position estimate. The aircraft will maintain a reliable command and control (C2) link to the operator throughout the mission.

The following features must be considered for a proposal (a sketch of the output interface follows this list):
- The onboard camera turret will be the primary sensor used to perform a position update during a GPS/GNSS outage. Images, pointing angles, and settings metadata can be read off the camera turret.
- The navigation algorithm should work without direct control of the sensor turret. The algorithm can encourage the operator to re-point, but re-pointing is not guaranteed.
- It is understood that a position solution from the camera turret will only be available intermittently, depending on what the camera is currently seeing.
- The desire is a software-only solution. Additional payload should be zero or minimal, although it is understood that installation of a dedicated processor may be necessary. Any additional hardware must fit within a typical group 2 UAS payload bay.
- The algorithm must be compatible with operational group 2 and group 3 UAS camera turrets.
- GPS/GNSS may be unavailable at takeoff. At a minimum, a rough manual position and time estimate will be available.
- Both traditional and machine learning approaches may be considered. However, the underlying uncertainty metrics of all measurements/estimates must be fully understood and accurately represented.
- The system must provide both a position solution and the associated raw measurements. The algorithm should output its solution in the All Source Positioning and Navigation (ASPN) format. The solution should be built as a module that can be integrated into a government-owned open architecture PNT filter.
- No government furnished equipment (GFE) will be provided. Availability of required reference data must be taken into consideration.
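The ASPN message definitions and the government-owned PNT filter interface are government-controlled and are not reproduced here; the record below is only a hypothetical stand-in showing the kind of content a compliant module would need to expose per the list above: a position solution, its covariance, the underlying raw measurement, and a degraded-solution flag for operator notification. All field names and types are assumptions.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class ImageNavPositionMeasurement:
        # Hypothetical ASPN-style record; the real field names and types
        # come from the government ASPN interface definitions.
        time_of_validity_s: float             # UTC seconds; frame capture time
        latitude_deg: float                   # position solution
        longitude_deg: float
        altitude_m: float
        covariance_enu_m2: List[List[float]]  # 3x3 position covariance, m^2
        raw_pixel_row: float                  # raw measurement: matched pixel
        raw_pixel_col: float
        match_score: float                    # matcher confidence in [0, 1]
        degraded: bool                        # True -> notify operator to re-point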
PHASE III DUAL USE APPLICATIONS: If a successful Phase II solution is developed, a Phase III will quickly transition the developed technology to meet the specific needs of individual customers within the DoD, other government agencies, or the civilian world. The technology could also be expanded to manned aircraft with camera turrets to aid navigation in a contested, degraded, and operationally limited (CDO) environment. The commercial sector could use this technology as an alternative to GPS during outage periods or when traversing hostile areas. The image matching algorithms could also be used for time-based surveying to track farming, animal, and land management trends. The development from this topic could be incorporated into the USAF Vanguards via a gimbaled sensor on a Golden Horde munition or Skyborg aircraft with no hardware changes.

REFERENCES:
1. G. Conte and P. Doherty, "An integrated UAV navigation system based on aerial image matching," in IEEE Aerospace Conference Proceedings, 2008.
2. T. Machin, "Real-time implementation of vision-aided monocular navigation for small fixed-wing unmanned aerial systems," Master's thesis, Air Force Institute of Technology, 2016.
3. R. W. Beard and T. W. McLain, Small Unmanned Aircraft: Theory and Practice. Princeton University Press, 2012.
4. A. Keskin, "Fixed Wing UAV Target Geolocation Estimation From Camera Images," 2021.

KEYWORDS: Navigation; Position; UAS; UAV; sUAS; Position Update; CDO; Alternative Navigation; Gimbaled Camera; Image Matching