TECH FOCUS AREAS: Autonomy; Artificial Intelligence/Machine Learning

TECHNOLOGY AREAS: Information Systems; Battlespace

OBJECTIVE: The objective of this topic is to explore the development of transparency methods to promote common ground and calibrated trust between human and machine partners across a human-autonomy team's (HAT) lifecycle. This topic will reach companies that can complete a feasibility study and prototype validated concepts on accelerated Phase II schedules. This topic is specifically aimed at later-stage development rather than earlier-stage basic science and research. Successful proposals will not only demonstrate useful concepts for transparency in HAT contexts, but will also discuss the dynamic transparency needs across the HAT lifecycle.

DESCRIPTION: Transparency, i.e., methods for establishing shared awareness and shared intent between humans and intelligent machines, is the cornerstone for generating and maintaining appropriate trust in human-autonomy teaming interactions. Transparency methods have predominantly taken a static view of the HAT relationship and have focused on one element of complexity, oftentimes selecting the task execution phase (for a particular, isolated task) as the key opportunity for engineering transparency into an interface. However, transparency methods should incorporate the full spectrum of the HAT relationship, spanning design; testing; technology introduction; training/learning/co-learning; implementation in a task context; and full implementation in a social context (wherein trust can exist as a shared social phenomenon). This issue is exacerbated when considering machine learning technologies that ingest diverse data sets during training (with various levels of bias inherent to them), are inherently nondeterministic, and are often brittle to environmental change/novel data. Another intelligent entity could be described in similar terms with respect to the first two issues (diverse training data and nondeterminism) while also displaying incredible resilience to environmental perturbation: human beings. While humans are highly variable, one might argue that our trust relationships with others are driven by a stable set of factors; granted, the value/importance of these factors changes over time in relation to a particular referent, yet the set remains constant in driving trust of others because it offers us an opportunity to establish common ground with others and to anticipate their intent in relation to us. Just as humans do not hug a stranger or recite a detailed resume to a close friend, neither should we view HAT relationships as static and universal. The level of detail one needs in order to make an informed, trust-based evaluation depends on a number of factors, and we need both transparency methods to convey this information and a higher-level model of transparency to guide the timing and implementation of these components.

Example research could include the following (a notional sketch illustrating several of these elements appears after the list):
Interface methods to promote common ground of machine rationale/task awareness/states (e.g., confusion), predictability of machine intent, predictability of machine action, and predictability of transfer-of-authority requests.
Methods to categorize and compare learning affordances for ML systems as a means to facilitate transparency of the data, scenarios, and biases that exist in developing an ML algorithm; this essentially looks to the learning environment itself as a facet of transparency.
Methods to identify how transparency could be elicited via design (i.e., pedigree).
Testing methods to generate shared knowledge of HAT limitations and capabilities (e.g., joint HAT experiences to fulfill human trust needs) and methods to limit uncertainty and constrain the consequences of errors (e.g., assurance methods to govern HAT behaviors and minimize uncertainty).
Integrated lifecycle approaches and models to guide, dynamically, the appropriateness and effectiveness of one transparency method over another.
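The following is a minimal, notional Python sketch of how the elements above might be packaged: a machine teammate exposes its state, rationale, intent, projected outcome, and any transfer-of-authority request, alongside a data-pedigree record describing its learning affordances, and a simple policy varies the level of detail by lifecycle phase. All class names, fields, and the phase-selection policy are illustrative assumptions, not a required architecture or data standard.

    # Notional sketch only: names, fields, and phase labels are illustrative
    # assumptions drawn from the topic description, not a required design.
    from dataclasses import dataclass
    from enum import Enum, auto
    from typing import Optional


    class LifecyclePhase(Enum):
        """HAT lifecycle phases listed in the topic description."""
        DESIGN = auto()
        TESTING = auto()
        INTRODUCTION = auto()
        TRAINING_CO_LEARNING = auto()
        TASK_IMPLEMENTATION = auto()
        SOCIAL_IMPLEMENTATION = auto()


    @dataclass
    class DataPedigree:
        """Hypothetical record of an ML system's learning affordances."""
        datasets: list[str]            # data sources ingested during training
        known_biases: list[str]        # documented sampling or labeling biases
        training_scenarios: list[str]  # environments/scenarios seen in training
        novelty_limits: str            # conditions under which behavior is brittle


    @dataclass
    class TransparencyMessage:
        """Hypothetical content a machine teammate could surface to its human partner."""
        current_state: str                         # e.g., "executing", "confused"
        rationale: str                             # why the machine chose this course
        intended_action: str                       # what it will do next
        projected_outcome: str                     # anticipated effect of that action
        authority_transfer_request: Optional[str]  # set when requesting a handoff
        pedigree: Optional[DataPedigree] = None    # learning-environment context


    def detail_for_phase(msg: TransparencyMessage, phase: LifecyclePhase) -> dict:
        """Select which transparency elements to surface for a given lifecycle phase.

        Placeholder policy: early phases emphasize pedigree and rationale;
        later phases emphasize intent, projection, and authority handoffs.
        """
        if phase in (LifecyclePhase.DESIGN, LifecyclePhase.TESTING):
            return {"pedigree": msg.pedigree, "rationale": msg.rationale}
        if phase is LifecyclePhase.TRAINING_CO_LEARNING:
            return {"state": msg.current_state, "rationale": msg.rationale,
                    "intent": msg.intended_action}
        return {"state": msg.current_state, "intent": msg.intended_action,
                "projection": msg.projected_outcome,
                "handoff": msg.authority_transfer_request}

The point of the sketch is only that the same underlying transparency content can be filtered or elaborated differently at each point in the HAT lifecycle, rather than being engineered for a single task-execution snapshot; proposers are free to pursue entirely different representations.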
PHASE I: Phase I should completely document 1) the AI-driven explainability requirements the proposed solution addresses; 2) the approach to model, quantify, and analyze the representation, effectiveness, and efficiency of the explainable decision-making solution; and 3) the feasibility of developing or simulating a prototype architecture.

PHASE II: Develop and demonstrate a prototype system determined to be the most feasible solution during the Phase I feasibility study. This demonstration should focus specifically on:
1. Evaluating the proposed solution against the objectives and measurable key results as defined in the Phase I feasibility study.
2. Describing in detail how the solution can be scaled to be adopted widely (i.e., how it can be modified for scale).
3. A clear transition path for the proposed solution that takes into account input from all affected stakeholders, including but not limited to: end users, engineering, sustainment, contracting, finance, legal, and cyber security.
4. Specific details about how the solution can integrate with other current and potential future solutions.
5. How the solution can be sustainable (i.e., supportability).
6. Clearly identifying other specific DoD or governmental customers who want to use the solution.

PHASE III DUAL USE APPLICATIONS: The contractor will pursue commercialization of the various technologies developed in Phase II for transitioning expanded mission capability to a broad range of potential government and civilian users and alternate mission applications. Direct access with end users and government customers will be provided, with opportunities to receive Phase III awards for providing the government additional research and development, or direct procurement of products and services developed in coordination with the program.

PROPOSAL PREPARATION AND EVALUATION: Please follow the Air Force-specific Direct to Phase II instructions under the Department of Defense 21.2 SBIR Broad Agency Announcement when preparing proposals. Proposals under this topic will have a maximum value of $1,500,000 in SBIR funding and a maximum performance period of 18 months, including 15 months of technical performance and three months for reporting. Phase II proposals will be evaluated using a two-step process. After proposal receipt, an initial evaluation will be conducted IAW the criteria in the DoD 21.2 SBIR BAA, Sections 6.0 and 7.4. Based on the results of that evaluation, selectable companies will be provided an opportunity to participate in the Air Force Trusted AI Pitch Day, tentatively scheduled for 26-30 July 2021 (possibly virtual). Companies' pitches will be evaluated using the initial proposal evaluation criteria. Selectees will be notified after the event via email. Companies must participate in the pitch event to be considered for award.
REFERENCES:
1. Miller, C. (2021). Trust, transparency, explanation, and planning: Why we need a lifecycle perspective on human-automation interaction. In Nam and Lyons (Eds.), Trust in Human-Robot Interaction (pp. 234-254). Elsevier.
2. Chen, J., Flemisch, F., Lyons, J., & Neerincx, M. (2020). Guest Editorial: Agent and System Transparency. IEEE Transactions on Human-Machine Systems, 50(3).
3. Chen, J., Lakhmani, S., Stowers, K., Selkowitz, A., Wright, J., & Barnes, M. (2018). Situation awareness-based agent transparency and human-autonomy teaming effectiveness. Theoretical Issues in Ergonomics Science, 19(3), 259-282.