Lauren H. Perry
Sr. Project Engineer, Space Applications Group
Ms. Perry’s work with The Aerospace Corporation incorporates AI/ML technologies into traditional software development programs for IC, DoD, and commercial customers. Previously, she served as the analytical lead for a DoD project established to improve joint interoperability within the Integrated Air and Missile Defense (IAMD) Family of Systems and enhance air warfare capability, and as a Reliability Engineer at Lockheed Martin Space Systems Company. She has a background in experimental design, applied statistics, and statistical engineering for the aerospace domain.


Dr. Philip C. Slingerland
Sr. Engineering Specialist, Machine Intelligence and Exploitation Department
Dr. Slingerland’s work with The Aerospace Corporation focuses on machine learning and computer vision projects for a variety of IC, DoD, and commercial customers. Previously, he spent four years as a data scientist and software developer at Metron Scientific Solutions in support of many Naval Sea Systems Command (NAVSEA) studies. Dr. Slingerland has a background in sensor modeling and characterization, with a PhD in physics focused on the performance of terahertz quantum cascade lasers (QCLs) for remote sensing applications.

Trust Throughout the Artificial Intelligence Lifecycle

AI and machine learning have become widespread throughout the defense, government, and commercial sectors. This has led to increased attention on the topic of trust and the role it plays in successfully integrating AI into high-consequence environments where tolerance for risk is low. Driven by recent successes of AI algorithms in a range of applications, users and organizations rely on AI to provide new, faster, and more adaptive capabilities. However, along with those successes have come notable pitfalls, such as bias, vulnerability to adversarial attack, and inability to perform as expected in novel environments. Many types of AI are data-driven, meaning they operate on and learn their internal models directly from data. Therefore, tracking how data were used throughout model development (e.g., for training, validation, and testing) is crucial not only to ensure a high-performing model, but also to understand whether the AI should be trusted. MLOps, an offshoot of DevSecOps, is a set of best practices meant to standardize and streamline the end-to-end lifecycle of machine learning. In addition to supporting the software development and hardware requirements of AI-based systems, MLOps provides a scaffold by which the attributes of trust can be formally and methodically evaluated. Additionally, MLOps encourages reasoning about trust early and often in the development cycle. To this end, we present a framework that encourages the development of AI-based applications that can be trusted to operate as intended and function safely both with and without human interaction. This framework offers guidance for each phase of the AI lifecycle, utilizing MLOps, through a detailed discussion of pitfalls that arise when trust is not considered, metrics for measuring attributes of trust, and mitigation strategies for when risk tolerance is low.
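To make the data-tracking point above concrete, the following minimal sketch (ours, not from the presentation) records which files fed the training, validation, and test splits, along with content hashes and a timestamp, producing a small lineage record that a later trust review could audit. The function name record_lineage, the lineage.json output, and the example file paths are illustrative assumptions and not part of any specific MLOps tool.

```python
"""Minimal sketch of data-lineage tracking for trust audits (illustrative only)."""
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def sha256_of_file(path: Path) -> str:
    """Hash a dataset file so its exact contents can be verified later."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def record_lineage(splits: dict, out_path: Path = Path("lineage.json")) -> dict:
    """Record which files were used for training, validation, and testing."""
    record = {
        "created_utc": datetime.now(timezone.utc).isoformat(),
        "splits": {
            name: {"path": str(path), "sha256": sha256_of_file(path)}
            for name, path in splits.items()
        },
    }
    out_path.write_text(json.dumps(record, indent=2))
    return record


if __name__ == "__main__":
    # Hypothetical file paths used purely for illustration.
    record_lineage({
        "train": Path("data/train.csv"),
        "validation": Path("data/val.csv"),
        "test": Path("data/test.csv"),
    })
```

In a fuller MLOps pipeline, a record like this would typically be extended with the model version, evaluation metrics, and training environment, so that each attribute of trust can be traced back to concrete evidence.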

Slides: https://dataworks.testscience.org/wp-content/uploads/sites/18/formidable/23/T1_Perry_Slingerland_-Trust-throughout-the-AI-Lifecycle_DATAWorks.pdf
Recording: https://www.youtube.com/watch?v=Dfyov3h6i50&list=PLeZrxAVa0tJnTWZ_CZD-dMXoRWiLFNdhG&index=16