Defense and Aerospace Test and Analysis Workshop


Virtual Presentations Scheduled

Workshop Overview

The Defense and Aerospace Test and Analysis (DATA) Workshop is the result of a multi-organization collaboration among the Director of Operational Test & Evaluation (DOT&E) within the Office of the Secretary of Defense, the National Aeronautics and Space Administration (NASA), the Institute for Defense Analyses (IDA), and the Section on Statistics in Defense and National Security (SDNS) of the American Statistical Association (ASA). The workshop is designed to strengthen the community by applying rigorous statistical approaches to test design and data analysis in the fields of defense and aerospace.
The three-day workshop will showcase a combination of applied problems, unique methodological approaches, and tutorials from leading academics.
Our goal is to facilitate collaboration among all participants, including other government agencies.

Progress and Challenges in Program Evaluation

The Department of Defense (DoD) and NASA develop and acquire some of the world’s most sophisticated technological systems. In cooperation with these organizations, we face challenges in ensuring that these systems undergo testing and evaluation (T&E) that is both adequate and efficient prior to their use in the field.
Sessions with this theme will feature case study examples highlighting advancements in T&E methodologies that we have made as a community.
Topics covered may include modeling and simulation validation, uncertainty quantification, the use of experimental design, and Bayesian analysis, among others. Sessions will also discuss the current challenges and areas in the field that require further research.

Machine Learning and Artificial Intelligence

With the future of automation trending toward machine learning (ML) and artificial intelligence (AI), we aim to provide resources, set precedents, and inspire innovation in these fields.
Sessions with this theme will discuss how we can quantify confidence that ML and AI algorithms function as intended. To do this, we explore whether the algorithms are free of vulnerabilities introduced, intentionally or unintentionally, through the data or the algorithm itself.
Topics covered may include T&E metrics and methods for AI algorithms, T&E metrics and methods for cyber-physical systems that incorporate AI algorithms, threat portrayal for ML/AI algorithms, and robust ML/AI algorithm design.

Cybersecurity and Software Test & Evaluation

The number of known cyber-attacks has steadily increased over the past ten years, exposing millions of data records. With the help of leading minds in the cyber field, we work not only to discover new ways of identifying these attacks, but also to find ways of countering future attacks.
Sessions with this theme will discuss metrics and test methodologies for testing cyber-physical systems with an emphasis on characterizing cyber resiliency.
Topics covered may include identifying cybersecurity threats (IP and non-IP) across critical operational missions, developing and testing secure architectures, identifying critical system components that enable attack vectors, developing and testing countermeasures to cyber-attacks, and testing methods for cyber-physical systems that include machine learning algorithms.

Modeling the Human-System Interaction

Mission outcomes are heavily determined by how effectively operators interact with systems under operational conditions, as reflected in both their objective performance and their perceived performance. To interpret these interactions, we observe team dynamics analytically and predict how changing operating conditions will affect those dynamics and, in turn, productivity.
Sessions with this theme will discuss methods used to collect and model the quality of human-system interaction and its impact on operational performance.
Topics covered will include survey development/test design, statistical methods for surveys, scale validation, and network analysis.

Science Applications of Test and Evaluation

Scientists use a variety of tools to perform analyses within their areas of expertise. Our goal is to show how to improve the analyses they already perform and to introduce new tools that increase analysis efficiency.
Sessions with this theme will cover simulation, prediction, uncertainty quantification, and inference for physical and physical-statistical modeling of geophysical processes.
Topics covered may include Earth Science problems (e.g. ice sheet evolution models, atmospheric and ocean processes, the carbon cycle, land surface processes, natural hazards), astronomy and cosmology (e.g. exoplanet detection, galactic formation, cosmic microwave background), and planetary science (e.g. planetary atmospheres, formation).

Current Status

No In-person Event – After careful consideration of the evolving concerns around COVID-19, we have made the very difficult decision to cancel the in-person DATAWorks event previously scheduled for March 31-April 2 in Northern Virginia.

Webinar Agenda – We are building a virtual experience that will enable us to bring you some of the great presentations you were looking forward to. The latest webinar agenda is posted below.  We will continue to update our website and send out email correspondence as the schedule develops – so be sure to click Subscribe in the menu bar for the latest news!

Webinar Agenda

March 31, 2020
1:00 PM-2:30 PM

A previously recorded version of this seminar is available for viewing:

https://www.youtube.com/watch?v=XxqVPzb_sGM&feature=youtu.be.

A Practical Introduction To Gaussian Process Regression
Robert “Bobby” Gramacy, Virginia Tech

Abstract: Gaussian process regression is ubiquitous in spatial statistics, machine learning, and the surrogate modeling of computer simulation experiments. Fortunately, its prowess as an accurate predictor, along with an appropriate quantification of uncertainty, does not derive from difficult-to-understand methodology or cumbersome implementation. We will cover the basics and provide a practical tool-set ready to be put to work in diverse applications. The presentation will involve accessible slides authored in Rmarkdown, with reproducible examples spanning bespoke implementation to add-on packages.
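
To make the subject concrete, below is a minimal, self-contained sketch of Gaussian process regression in base R. It is an illustrative example, not the tutorial's own code: the squared-exponential kernel, fixed lengthscale, and nugget value are assumptions chosen for readability; in practice, add-on packages such as the instructor's tgp estimate such hyperparameters from the data.

    ## Illustrative sketch only: GP regression with a squared-exponential kernel,
    ## a fixed lengthscale, and a small nugget (all assumed values).
    set.seed(1)
    n <- 8
    X <- matrix(seq(0, 2 * pi, length.out = n), ncol = 1)     # training inputs
    y <- sin(X) + rnorm(n, sd = 0.1)                          # noisy observations

    sqexp <- function(A, B, ell = 1)                          # kernel matrix
      exp(-outer(A[, 1], B[, 1], "-")^2 / (2 * ell^2))

    XX <- matrix(seq(0, 2 * pi, length.out = 100), ncol = 1)  # prediction grid
    K  <- sqexp(X, X) + diag(1e-2, n)                         # nugget for noise
    Ki <- solve(K)
    kx <- sqexp(XX, X)

    mu <- drop(kx %*% Ki %*% y)                               # posterior mean
    s2 <- pmax(diag(sqexp(XX, XX) - kx %*% Ki %*% t(kx)), 0)  # posterior variance

    ## 90% predictive bands: the accompanying quantification of uncertainty
    band <- cbind(lower = mu - qnorm(0.95) * sqrt(s2),
                  upper = mu + qnorm(0.95) * sqrt(s2))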

Instructor Bio: Robert Gramacy is a Professor of Statistics in the College of Science at Virginia Polytechnic Institute and State University (Virginia Tech). Previously he was an Associate Professor of Econometrics and Statistics at the Booth School of Business, and a fellow of the Computation Institute at The University of Chicago. His research interests include Bayesian modeling methodology, statistical computing, Monte Carlo inference, nonparametric regression, sequential design, and optimization under uncertainty. Professor Gramacy is a computational statistician. He specializes in areas of real-data analysis where the ideal modeling apparatus is impractical, or where the current solutions are inefficient and thus skimp on fidelity. Such endeavors often require new models, new methods, and new algorithms. His goal is to be impactful in all three areas while remaining grounded in the needs of a motivating application. His aim is to release general-purpose software for consumption by the scientific community at large, not only other statisticians. Professor Gramacy is the primary author of six R packages available on CRAN, two of which (tgp and monomvn) have won awards from statistical and practitioner communities.

April 2, 2020
1:00 PM-2:30 PM

Note that the webinar start and end times are Eastern Standard Time.

  • Meeting information was sent to subscribers.
  • If you have not received this information, please contact dataworks@testscience.org no later than 12:30 EST on April 2. Please provide your name, organization, and email address.

Introduction to Uncertainty Quantification for Practitioners and Engineers
Gavin Jones, SmartUQ

Uncertainty is an inescapable reality that can be found in nearly all types of engineering analyses. It arises from sources like measurement inaccuracies, material properties, boundary and initial conditions, and modeling approximations. Uncertainty Quantification (UQ) is a systematic process that puts error bands on results by incorporating real-world variability and probabilistic behavior into engineering and systems analysis. UQ answers the question: what is likely to happen when the system is subjected to uncertain and variable inputs? Answering this question facilitates significant risk reduction, robust design, and greater confidence in engineering decisions. Modern UQ techniques use powerful statistical models to map the input-output relationships of the system, significantly reducing the number of simulations or tests required to get accurate answers.

This tutorial will present common UQ processes that operate within a probabilistic framework. These include statistical Design of Experiments, statistical emulation methods used to model the relationship between simulation inputs and responses, and statistical calibration for validating and tuning models to better represent test results. Examples from different industries will be presented to illustrate how the covered processes can be applied to engineering scenarios. This is purely an educational tutorial and will focus on the concepts, methods, and applications of probabilistic analysis and uncertainty quantification. SmartUQ software will only be used to illustrate the methods and examples presented. This is an introductory tutorial designed for practitioners and engineers with little to no formal statistical training. However, statisticians and data scientists may also benefit from seeing the material presented from a more practical, rather than purely technical, perspective.

There are no prerequisites other than an interest in UQ. Attendees will gain an introductory understanding of Probabilistic Methods and Uncertainty Quantification, basic UQ processes used to quantify uncertainties, and the value UQ can provide in maximizing insight, improving design, and reducing time and resources.
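
As a concrete illustration of the workflow described above (design of experiments, emulation, propagation of input uncertainty), here is a minimal sketch in R. It is not SmartUQ and not the tutorial's material: the stand-in simulator, the polynomial surrogate, and the assumed input distributions are placeholders chosen for brevity.

    ## Illustrative sketch only: forward uncertainty propagation through a surrogate.
    set.seed(2)
    simulator <- function(x1, x2) exp(-x1) * sin(pi * x2)    # stand-in for an expensive model

    ## 1. Design of experiments: run the model at a modest set of design points
    n <- 40
    X <- data.frame(x1 = runif(n, 0, 2), x2 = runif(n, 0, 1))
    y <- with(X, simulator(x1, x2))

    ## 2. Emulation: a cheap surrogate mapping inputs to the response
    emu <- lm(y ~ poly(x1, 3) + poly(x2, 3), data = X)

    ## 3. Propagation: sample the uncertain inputs and push them through the emulator
    m <- 10000
    Xnew <- data.frame(x1 = rnorm(m, mean = 1, sd = 0.1),    # assumed input uncertainty
                       x2 = runif(m, 0.4, 0.6))
    yhat <- predict(emu, newdata = Xnew)

    ## Error band on the output induced by input variability
    quantile(yhat, c(0.05, 0.50, 0.95))

In practice the surrogate would often be a Gaussian process emulator (the subject of the March 31 webinar), and the design points would come from a space-filling design rather than simple random sampling.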

April 10, 2020
1:00 PM-2:30 PM

Note that the webinar start and end times are Eastern Standard Time.


Meeting Information Coming Soon 


I Have the Power! Power Calculation in Complex (and Not So Complex) Modeling . . . Situations
Caleb King and Ryan Lekivetz, JMP Division, SAS Institute Inc.

Invariably, any analyst who has been in the field long enough has heard the dreaded questions: “Is X number of samples enough? How much data do I need for my experiment?” Ulterior motives aside, any investigation involving data must ultimately answer the question of “How many?” to avoid either collecting insufficient data to detect a scientifically significant effect or collecting too much data and wasting valuable resources. This can become particularly difficult when the underlying model is complex (e.g., longitudinal designs with hard-to-change factors, time-to-event responses with censoring, binary responses with non-uniform test levels, etc.). Even in the supposedly simpler case of categorical factors, where run size is often chosen using a lower-bound power calculation, a simple approach can mask more “powerful” techniques. In this tutorial, we will spend the first half exploring how to use simulation to perform power calculations in complex modeling situations drawn from relevant defense applications. Techniques will be illustrated using both R and JMP Pro. In the second half, we will investigate the case of categorical factors and illustrate how treating the unknown effects as random variables induces a distribution on statistical power, which can then be used as a new way to assess experimental designs.
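
As a simple illustration of the simulation approach described above, the sketch below estimates power for a two-group comparison in R; the effect size, standard deviation, and candidate run sizes are assumed values for illustration, not figures from the tutorial.

    ## Illustrative sketch only: estimate power by simulating data under an assumed
    ## effect, fitting the analysis, and counting how often the effect is detected.
    set.seed(3)
    power_sim <- function(n_per_group, effect = 0.5, sd = 1, nsim = 2000, alpha = 0.05) {
      reject <- replicate(nsim, {
        y0 <- rnorm(n_per_group, mean = 0,      sd = sd)   # control group
        y1 <- rnorm(n_per_group, mean = effect, sd = sd)   # treatment group
        t.test(y1, y0)$p.value < alpha                     # was the effect detected?
      })
      mean(reject)                                         # estimated power
    }

    ## "How many samples are enough?" Scan candidate run sizes.
    sapply(c(10, 20, 40, 80), power_sim)

The same loop generalizes to the complex situations listed above (censoring, hard-to-change factors, binary responses) by swapping in the appropriate data generator and model fit.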