DATAWorks 2023 Agenda

April 25th Agenda

Audience Legend

  •  1 

    Everyone.

    These talks should be accessible to everyone, regardless of technical background.

  •  2 

    Practitioners.

    These talks may include case studies with some discussion of methods and coding, but remain largely accessible to a non-technical audience.

  •  3 

    Technical Experts.

    These talks will likely delve into technical details of methods and analytical computations and are primarily aimed at practitioners advancing the state of the art.

7:30 AM – 8:45 AM

Registration / Check-in

9:00 AM – 12:00 PM: Parallel Sessions

Room A

Virtual Session
Short Course 1, Part 1
  •  2 

    Introduction to Machine Learning

    Stephen Adams (Virginia Tech National Security Institute)

Room B

Virtual Session
Short Course 2, Part 1
  •  2 

    Applied Bayesian Methods for Test Planning and Evaluation

    Victoria Sieck (STAT COE/AFIT)

    Cory Natoli (Huntington Ingalls Industries/STAT COE)

    Corey Thrush (Huntington Ingalls Industries/STAT COE)

Room C

Short Course 3, Part 1
  •  1 

    Present Your Science

    Melissa Marshall (Present Your Science)

Room E

Short Course 4, Part 1
  •  2 

    Plotting and Programming in Python

    Software Carpentry Instructor (The Carpentries)

12:00 PM – 1:00 PM

Lunch

1:00 PM – 4:00 PM: Parallel Sessions

Room A

Virtual Session
Short Course 1, Part 2
  •  2 

    Introduction to Machine Learning

    Stephen Adams (Virginia Tech National Security Institute)

Room B

Virtual Session
Short Course 2, Part 2
  •  2 

    Applied Bayesian Methods for Test Planning and Evaluation

    Victoria Sieck (STAT COE/AFIT)

    Cory Natoli (Huntington Ingalls Industries/STAT COE)

    Corey Thrush (Huntington Ingalls Industries/STAT COE)

Room C

Short Course 3, Part 2
  •  1 

    Present Your Science

    Melissa Marshall (Present Your Science)

Room E

Short Course 4, Part 2
  •  2 

    Plotting and Programming in Python

    Software Carpentry Instructor (The Carpentries)

April 26th Agenda

Audience Legend

  •  1 

    Everyone.

    These talks should be accessible to everyone, regardless of technical background.

  •  2 

    Practitioners.

    These talks may include case studies with some discussion of methods and coding, but remain largely accessible to a non-technical audience.

  •  3 

    Technical Experts.

    These talks will likely delve into technical details of methods and analytical computations and are primarily aimed at practitioners advancing the state of the art.

7:30 AM – 8:30 AM

Registration / Check-in

8:30 AM – 8:40 AM

Room A+B

Opening Remarks

8:40 AM – 9:20 AM

Room A+B

Keynote 1

Shawn N. Bratton

(United States Space Force)


9:20 AM – 10:00 AM

Room A+B

Keynote 2

Peter Coen

(NASA)


10:00 AM – 10:20 AM

Break

10:20 AM – 12:20 PM: Parallel Sessions

Room A

Virtual Session
Session 1A: Cybersecurity in Test & Evaluation
Session Chair: Mark Herrera, IDA
  •  1 

    Planning for Public Sector Test and Evaluation in the Commercial Cloud

    Brian Conway (IDA)

  •  2 

    Cyber Testing Embedded Systems with Digital Twins

    Michael Thompson (Naval Postgraduate School)

  •  2 

    Test and Evaluation of AI Cyber Defense Systems

    Shing-hon Lau (Carnegie Mellon University — Software Engineering Institute)

  •  1 

    Test and Evaluation of Systems with Embedded Artificial Intelligence Components

    Michael R. Smith (Sandia National Laboratories)

Room B

Virtual Session
Session 1B:
Session Chair: Tom Donnelly, JMP Statistical Discovery
  •  1 

    On the Validation of Statistical Software

    Ryan Lekivetz (JMP Statistical Discovery)

  •  1 

    Validating the Prediction Profiler with Disallowed Combination: A Case Study

    Yeng Saanchi (JMP Statistical Discovery)

  •  2 

    Introducing Self-Validated Ensemble Models (SVEM) – Bringing Machine Learning to DOEs

    Chris Gotwalt (JMP Statistical Discovery)

  •  1 

    Effective Application of Self-Validated Ensemble Models in Challenging Test Scenarios

    James Wisnowski (Adsurgo)

Room C

Session 1C: Statistical and Systems Engineering Applications in Aerospace
Session Chair: TBD
  •  2 

    The Containment Assurance Risk Framework of the Mars Sample Return Program

    Giuseppe Cataldo (NASA)

  •  3 

    Large-scale cross-validated Gaussian processes for efficient multi-purpose emulators

    Jouni Susiluoto (NASA Jet Propulsion Laboratory, California Institute of Technology)

  •  2 

    Coming Soon

    Cosmin Safta (Sandia National Laboratories)

  •  1 

    Systems Engineering Applications of UQ in Space Mission Formulation

    Kelli McCoy (NASA Jet Propulsion Laboratory)

Room E

Virtual Session
Mini-Tutorial 1
  •  1 

    Introduction to Design of Experiments in R: Generating and Evaluating Designs with skpr

    Tyler Morgan-Wall (IDA)

12:20 PM – 1:30 PM

Lunch

1:30 PM – 3:00 PM: Parallel Sessions

Room A

Virtual Session
Session 2A: Statistical Engineering
Session Chair: Peter Parker, NASA
  •  1 

    An Overview of the NASA Quesst Community Test Campaign with the X-59 Aircraft

    Jonathan Rathsam (NASA Langley Research Center)

  •  1 

    Dose-Response Data Considerations for the NASA Quesst Community Test Campaign

    Aaron B. Vaughn (NASA Langley Research Center)

  •  2 

    Infusing Statistical Thinking into the NASA Quesst Community Test Campaign

    Nathan B. Cruze (NASA Langley Research Center)

Room B

Virtual Session
Session 2B: Situation Awareness, Autonomous Systems, and Digital Engineering in . . . T&E
Session Chair: TBD
  •  1 

    Towards Scientific Practices for Situation Awareness Evaluation in Operational Testing

    Miriam Armstrong (IDA)

  •  1 

    T&E Landscape for Advanced Autonomy

    Kathryn Lahman (Johns Hopkins University Applied Physics Laboratory)

  •  1 

    Digital Transformation Enabled by Enterprise Automation

    Nathan Pond (Edaptive Computing, Inc.)

Room C

Students & Fellows Speed Session 1

Please note that many of the Speed Session talks will also include a poster during the Poster Session.

Session Chair: TBD
  •  1 

    Comparing Normal and Binary D-optimal Design of Experiments by Statistical Power

    Addison Adams (IDA / Colorado State University)

  •  2 

    Uncertain Text Classification for Proliferation Detection

    Andrew Hollis (North Carolina State University)

  •  1 

    A data-driven approach of uncertainty quantification on Reynolds stress based on DNS

    Zheming Gou (University of Southern California)

  •  1 

    I-TREE: a tool for characterizing research using taxonomies

    Aayushi Verma (IDA)

  •  2 

    An Evaluation Of Periodic Developmental Reviews Using Natural Language Processing

    Dominic Rudakevych (United States Military Academy)

  •  2 

    Optimal Release Policy for Covariate Software Reliability Models

    Ebenezer Yawlui (University of Massachusetts Dartmouth)

  •  2 

    A Stochastic Petri Net Model of Continuous Integration and Continuous Delivery

    Sushovan Bhadra (University of Massachusetts)

  •  2 

    Neural Networks for Quantitative Resilience Prediction

    Karen Alves da Mata (University of Massachusetts Dartmouth)

  •  1 

    A generalized influence maximization problem

    Sumit Kumar Kar (University of North Carolina at Chapel Hill)

Room D

Contributed Session 1
Session Chair: TBD
  •  1 

    Avoiding Pitfalls in AI/ML Packages

    Justin Krometis (Virginia Tech National Security Institute)

  •  3 

    Reinforcement Learning Approaches to the T&E of AI/ML-based Systems Under Test

    Karen O’Brien (Modern Technology Solutions, Inc)

  •  1 

    Under Pressure? Using Unsupervised Machine Learning for Classification May Help

    Nelson Walker (USAF)

Room E

Virtual Session
Mini-Tutorial 2
  •  2 

    A Tour of JMP Reliability Platforms and Bayesian Methods for Reliability Data

    Peng Liu (JMP Statistical Discovery)

3:00 PM – 3:20 PM

Break

3:20 PM – 4:50 PM: Parallel Sessions

Room A

Virtual Session
Session 3A: Methods & Applications of M&S Validation
Session Chair: TBD
  •  1 

    Model verification in a digital engineering environment: an operational test perspective

    Jo Anna Capp (IDA)

  •  2 

    Recommendations for statistical analysis of modeling and simulation environment outputs

    Curtis Miller (IDA)

  •  2 

    Back to the Future: Implementing a Time Machine to Improve and Validate Model Predictions

    Olivia Gozdz, Kyle Remley, and Benjamin Ashwell (IDA)

Room B

Virtual Session
Session 3B: Software Quality
Session Chair: TBD
  •  2 

    Assessing Predictive Capability and Contribution for Binary Classification Models

    Mindy Hotchkiss (Aerojet Rocketdyne)

  •  2 

    STAR: A Cloud-based Innovative Tool for Software Quality Analysis

    Kazu Okumoto (Sakura Software Solutions (3S) LLC)

  •  2 

    Covariate Software Vulnerability Discovery Model to Support Cybersecurity T&E

    Lance Fiondella (University of Massachusetts)

Room C

Students & Fellows Speed Session 2

Please note that many of the Speed Session talks will also include a poster during the Poster Session.

Session Chair: Denise Edwards, IDA
  •  3 

    A Bayesian Approach for Nonparametric Multivariate Process Monitoring using Universal Residuals

    Daniel Timme (Florida State University)

  •  1 

    Implementing Fast Flexible Space Filling Designs In R

    Christopher Dimapasok (IDA / Johns Hopkins University)

  •  3 

    Development of a Wald-Type Statistical Test to Compare Live Test Data and M&S Predictions

    Carrington Metts (IDA)

  •  1 

    Energetic Defect Characterizations

    Naomi Edegbe (United States Military Academy)

  •  2 

    Covariate Resilience Modeling

    Priscila Silva, with additional authors Andrew Bajumpaa, Drew Borden, and Christian Taylor (University of Massachusetts Dartmouth)

  •  2 

    Application of Software Reliability and Resilience Models to Machine Learning

    Zakaria Faddi (University of Massachusetts Dartmouth)

  •  2 

    Application of Recurrent Neural Network for Software Defect Prediction

    Fatemeh Salboukh (University of Massachusetts Dartmouth)

  •  2 

    Topological Data Analysis’ involvement in Cyber Security

    Anthony Salvatore Cappetta and Elie Alhajjar (United States Military Academy at West Point)

Room D

Contributed Session 2
Session Chair: TBD
  •  3 

    Empirical Calibration for a Linearly Extrapolated Lower Tolerance Bound

    Caleb King (JMP Statistical Discovery)

  •  2 

    Analysis of Surrogate Strategies and Regularization with Application to High-Speed Flows

    Gregory Hunt (William & Mary)

  •  2 

    Case Study on Test Planning and Data Analysis for Comparing Time Series

    Phillip Koshute (Johns Hopkins University Applied Physics Laboratory)

  •  1 

    Model Validation Levels for Model Authority Quantification

    Kyle Provost (STAT COE)

Room E

Virtual Session
Mini-Tutorial 3
Session Chair: TBD
  •  1 

    Data Management for Research, Development, Test, and Evaluation

    Matthew Avery (IDA)

5:00 PM – 7:00 PM: Parallel Sessions

Café

Poster Session and Reception
  •  1 

    Comparison of Magnetic Field Line Tracing Methods

    Dean Thomas (George Mason University)

  •  1 

    Framework for Operational Test Design: An Example Application of Design Thinking

    Miriam Armstrong (IDA)

  •  2 

    The Component Damage Vector Method: A statistically rigorous method for validating AJEM us

    Tom Johnson (IDA)

  •  1 

    Fully Bayesian Data Imputation using Stan Hamiltonian Monte Carlo

    Melissa Hooke (NASA Jet Propulsion Laboratory)

  •  3 

    A Bayesian Optimal Experimental Design for High-dimensional Physics-based Models

    James Oreluk (Sandia National Laboratories)

  •  1 

    Introducing TestScience.org

    Sean Fiorito (IDA / V-Strat, LLC)

  •  1 

    Developing a Domain-Specific NLP Topic Modeling Process for Army Experimental Data

    Anders Grau (United States Military Academy)

  •  2 

    The Application of Semi-Supervised Learning in Image Classification

    Elijah Abraham Dabkowski (United States Military Academy)

  •  1 

    Best Practices for Using Bayesian Reliability Analysis in Developmental Testing

    Paul Fanto (IDA)

  •  2 

    Test and Evaluation Tool for Stealthy Communication

    Olga Chen (U.S. Naval Research Laboratory)

  •  2 

    Comparison of Bayesian and Frequentist Methods for Regression

    James P Theimer (Homeland Security Community of Best Practices)

  •  2 

    Post-hoc UQ of Deep Learning Models Applied to Remote Sensing Image Scene Classification

    Alexei Skurikhin (Los Alamos National Laboratory)

  •  2 

    Multimodal Data Fusion: Enhancing Image Classification with Text

    Jack Perreault (United States Military Academy)

  •  1 

    Predicting Success and Identifying Key Characteristics in Special Forces Selection

    Mark Bobinski (United States Military Academy)

  •  2 

    The Calculus of Mixed Meal Tolerance Test Trajectories

    Skyler Chauff (United States Military Academy at West Point)

  •  1 

    Using Multi-Linear Regression to Understand Cloud Properties’ Impact on Solar Radiance

    Grant Parker (United States Military Academy)

  •  2 

    Data Fusion: Using Data Science to Facilitate the Fusion of Multiple Streams of Data

    Madison McGovern (United States Military Academy)

  •  1 

    Assessing Risk with Cadet Candidates and USMA Admissions

    Daniel Lee (US Military Academy West Point)

  •  1 

    Overarching Tracker of DOT&E Actions

    Buck Thome (IDA)

April 27th Agenda

Audience Legend

  •  1 

    Everyone.

    These talks should be accessible to everyone, regardless of technical background.

  •  2 

    Practitioners.

    These talks may include case studies with some discussion of methods and coding, but remain largely accessible to a non-technical audience.

  •  3 

    Technical Experts.

    These talks will likely delve into technical details of methods and analytical computations and are primarily aimed at practitioners advancing the state of the art.

7:30 AM – 8:30 AM

Registration / Check-in

8:30 AM – 8:40 AM

Room A+B

Opening Remarks

8:40 AM – 9:20 AM

Room A+B

Keynote 3

Christine Fox

(Johns Hopkins University Applied Physics Laboratory)


Room A+B

Virtual Session
Featured Panel: AI Assurance
Session Chair: Chad Bieber
  • Laura Freeman (Virginia Tech National Security Institute)

  • Alec Banks (Defence Science and Technology Laboratory)

  • Josh Poore (Applied Research Laboratory for Intelligence and Security, University of Maryland)

  • John Stogoski (Software Engineering Institute, Carnegie Mellon University)

10:30 AM – 10:50 AM

Break

10:50 AM – 12:20 PM: Parallel Sessions

Room A

Virtual Session
Session 4A: Transforming T&E to Assess Modern DoD Systems
Session Chair: TBD
  •  1 

    DOT&E Strategic Initiatives, Policy, and Emerging Technologies (SIPET) Mission Brief

    Jeremy Werner (DOT&E)

  •  1 

    T&E as a Continuum

    Orlando Flores (OUSD(R&E))

  •  1 

    NASEM Range Capabilities Study and T&E of Multi-Domain Operations

    Hans Miller (MITRE)

Room B

Virtual Session
Session 4B: Artificial Intelligence Methods & Current Initiatives
Session Chair: Brian Vickers, IDA
  •  1 

    Gaps in DoD National Artificial Intelligence Test and Evaluation Infrastructure Capabilities

    Rachel Haga (IDA)

  •  2 

    Assurance of Responsible AI/ML in the DOD Personnel Space

    John W. Dennis (IDA)

  •  2 

    CDAO Joint AI Test Infrastructure Capability

    David Jin (MITRE)

Room C

Session 4C: Applications of Machine Learning and Uncertainty Quantification
Session Chair: TBD
  •  1 

    Advanced Automated Machine Learning System for Cybersecurity

    Himanshu Dayaram Upadhyay (Florida International University)

  •  2 

    Uncertainty Aware Machine Learning for Accelerators

    Malachi (Thomas Jefferson National Accelerator Facility)

  •  2 

    Well-Calibrated Uncertainty Quantification for Language Models in the Nuclear Domain

    Karl Pazdernik (Pacific Northwest National Laboratory)

Room D

Spotlight on Data Literacy
Session Chair: TBD
  •  1 

    Data Literacy Within the Department of Defense

    Nicholas Clark (United States Military Academy)

Room E

Virtual Session
Mini-Tutorial 4
Session Chair: Gina Sigler, STAT COE
  •  1 

    An Overview of Methods, Tools, and Test Capabilities for T&E of Autonomous Systems

    Leonard Truett and Charlie Middleton (STAT COE)

12:20 PM – 1:30 PM

Lunch

1:30 PM – 3:00 PM: Parallel Sessions

Room A

Virtual Session
Session 5A: Methods and Tools at National Labs
Session Chair: Karl Pazdernik, Pacific Northwest National Laboratory
  •  2 

    Test and Evaluation Methods for Authorship Attribution and Privacy Preservation

    Emily Saldanha (Pacific Northwest National Laboratory)

  •  2 

    Tools for Assessing Machine Learning Models’ Performance in Real-World Settings

    Carianne Martinez (Sandia National Laboratories)

  • Coming Soon

    Pradeep Ramuhalli (Oak Ridge National Laboratory)

Room B

Virtual Session
Session 5B: Applications of Bayesian Statistics
Session Chair: Victoria Sieck, STAT COE
  •  3 

    A Bayesian Decision Theory Framework for Test & Evaluation

    James Ferry (Metron, Inc.)

  •  2 

    Saving hardware, labor, and time using Bayesian adaptive design of experiments

    Daniel Ries (Sandia National Laboratories)

  •  2 

    Uncertainty Quantification of High Heat Microbial Reduction for NASA Planetary Protection

    Michael Anthony DiNicola (Jet Propulsion Laboratory, California Institute of Technology)

Room C

Session 5C: Methods and Tools for T&E
Session Chair: Sam McGregor, AFOTEC
  •  2 

    Confidence Intervals for Derringer and Suich Desirability Function Optimal Points

    Peter A. Calhoun (HQ AFOTEC)

  •  1 

    Skyborg Data Pipeline

    Alexander Malburg (AFOTEC/EX)

  •  2 

    Circular Error Probable and an Example with Multilevel Effects

    Jacob Warren (Marine Corps Operational Test and Evaluation Activity)

Room E

Virtual Session
Mini-Tutorial 5
Session Chair: John Haman, IDA
  •  1 

    The Automaton General-Purpose Data Intelligence Platform

    Jeremy Werner (DOT&E)

3:00 PM – 3:20 PM

Break

3:20 PM – 4:20 PM: Parallel Sessions

Room A

Virtual Session
Session 6A: Tools for Decision-Making and Data Management
Session Chair: Tyler Lesthaeghe, University of Dayton Research Institute
  •  3 

    User-Friendly Decision Tools

    Clifford Bridges (IDA)

  •  1 

    Seamlessly Integrated Materials Labs at AFRL

    Lauren Ferguson (Air Force Research Laboratory)

Room B

Virtual Session
Session 6B: Methods for DoD System Supply Chain and Performance Estimation
Session Chair: TBD
  •  3 

    Applications of Network Methods for Supply Chain Review

    Zed Fashena (IDA)

  •  1 

    Predicting Aircraft Load Capacity Using Regional Climate Data

    Abraham Holland (IDA)

Room C

Session 6C: Statistical Methods for Ranking and Functional Data Types
Session Chair: Elise Roberts, JHU/APL
  •  2 

    An Introduction to Ranking Data and a Case Study of a National Survey of First Responders

    Adam Pintar (National Institute of Standards and Technology)

  •  3 

    Estimating Sparsely and Irregularly Observed Multivariate Functional Data

    Maximillian Chen (Johns Hopkins University Applied Physics Laboratory)

Room E

Virtual Session
Mini-Tutorial 6
Session Chair: TBD
  •  2 

    An Introduction to Uncertainty Quantification for Modeling & Simulation

    James Warner (NASA Langley Research Center)

4:20 PM – 4:40 PM

Room A+B

Awards

4:40 PM – 4:50 PM

Room A+B

Closing Remarks