Forum 360: Safety Certification for Machine Learning | 6 August 2021, 1130-1245

Machine learning will be necessary for autonomy. Today we rely on a human pilot to adapt to new situations, but that option will not exist for an uncrewed vehicle. Because machine learning cannot be proven 100% correct, it is difficult to certify. Two workarounds have been suggested: wrapping machine learning in safety watchdog systems that monitor and guard its behavior, and safe-learning techniques. Hear experts discuss their research into these approaches and the work of maturing them through the processes required for approval and certification.
  • Darren Cofer
    Fellow, Collins Aerospace
  • Eric N. Johnson
    Professor of Aerospace Engineering, Pennsylvania State University (PSU), and Director, PSU UAS Research Laboratory (PURL)
  • Kevin Matthies
    Senior Vice President and General Manager, Boeing Program, Spirit AeroSystems, Inc.
  • Natasha Neogi
    Subproject Manager, NASA System-Wide Safety Project, and Assurance of Responsible Automation Technical Lead, Advanced Air Mobility Project, NASA Langley Research Center
  • Anthony Smith
    Associate Technical Fellow, Flight Controls & Autonomy, Sikorsky, a Lockheed Martin Company