Trusted Autonomous Systems (ITAR-Restricted Course)

Autonomous systems – enabled by advances in sensor and control technologies, artificial intelligence, data science, and machine learning – promise to deliver new and exciting applications across a broad range of industries. For these applications to succeed, however, a fundamental trust in their operation must be established. People, by and large, do not trust a new entity or system in their environment without some evidence of trustworthiness. To trust an autonomous system, we need to know which factors affect its behavior, how those factors can be assessed and applied effectively for a given mission, and what risks we assume in trusting it.

This course aims to provide a foundation for evaluating trust in autonomous systems. Elements of autonomous systems are defined, and in that context the perception of trust is explored. A framework for evaluating trust highlights three perspectives – data, artificial intelligence, and cybersecurity – and includes a dynamic model and measures of trust. The state of the art in research, methods, and technologies for achieving trusted autonomous systems is reviewed, along with current applications. The course concludes by identifying important open issues and outlining a roadmap for Trusted Autonomous Systems.

Learning Objectives

1. The need for autonomous systems and the environments in which they may operate.
2. Elements of autonomous systems and challenges for trust – sensor data interpretation, rapid stimulus-response, learning, and high-level cognitive models of planning and decision making.
3. Defining the perception of trust for autonomous systems, including hypotheses, evidence for trust, and levels of trust.
4. The need for a framework for evaluating trust in autonomous systems – data, algorithms, and cyber considerations.
5. The state of the art and examples in evaluating trust in autonomous systems – methods for forming and testing hypotheses.
[See below for full course outline]

Who Should Attend
This course is intended for decision makers, program managers, chief engineers, systems architects and engineers, analysts, and AI scientists and practitioners from defense-related businesses who are interested in the application and ramifications of trusted autonomous systems.

Contact
Please contact Jason Cole if you have any questions about courses and workshops at AIAA forums.

Outline

  • Autonomous systems – needs, environments, and challenges
  • Elements of autonomous systems
    • Sensing and interpretation
    • Rapid response to stimuli
    • Cognitive architectures for planning and decision making
    • Monitoring and feedback
    • Learning and evaluating performance
  • Challenges for trust
    • Human perception of trust
    • Evidence for trust and levels of trust
    • Examples of trust challenges
  • Framework for evaluating trust in autonomous systems
    • Data perspective
    • Artificial intelligence perspective
    • Cyber perspective
  • Data and model perspective
    • Data provenance
    • Securing AI models
    • Data poisoning
  • Artificial intelligence perspective
    • Adversarial algorithms
    • Decision boundary analysis
  • Cybersecurity perspective
    • Computer security and trust
    • Analyst vs. algorithm
  • The future of trusted autonomy
    • Open issues for trusted autonomy
    • Urgent areas for near-term application

Instructors

Andrew Brethorst is the Associate Department Director for the Data Science and AI Department at The Aerospace Corporation. Mr. Brethorst earned his undergraduate degree in cybernetics from UCLA and his master’s degree in computer science, with a concentration in machine learning, from UCI. Much of his work involves applying machine learning techniques to image exploitation, telemetry anomaly detection, and intelligent autonomous agents using reinforcement learning, as well as to collaborative projects within the research labs.