Trusted Autonomous Systems (1-Day)
Kossiakoff Center, Laurel, Maryland
- Early Member & Government Rate: $299
- Standard Member & Government Rate: $399
- Conference Rate: $499
Autonomous systems – enabled by advances in sensor and control technologies, artificial intelligence, data science, and machine learning – promise to deliver new and exciting applications across a broad range of industries. However, a fundamental trust in their application and execution must be established for them to succeed. People, by and large, do not trust a new entity or system in their environment without some evidence of trustworthiness. To trust an autonomous system, we need to know which factors affect system behaviors, how those factors can be assessed and effectively applied for a given mission, and the risks assumed in trusting the system.
This course aims to provide a foundation for evaluating trust in autonomous systems. Elements of autonomous systems are defined, and in that context, the perception of trust is explored. A framework for evaluating trust highlights three perspectives – data, artificial intelligence, and cybersecurity – and includes a dynamic model and measures for trust. The state of the art in research, methods, and technologies for achieving trusted autonomous systems is reviewed, along with current applications. The course concludes by identifying important open issues and outlining a roadmap for trusted autonomous systems.
Classification Level (ITAR-Restricted)
- Need for autonomous systems and the environments in which they may operate.
- Elements of autonomous systems and challenges for trust – sensor data interpretation, rapid stimulus-response, learning, and high-level cognitive models of planning and decision-making.
- Defining the perception of trust for autonomous systems, including hypotheses, evidence for trust, and levels of trust.
- Discussion of the need for a framework for evaluating trust in autonomous systems – data, algorithms, and cyber considerations.
- State of the art and examples in evaluating trust in autonomous systems – methods for forming and testing hypotheses.
[See below for full course outline]
Who Should Attend
This course is intended for decision makers, program managers, chief engineers, systems architects and engineers, analysts, AI scientists, and practitioners from defense-related businesses interested in the application and ramifications of trusted autonomous systems.
Please note that this course is held at the ITAR level; attendees must be U.S. citizens and active government contractors. If you are not registered for the AIAA DEFENSE Forum and are interested in attending the course, please visit the ITAR Information page for the documentation needed to attend.
Please contact Jason Cole if you have any questions about courses and workshops at AIAA forums.
Course Outline
- Autonomous systems – needs, environments, and challenges
- Elements of autonomous systems
  - Sensing and interpretation
  - Rapid response to stimuli
  - Cognitive architectures for planning and decision making
  - Monitoring and feedback
  - Learning and evaluating performance
- Challenges for trust
  - Human perception of trust
  - Evidence for trust and levels of trust
  - Examples of trust challenges
- Framework for evaluating trust in autonomous systems
  - Data perspective
  - Artificial intelligence perspective
  - Cyber perspective
- Data and model perspective
  - Data provenance
  - Securing AI models
  - Data poisoning
- Artificial intelligence perspective
  - Adversarial algorithms
  - Decision boundary analysis
- Cybersecurity perspective
  - Computer security and trust
  - Analyst vs. algorithm
- The future of trusted autonomy
  - Open issues for trusted autonomy
  - Urgent areas for near-term application