Vision-Based Control for Autonomous Vehicles

This course presents an in-depth treatment of vision-based control and its application to autonomous vehicles. The maturation of synthetic vision is rapidly advancing the capability for fully autonomous decision making when maneuvering through environments with unknown obstacles. In particular, any cooperative operation will require some level of decision making, for individual vehicles as well as for the group, to account for environmental features. This course introduces the basics of synthetic vision and builds up to state-of-the-art developments in vision-based control. Techniques such as scene reconstruction and state estimation are formulated to provide feedback. Control approaches, both nonlinear and robust, are synthesized to utilize that vision-based feedback for decision making. The entire process, including path planning, is thus constructed from a series of subtasks. The instructors bring their extensive experience in testing vision-based controllers on both ground and air vehicles through examples and demonstrations that highlight practical issues of implementation and performance.

Key Topics:

  • Camera Characteristics
  • Scene Reconstruction
  • State Estimation
  • Robust Vision
  • Visual-Servo Control
  • Path Planning

Who Should Attend:

This course is intended for those seeking a theoretical understanding of state-of-the-art concepts in vision-based control along with the practical issues related to implementation. The material is appropriate for engineers with a basic knowledge of controls who want to investigate the fundamentals of synthetic vision and its integration into autopilot structures.

Course Information:

Type of Course: Instructor-Led Short Course
Course Level: Fundamentals/Intermediate

Course scheduling available in the following formats:

  • Course at Conference
  • Onsite Course
  • Stand-alone/Public Course

Course Length: 2 days
AIAA CEUs available: yes


Course Outline:

I. Introduction

II. Cameras
a. Focal Length
b. Radial Distortion

III. Image Processing
a. Feature Point Tracking
b. Color And Texture

IV. Synthetic Vision
a. Optic Flow
b. Structure From Motion

V. Scene Reconstruction
a. Learning From Regression
b. Multiresolution Analysis
c. Updating Surface Regions

VI. State Estimation
a. Kalman Filter
b. Optic Flow Decomposition
c. Lyapunov Analysis

VII. Robust Vision
a. Camera Uncertainty
b. Calibration Uncertainty

VIII. Visual-Servo Control
a. Homography-Based Approach
b. Line-Of-Sight Progression

IX. Path Planning
a. Multi-Rate Feedback
b. Receding Horizon

X. Experimental Validation


Course Materials:

Since course notes will not be distributed onsite, AIAA and your course instructor highly recommend that you bring a laptop with the course notes already downloaded.

Once you have registered for the course, the course notes become available approximately two weeks before the course event and remain accessible to you in perpetuity.


Course Instructors:

Rick Lind is an assistant professor in the Department of Mechanical and Aerospace Engineering at the University of Florida. He has flight tested many vision-based autopilots for UAVs and MAVs. He is the principal investigator for a MURI on vision-based control for aircraft operations in urban environments.

Andy Kurdila is the W. Martin Johnson Professor of Mechanical Engineering at Virginia Tech. He is the principal investigator for the multi-million-dollar JOUSTER program for the development of autonomous ground and air vehicles. Additionally, he has over 150 publications and 4 books on control theory and dynamics.

Nick Gans is an assistant professor of Electrical Engineering at the University of Texas at Dallas. He has researched control of UAVs with the Air Force Research Laboratory. His research interests include nonlinear and adaptive control, with a focus on vision-based control and estimation, robotics, and autonomous vehicles.