Intern at the Stanford Intelligent Systems Lab
During the summer of 2015, I was a visiting student researcher at the Stanford Intelligent Systems Lab, working under the guidance of Prof. Mykel Kochenderfer in Stanford's Department of Aeronautics and Astronautics.
I worked on the following:
- Vision-based Collision Avoidance for Cooperative UAVs
- Visual SLAM and Hexcopter Autonomy for UAVs
Our paper was accepted at AVIATION 2016:
- Eric Mueller, Tanmay Shankar, and Mykel J. Kochenderfer, "Cooperative Vision based Collision Avoidance for Unmanned Aircraft," AIAA Aviation and Aeronautics Forum and Exposition (AVIATION 2016).
Below is a cool video of the follow-the-leader behavior!
Vision-based Collision Avoidance
In order to incorporate small unmanned aerial vehicles into a common airspace alongside existing larger aircraft, these UAVs need a collision avoidance system.
Consider two UAVs in a common airspace: the ownship and the intruder. The ownship must execute a maneuver to avoid colliding with the intruder. Of obvious interest are the relative positions and velocities of the two aircraft. Ideally, the maneuver deviates as little as possible from the ownship's desired trajectory while maintaining the required minimum separation from the intruder. We concatenate these quantities into a state vector.
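As a minimal sketch of how such a state vector might be assembled (the exact layout and variable names here are illustrative, not the paper's):

```python
import numpy as np

def collision_state(own_pos, own_vel, intr_pos, intr_vel, goal_pos):
    """Assemble a planar encounter state vector (illustrative layout).

    Concatenates the intruder's relative position and velocity with the
    ownship's deviation from its desired trajectory, represented here
    simply as the vector to a goal waypoint.
    """
    rel_pos = intr_pos - own_pos   # intruder position relative to ownship
    rel_vel = intr_vel - own_vel   # intruder velocity relative to ownship
    dev = goal_pos - own_pos       # deviation from the desired trajectory
    return np.concatenate([rel_pos, rel_vel, dev])
```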
Collision Avoidance Algorithm
The collision avoidance system (CAS) was formulated as a partially observable Markov decision process (POMDP). Together with Eric Mueller, a PhD student in the lab, I applied the QMDP approximation to the CAS. The reward function penalized loss of minimum separation, deviation from the UAV's original trajectory, and the magnitude of the commanded accelerations. Ownship actions were restricted to the horizontal plane, given the limited vertical maneuverability of small UAVs.
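The QMDP approximation selects actions by weighting the action values of the underlying fully observable MDP by the current belief over states. A minimal sketch (discrete state and action spaces assumed; the actual CAS operates over a richer model):

```python
import numpy as np

def qmdp_action(belief, Q):
    """QMDP action selection.

    belief: (n_states,) probability vector over discrete states
    Q:      (n_states, n_actions) action values solved for the
            underlying fully observable MDP
    Picks the action maximizing sum_s b(s) * Q(s, a), i.e. it assumes
    all state uncertainty resolves after one step.
    """
    return int(np.argmax(belief @ Q))
```

Because QMDP assumes the state becomes fully observable after the next action, it never takes information-gathering actions, but it is cheap to compute and works well when observations are reasonably informative.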
Visual Odometry and SLAM State Estimate
With the core CAS algorithm in place, a state estimate is required for the algorithm to function effectively. While GPS is ideal for this purpose, small UAVs often fly in GPS-denied environments or in areas where GPS signals are highly distorted.
We proposed the use of Visual Odometry (VO) and Simultaneous Localization and Mapping (SLAM) to estimate the position and velocity of the UAV. The novelty of our approach lies in running a SLAM system across both UAVs, facilitating co-localization of the ownship and the intruder. Each UAV can thus localize itself not only in its own map but also in the map generated by the other.
This co-localization within a single map frame reduces the problem to running the same data correspondence on two parallel sets of camera feeds against the same map. It also provides a robust way to resolve the unknown initial relative pose of the two UAVs.
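Once both vehicles are localized in the shared map frame, the relative state needed by the CAS follows from composing their poses. A 2-D sketch using homogeneous transforms (the real system is 3-D; function names are illustrative):

```python
import numpy as np

def se2(x, y, theta):
    """Homogeneous 2-D pose: rotation by theta, translation (x, y)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

def relative_pose(T_map_own, T_map_intr):
    """Intruder pose expressed in the ownship body frame, given both
    poses in the shared map frame produced by co-localization."""
    return np.linalg.inv(T_map_own) @ T_map_intr
```

Because both poses live in one common map, no separate extrinsic calibration between the two UAVs' reference frames is needed.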
Visual SLAM and Hexcopter Autonomy
In order to execute the visual collision avoidance system described above, I implemented two key modules: Visual SLAM and hexcopter autonomy.
Visual SLAM
I implemented Simultaneous Localization and Mapping for the UAVs using Real-Time Appearance-Based Mapping (RTAB-Map). SLAM lets the UAVs build a 3D map of their environment and recover a robust state estimate. The system was set up in a distributed architecture across multiple UAVs using the networking capabilities of ROS, allowing cooperative UAVs to co-localize and obtain relative state estimates in a common reference frame.
Hexcopter Autonomy
I enabled the hexcopters to fly autonomously and execute various behaviors on their own. Built on the Pixhawk PX4 flight stack, the hexcopters could follow any desired trajectory specified as position, velocity, and acceleration setpoints. Some of the behaviors I implemented:
- Follow the leader
- Square navigation
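The follow-the-leader behavior, for instance, can be reduced to generating setpoints that trail the leader. A minimal sketch, assuming access to the leader's position and velocity (the standoff logic and names here are illustrative, not the lab's actual controller):

```python
import numpy as np

def follow_setpoint(leader_pos, leader_vel, standoff=2.0):
    """Position/velocity setpoint trailing the leader.

    Places the follower a fixed standoff distance behind the leader
    along its direction of travel, and feeds the leader's velocity
    forward so the follower tracks rather than lags.
    """
    speed = np.linalg.norm(leader_vel)
    if speed > 1e-6:
        direction = leader_vel / speed
    else:
        direction = np.array([1.0, 0.0, 0.0])  # hold heading when leader hovers
    pos_sp = leader_pos - standoff * direction
    return pos_sp, leader_vel
```

Streaming such setpoints to the flight controller at a fixed rate is all the follower needs; the attitude and motor control loops run onboard.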