We present a vision-based control strategy for tracking and following objects using an Unmanned Aerial Vehicle. We have developed an image-based visual servoing (IBVS) method that uses only a forward-looking camera for tracking and following objects from a multi-rotor UAV, without any dependence on GPS systems. Our proposed method tracks a user-specified object continuously, maintaining a fixed distance from the object while simultaneously keeping it in the center of the image plane. The algorithm is validated using a Parrot AR Drone 2.0 in outdoor conditions while tracking and following people, including under occlusions and with fast-moving targets, showing the robustness of the proposed system against perturbations and illumination changes. Our experiments show that the system is able to track a great variety of objects present in suburban areas, among others: people, windows, AC machines, cars and plants.
The code of this project has been made open source under the BSD license and is available on GitHub at: https://github.com/Vision4UAV/cvg_ardrone2_ibvs
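The core idea of the IBVS approach described above, keeping the target centered in the image while holding an approximately fixed distance, can be sketched as a simple feedback law. The following is an illustrative sketch only, not the controller implemented in the repository: the function name, feature choice (bounding-box centroid and apparent size), gains, and setpoints are all hypothetical.

```python
# Illustrative IBVS command sketch (NOT the authors' implementation).
# Assumed image features: tracked bounding-box centroid (u, v) in pixels
# and its apparent size s (fraction of image area); the size error acts
# as a proxy for the distance-keeping objective. Gains are hypothetical.

def ibvs_command(u, v, s, img_w=640, img_h=360, s_ref=0.15,
                 k_yaw=1.0, k_alt=1.0, k_fwd=2.0):
    """Map image-feature errors to normalized velocity commands."""
    e_x = (u - img_w / 2) / img_w   # horizontal centering error
    e_y = (v - img_h / 2) / img_h   # vertical centering error
    e_s = s_ref - s                 # size error ~ distance error

    yaw_rate = -k_yaw * e_x         # rotate to re-center the target
    climb    = -k_alt * e_y         # climb/descend to re-center vertically
    forward  =  k_fwd * e_s         # advance if the target looks too small

    # clamp to a normalized [-1, 1] command range, as typical for the
    # AR.Drone command interface
    clamp = lambda c: max(-1.0, min(1.0, c))
    return clamp(yaw_rate), clamp(climb), clamp(forward)
```

For example, a target to the right of the image center yields a negative yaw-rate command, and a target that appears smaller than the reference size yields a positive forward command; a centered target at the reference size yields zero commands.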
The motivation of this work is to show that Visual Object Tracking can be a reliable source of information for Unmanned Aerial Vehicles (UAVs) to perform visually guided tasks in GPS-denied, unstructured outdoor environments. Navigating populated areas is more challenging for a flying robot than for a ground robot, because the flying robot must stabilize itself at all times in addition to performing the other usual robotic operations. This provides a second objective for the presented work: to show that Visual Servoing, that is, positioning a VTOL UAV relative to an object at an approximately fixed distance, is possible for a great variety of objects. The capability of autonomously tracking and following arbitrary objects is interesting in itself, because it can be directly applied to visual inspection, among other civilian tasks.
The work shown on this website is currently under peer review at various conferences. We will add more information as we receive feedback on our submissions.
- Videos on the 3.5 section include decoupling heuristics on the controller
- Videos 3.1 through 3.4, tests performed from 28 June 2013 to 12 July 2013
- All videos are recorded in real time (the logged frames were synchronized using our logs), so poor or lost WiFi connections are visible: they occur wherever the video freezes.
- Sometimes the on-board videos incorrectly show f_yr, the vertical image feature reference. In those cases, the videos have been watermarked with the correct control error in the lower right corner.
- The videos are long because they show complete tests, so that the viewer can judge the performance of the system from the full experiments.
3.5 Tests on person following with decoupling heuristics on the controller
3.4 Tests on person following where the system was tested against target occlusions
3.3 Tests on car and person following
3.2 Tests on a suburban area selecting arbitrary objects/targets from the street
3.1 Tests on a target that matches the controller's expected tuning size and distance
The PhD students and researchers who have actively worked on this project are:
- MSc. Jesús Pestana Puerta (PhD Candidate at CVG, CAR, CSIC-UPM).
- MSc. José Luis Sanchez-Lopez (PhD Candidate at CVG, CAR, CSIC-UPM).
- Prof. Dr. Srikanth Saripalli (ASTRIL, SESE, ASU).
- Prof. Dr. Pascual Campoy (CVG, CAR, CSIC-UPM).
5. Other collaborators