This blog is written and maintained by Morten Daugaard (firstname.lastname@example.org) and Thomas
Vrang Thyregod (email@example.com) as part of our master's thesis project. The posts
will serve both as a guide for people interested in developing software for
the AR.Drone and as documentation of our work with the drone.
The main objective of our project is to examine and implement
different algorithms (mainly within the computer vision and indoor
navigation fields) that will give our AR.Drone a bit more
autonomy with regard to maneuvering in a slightly dynamic environment.
The AR.Drone, which we have been granted access to, is often regarded
as nothing more than an expensive toy, but with an onboard ARM
processor and a range of sensors we find that the system is well
suited to our needs as an aerial robotic development platform.
Out of the box the AR.Drone is equipped with two cameras (one
front-facing and one down-facing), a three-axis accelerometer and an
ultrasonic sensor for measuring altitude. After adding USB support, we
added a second wifi adapter to work as a wifi sensor. We are at this
point able to add just about any sensor we wish, as long as it has a
USB interface, is Linux compatible (driver-wise) and doesn't weigh too
much. Much work has gone into configuring the drone's Linux kernel,
with patching and adding kernel modules and finding correct wifi
drivers being the most time-consuming tasks.
We have written a couple of tools to aid us in our analysis work,
both on-drone and off-drone. Our main off-drone tool is a Python
application which greatly extends the functionality found in the
official AR.Drone iDevice app. Besides showing the drone's video feed,
the tool can display the nav-data and wifi data streams, record
specific samples and even compare these samples against a target
sample. We are able to record flights (video, nav and wifi packets) to
disk and subsequently replay the entire flight.
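A recorded flight is essentially a stream of timestamped packets, which is what makes full replay possible. The sketch below illustrates the idea; the class name, the pickle-based file format and the packet layout are our illustration here, not the actual tool's API:

```python
import pickle
import time


class FlightRecorder:
    """Records timestamped packets (video, nav, wifi) and replays them
    while reproducing the original inter-packet delays."""

    def __init__(self):
        self.packets = []

    def record(self, channel, payload):
        # Store the arrival time so replay can reproduce the timing.
        self.packets.append((time.time(), channel, payload))

    def save(self, path):
        with open(path, "wb") as f:
            pickle.dump(self.packets, f)

    @staticmethod
    def replay(path, handler, speedup=1.0):
        """Feed each recorded packet to handler(channel, payload),
        sleeping the original delay between packets (divided by speedup)."""
        with open(path, "rb") as f:
            packets = pickle.load(f)
        if not packets:
            return
        previous = packets[0][0]
        for timestamp, channel, payload in packets:
            time.sleep(max(0.0, (timestamp - previous) / speedup))
            previous = timestamp
            handler(channel, payload)
```

With a handler that pushes packets into the normal processing pipeline, a replayed flight is indistinguishable from a live one.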
A key component of the tool is our integrated test device. After
realising that we would not always be carrying the drone with us, we
decided to build a simple server that on request can send out video,
nav and wifi data in a way that is totally transparent to the rest of
the program, thus enabling us to test our code on realistic data
without the drone at hand.
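The transparency comes from keeping live and recorded data behind a single interface, so the rest of the program never knows which source it is talking to. A minimal sketch of that idea (class and method names are illustrative, not our actual code):

```python
class DataSource:
    """Common interface: consumers only ever call next_packet(), so they
    cannot tell a live drone from the test server."""

    def next_packet(self):
        raise NotImplementedError


class ReplaySource(DataSource):
    """Serves pre-recorded video/nav/wifi packets, e.g. from a test server
    or a saved flight; returns None when the recording is exhausted."""

    def __init__(self, packets):
        self._packets = iter(packets)

    def next_packet(self):
        return next(self._packets, None)


class LiveSource(DataSource):
    """Would read datagrams from the real drone; shown here as a thin
    wrapper around any object with a recv() method."""

    def __init__(self, socket):
        self._socket = socket

    def next_packet(self):
        return self._socket.recv(65535)
```

Swapping `LiveSource` for `ReplaySource` at construction time is then the only change needed to run the whole pipeline against recorded data.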
For manual control our program lets us use a wireless Xbox 360
controller. Our controller architecture also makes it very easy to add
automatic control of the drone, as this is nothing more than
subclassing our base controller class and adding control code.
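In that spirit, an autonomous behaviour is just another subclass. The sketch below shows the shape of the idea with a toy altitude-hold controller; the class names, the nav-data fields and the command dictionary are our assumptions for illustration, not our real base class:

```python
class BaseController:
    """Base class: turns some input into a drone command each tick."""

    def __init__(self, drone):
        self.drone = drone

    def update(self, navdata):
        """Called once per control loop; subclasses return a command."""
        raise NotImplementedError


class HoverAtAltitudeController(BaseController):
    """Toy autonomous controller: climbs or descends toward a target
    altitude using a proportional term on the nav-data altitude."""

    def __init__(self, drone, target_altitude_mm=1000, gain=0.001):
        super().__init__(drone)
        self.target = target_altitude_mm
        self.gain = gain

    def update(self, navdata):
        error = self.target - navdata["altitude_mm"]
        # Clamp the vertical speed command to [-1, 1].
        vz = max(-1.0, min(1.0, self.gain * error))
        return {"vz": vz}
```

A manual controller reading Xbox gamepad axes and an autonomous one reading nav data then plug into the same control loop interchangeably.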
To decide which features to develop for our drone, we have considered
situations from environments we consider to be only slightly
dynamic. One of our main cases has been a museum, prompting us to ask
what features would be necessary for navigating such an
environment. The features we identified for this environment basically
fell into two categories: safety and navigation. How do we avoid
crashing into people? Can we detect silhouettes? Can we detect them
fast enough to avoid a collision? How do we navigate in this
environment? Can we move along a corridor? Can we recognize and go
through a door? What if the door is closed? We doubt we will be able
to build a fully autonomous system which does all of these things at
once, but we hope to be able to demonstrate the features individually
and also to determine the level of information we need to feed the
drone. The blog will reflect on our progress towards an implementation
of these features.
All code for this project is released in our repository under a
permissive MIT license.