MAVSDK Webinar Recap

Nov 27, 2018

On November 8th, Auterion hosted its first webinar on how to develop mobile applications with the MAVSDK. This SDK was co-developed by member companies of the Dronecode industry association. Lorenz Meier, Auterion co-founder and creator of PX4, opened the presentation by explaining the roadmap of the open source project. Jonas Vautherin, software engineer at Auterion, and Julian Oes, software engineer at Auterion and formerly of Yuneec Research, then took over to present the MAVSDK and showcase how it works with different programming languages.
Does that sound like something you would have loved to learn about but missed? Don’t worry: you can view the full presentation and read more details about the MAVSDK below.

The Goal

Every single use case involving a drone requires a ground station or a connection to the cloud: one needs to plan a mission, check that the drone is ready to fly, and control the mission during the flight. Many drone manufacturers provide an SDK to control their products, and our open source community is no different in this regard.
We already have tools for that: the MAVLink protocol is the de facto standard for open source autopilots, and there is a rich ecosystem of existing language bindings. So how is the MAVSDK different? The main difference is its design philosophy, which we can summarize in a few points:

  • We want to provide a high-level, user-friendly API to developers (MAVLink is too low-level for many use cases, even with language bindings).
  • We want to be cross-platform and support multiple programming languages in a consistent way.
  • We want to be scalable. Current solutions, such as DroneKit, are difficult to maintain because each language has its own implementation.
  • Performance and scalability need to allow usage in swarm scenarios, which calls for a highly efficient backend.
  • We want the SDK to be extensible for specific use cases and features.
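To illustrate the first point, here is a toy Python sketch (these class and method names are hypothetical, not the real MAVSDK API) of how a high-level, user-friendly call can hide the low-level, MAVLink-style message exchange underneath:

```python
# Toy illustration (not the real MAVSDK API): a high-level facade
# hiding a low-level, MAVLink-like message exchange.

class FakeLowLevelLink:
    """Stands in for a raw MAVLink connection."""
    def __init__(self):
        self.sent = []

    def send(self, msg_id, **fields):
        # A real link would serialize and transmit the message,
        # then wait for an acknowledgement from the autopilot.
        self.sent.append((msg_id, fields))
        return {"result": "ACCEPTED"}

class Action:
    """High-level 'plugin': one friendly call per user intent."""
    def __init__(self, link):
        self._link = link

    def arm(self):
        # Hides message IDs, magic command numbers and ack handling
        # (400 is MAVLink's MAV_CMD_COMPONENT_ARM_DISARM).
        ack = self._link.send("COMMAND_LONG", command=400, param1=1)
        return ack["result"] == "ACCEPTED"

    def takeoff(self, altitude_m=10.0):
        # 22 is MAVLink's MAV_CMD_NAV_TAKEOFF.
        ack = self._link.send("COMMAND_LONG", command=22, param7=altitude_m)
        return ack["result"] == "ACCEPTED"

link = FakeLowLevelLink()
action = Action(link)
assert action.arm() and action.takeoff(altitude_m=5.0)
print([msg_id for msg_id, _ in link.sent])  # → ['COMMAND_LONG', 'COMMAND_LONG']
```

The user-facing surface is `arm()` and `takeoff()`; everything protocol-specific stays behind the facade, which is the kind of split the SDK aims for.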

The last point deserves more attention. Due to the open source nature of our community, we are neither targeting a specific drone nor a certain manufacturer. In fact, we want different actors, e.g. service providers and drone manufacturers, to share and collaborate on a common API without preventing them from diversifying. All drones provide telemetry and basic mission features, but let’s say one drone carries a very specific sensor. A drone manufacturer should be able to add unilateral support for this sensor without even open sourcing it.

MAVSDK architecture

Our design addresses this philosophy using a bottom-up approach: we share a common MAVLink core (in C++) among all the different language implementations. This core implements the common MAVLink feature set. Features are divided into so-called “plugins” (you will find those plugins at all the abstraction levels). This is what we call the “C++ library” (see Fig. 1).

Fig. 1: MAVSDK architecture

All the various language wrappers are built on top of this foundation, meaning that we can expect the exact same behaviour between all of them (i.e. they actually share the MAVLink implementation). Because we want our API to be consistent across languages, and for maintainability reasons, we decided to auto-generate everything that goes on top of the C++ library (the “C++ backend” and the different “frontends”, Python being the example given in Fig. 1). The auto-generation is not completely finished yet; it is a work in progress and will be coming soon.
Finally, we went for a messaging architecture between our frontends and backend: the link between our language wrappers (Swift, Python, Java, …) and the C++ core is done through messages. This brings a few interesting possibilities, such as easily adding new plugins/sensors to the system, and it also completely abstracts MAVLink away. As a result, one could imagine writing a new backend for non-MAVLink drones that shares the very same API.
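The frontend/backend split can be sketched in a few lines of Python. This is a deliberately simplified toy (the message format, class names, and JSON transport here are made up for illustration, not the real MAVSDK wire protocol): the frontend only ever speaks the message format, so the backend behind it could be driving MAVLink or anything else.

```python
# Toy sketch of a messaging architecture between a frontend and a
# backend that routes messages to plugins. All names are hypothetical.
import json

class Backend:
    """Routes incoming messages to registered plugins."""
    def __init__(self):
        self._plugins = {}

    def register(self, name, plugin):
        self._plugins[name] = plugin

    def handle(self, raw):
        msg = json.loads(raw)
        plugin = self._plugins[msg["plugin"]]
        result = getattr(plugin, msg["method"])(**msg.get("args", {}))
        return json.dumps({"result": result})

class TelemetryPlugin:
    def position(self):
        # A real plugin would read this from the shared MAVLink core.
        return {"lat": 47.3977, "lon": 8.5456}

class Frontend:
    """A language wrapper only needs to speak the message format."""
    def __init__(self, backend):
        self._backend = backend

    def call(self, plugin, method, **args):
        raw = json.dumps({"plugin": plugin, "method": method, "args": args})
        return json.loads(self._backend.handle(raw))["result"]

backend = Backend()
backend.register("telemetry", TelemetryPlugin())
frontend = Frontend(backend)
print(frontend.call("telemetry", "position"))  # → {'lat': 47.3977, 'lon': 8.5456}
```

Adding a new plugin is just another `register()` call, and nothing MAVLink-specific ever crosses the message boundary, which mirrors the two advantages described above.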

Development environment

Developing directly on a drone can be tricky and cumbersome, which is why we offer simulators. There are essentially two setups: SITL and HITL (both nicely documented in the PX4 manual).
SITL (software in the loop, Fig. 2) is a simulator that runs exclusively on your computer. The autopilot runs just as if it were actually flying (but on your computer instead of the drone), and receives fake sensor information from the simulator (be it Gazebo, jMAVSim or something else). Again, detailed documentation can be found in the manual, but it is a matter of compiling PX4 and running a command such as “make posix jmavsim” (more details in the getting started guide). For running Gazebo headless (i.e. without graphical rendering), Jonas maintains an unofficial Docker image here.

Fig. 2: Software in the loop setup

HITL (hardware in the loop, Fig. 3) is one step closer to the real-life scenario: this time, the autopilot is actually running on the drone, but still receives fake sensor information from the simulator. This obviously requires a drone (as opposed to SITL), but it allows testing the sensors for real (e.g. a camera).

Fig. 3: Hardware in the loop setup

Questions from the webinar:

Does MAVSDK support artificial intelligence?
Sure! The SDK provides an interface for communicating with the components of the drone. You could definitely plug your artificial intelligence algorithms into the system, for instance to follow somebody based on deep learning applied to the camera feed.
How is the live video received from the camera in iOS?
Right now, we provide a link to the camera feed. Taking the Yuneec H520 as an example, you would receive a URL to an RTSP stream. In the Swift example app, we use MobileVLCKit, but one could use any other library for that.
What would be the advantage of running the backend SDK on the companion computer?
Say the companion computer has a USB connection to a custom sensor. If you run the backend on the companion computer, you can integrate this sensor into the MAVSDK without requiring any MAVLink. Another idea for a drone manufacturer would be to keep control over the MAVLink logic running in the backend (e.g. to make sure that the backend is always up to date). Or you may want to expose some of the SDK functionality to a ground station without exposing the MAVLink interface.
What about running the MAVSDK frontend on the companion?
There is no issue with that; all the frontend needs is a network connection to the backend. Whether the frontend and the backend both run on a smartphone or both run on the companion computer makes no difference to them!
You demonstrated the filtering of telemetry messages to display them only when a value changed. Does it save bandwidth? Where is the filtering performed?
The example in the webinar was done purely on the Swift side, so it doesn’t save “MAVLink bandwidth”. It is mainly a convenience on the frontend side, but it can also improve performance: say you perform relatively heavy computations on the relative-altitude events from the example; you would not want to receive more of them than necessary.
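That kind of filtering (often called “distinct until changed” in reactive libraries) can be sketched as a standalone Python generator. This is just an illustration of the idea, not the Swift code shown in the webinar:

```python
def distinct_until_changed(events):
    """Yield an event only when it differs from the previous one."""
    previous = object()  # sentinel that never equals a real event
    for event in events:
        if event != previous:
            yield event
            previous = event

# E.g. relative-altitude readings arriving at a high rate:
altitudes = [0.0, 0.0, 0.0, 1.5, 1.5, 3.0, 3.0, 3.0, 2.0]
print(list(distinct_until_changed(altitudes)))  # → [0.0, 1.5, 3.0, 2.0]
```

As noted above, the duplicates are dropped only after they reach the frontend, so this saves downstream processing, not link bandwidth.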
How many drones can the MAVSDK handle concurrently?
There is no hard limit, and it is possible to set up a system with more than one drone. Say you wanted to survey an area with 5 drones flying at the same time: you could do it with the MAVSDK. However, it is not optimized for swarm applications, so some work would likely be needed for a scenario with hundreds or thousands of drones. In other words, “a few drones” are definitely fine, but “a swarm” needs more thought!

Where to get started

Below are some links to the documentation, the Slack channel, and the community forum. You are welcome to ask questions or simply get in touch!

How to contribute

The best way to start contributing is to first get in touch with us (e.g. on Slack) so that we can align on what needs to be done. You can also create an issue on the GitHub repo to open a discussion about something you would potentially work on. Note that we label some issues as “beginner” to signal that they would be a good way to start contributing to the project.
Everybody is welcome to contribute:

  • As a user, report bugs when you encounter them (if possible, explain how to reproduce them).
  • Also as a user, you can help improve our documentation whenever something is unclear to you.
  • We run tests on CI systems (Jenkins, Travis, AppVeyor), using Docker. Let us know if you can help there!
  • The C++ library and the backend are written in C++, using CMake.
  • We currently have frontends in Swift and Python. If you have experience with Carthage, CocoaPods, SwiftPM, or Python packaging, we would be very happy to get some help.
