[Hello World!] - Simulation
In this continued series of fortnightly technical blogs, StreetDrone is going to lift the LiD(-AR) on one aspect of autonomous driving software development. Our “Hello World!” series is written by software engineer Efimia Panagiotaki, a tech-trotter with engineering experience ranging from the National Technical University of Athens to ETH Zurich to Williams Formula 1. Efimia is leading the development of StreetDrone’s autonomous driving software by leveraging Autoware, a ROS-based, open-source self-driving stack.
The general rule of thumb is: If it doesn’t work on the simulation, don’t bother trying it on the car.
There used to be the impression that, to evaluate and test a self-driving vehicle, you need to put wheels on the ground and cover as many real-world road miles as possible. This approach does indeed work for human-driven vehicles, where handling depends on the driver's ability to assess and react to an infinite number of road scenarios. But in self-driving cars, the decision pipeline (sense-plan-act) depends on how well your algorithms are developed, tested and evaluated for different road cases and in various environments.
Real-world road testing doesn't provide a complete and realistic evaluation of autonomous systems, as it's simply impossible to cover every possible edge case on the road: pedestrian behaviour, weather, traffic, unexpected driving scenes. As we move towards Level 4 and Level 5 autonomy, an accurate and powerful simulation is necessary to capture complex road scenarios.
Like autonomous systems design, though, simulation software is still improving. There are many tools out there that work well when targeting Level 2 and Level 3 autonomy, and even more tools for research and educational projects. Nevertheless, when designing the software for an autonomous vehicle, it is necessary to fit a simulation to the pipeline. Otherwise, you end up wasting valuable engineering time, which we all know is not cheap.
One of the simulators we are using for our open-source projects is Gazebo. With this tool we are aiming for:
- Rapid algorithmic testing in various scenarios and worlds
- Robot design on a powerful physics engine
- Regression testing
- Sensors testing, fault detection and handling
Plus, it works great with ROS and has a great community of developers supporting it.
On the other hand, this simulator isn't great for evaluating computer vision algorithms and can be very computationally heavy when testing advanced road settings with complex elements.
The StreetDrone (SD) Twizy Simulation Model is available on GitHub for ROS Kinetic (Ubuntu 16.04) and ROS Melodic (Ubuntu 18.04). The model was developed in collaboration with Dr. Sander Van Dijk, Head of Research at Parkopedia, aiming to achieve maximum correlation with the actual vehicle. On a more technical level, each of us will explain the area of the simulation we developed in more detail.
TIP: The SD-TwizyModel for ROS Kinetic (branch: master) can also now be controlled by the latest SD-VehicleInterface, which is responsible for the communication between the actual StreetDrone Twizy CAN bus and ROS Kinetic. The SD-VehicleInterface receives a /twist_cmd topic [geometry_msgs/TwistStamped] as input and converts it into an SD control command message.
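To make that conversion concrete, here is a minimal, self-contained sketch of mapping a TwistStamped-style command to the -100..100 percentage ranges the SD control interface uses. The function name and the scaling constants are illustrative assumptions, not the SD-VehicleInterface's actual (calibrated) conversion:

```python
def twist_to_sd_control(linear_x, angular_z,
                        max_speed=12.5, max_yaw_rate=1.0):
    """Map a twist-style command to SD control percentages.

    linear_x     -- target forward velocity in m/s (twist.linear.x)
    angular_z    -- target yaw rate in rad/s (twist.angular.z)
    max_speed    -- assumed full-throttle speed (illustrative value)
    max_yaw_rate -- assumed full-lock yaw rate (illustrative value)
    """
    clamp = lambda v: max(-100.0, min(100.0, v))
    torque_pct = clamp(100.0 * linear_x / max_speed)   # throttle/brake %
    steer_pct = clamp(100.0 * angular_z / max_yaw_rate)  # steering %
    return torque_pct, steer_pct
```

In a real node, this function would sit inside a rospy/roscpp subscriber callback on /twist_cmd and publish the resulting percentages as an SD control message.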
For robot description, ROS contains a parser for files in the Unified Robot Description Format (URDF). URDF is an XML-based format that describes all elements of a robot in a tree-structure representation. For use in Gazebo, some additional simulation-specific tags need to be included in the URDF. Internally, Gazebo automatically converts the URDF to SDF, which is the final description format the simulator uses.
* Simulation Description Format (SDF) is a complete description XML format for objects and environments. It is a stable, robust and extensible format capable of describing all aspects of robots, static and dynamic objects, lighting, terrain and physics. [source]
To build a complete robot representation in URDF for Gazebo, we need:
- Kinematic and dynamic robot description
- Visual representation (3D graphic file)
When developing the SD Twizy vehicle model, we design the robot with rigid links (element <link>) connected by joints (element <joint>). Following our software's TF tree for maintaining the relationships between coordinate frames in a tree structure, the main link for controlling the SD-VehicleModel is the base link (center of the rear axle). As the SD Twizy is a front-wheel-drive (FWD) vehicle, the steering joints connect the base link to the front axle, which controls the front wheels. The rear wheels are directly connected to the base link through the rear wheel joints.
The chassis joint connects the base link with the chassis link, on which all the sensors are mounted. In URDF, sensors are described by the <sensor> element, which specifies the basic properties of a perceptual sensor.
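The link/joint structure described above can be sketched in URDF roughly as follows. This is an illustrative skeleton only; the link and joint names, types and axes are placeholders, not the actual SD-TwizyModel URDF:

```xml
<robot name="sd_twizy_sketch">
  <link name="base_link"/>               <!-- center of the rear axle -->
  <link name="chassis_link"/>            <!-- sensors mount here -->
  <joint name="chassis_joint" type="fixed">
    <parent link="base_link"/>
    <child link="chassis_link"/>
  </joint>
  <link name="rear_left_wheel"/>
  <joint name="rear_left_wheel_joint" type="continuous">
    <parent link="base_link"/>
    <child link="rear_left_wheel"/>
    <axis xyz="0 1 0"/>
  </joint>
  <!-- ...the front axle, steering joints and remaining wheels
       follow the same parent/child pattern... -->
</robot>
```

You can inspect the SDF that Gazebo produces from a URDF with `gz sdf -p model.urdf`, which is handy when debugging why a tag did or didn't survive the conversion.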
Tuning is necessary to accurately represent the vehicle's physics in Gazebo; this is done by exploiting the various properties available under the <gazebo> element for links and joints.
Robot Controls - Dr. Sander Van Dijk
With the robot description now in place, a simulated version of the vehicle can be spawned into Gazebo. However, at this point it doesn't really do anything. Some sort of control interface is needed to actually be able to control it from a ROS node and make it move. There are two main ways to achieve this:
1. Use ros_control, a well-known set of ROS packages that offers a wide range of abstractions and tools to set up the control of hardware in a standard way. Gazebo comes with plugins that implement the interfaces that ros_control expects.
2. Create a Gazebo model plugin. Similar to the ros_control plugins, you can create and load your own custom plugins for Gazebo that give you direct access to all the inner workings of Gazebo.
The first iteration of the control interface for the simulated SD Twizy used the first option, which provides a familiar, standard interface for ROS users. However, precisely because ros_control abstracts away a lot of things for you, you have less freedom in tuning the physical and control models. For instance, you can control joints (motors) based on target angles, angular velocities or torque, but it is not directly possible to apply the velocity-dependent torque needed to accurately simulate a car's brake system. That is why the second iteration was based on a custom Gazebo plugin.
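To see why the brake torque is velocity-dependent, here is one way such a torque could be sketched. The constants are illustrative assumptions, not measured SD Twizy values, and the fade-out near standstill is just one possible way to keep the brake from spinning the wheel backwards:

```python
import math

def brake_torque(wheel_velocity, brake_pct, max_torque=250.0):
    """Velocity-dependent braking torque in N*m (rough sketch).

    The torque always opposes the wheel's current rotation and fades
    out near standstill, so a held brake can never reverse the wheel.
    max_torque is an illustrative constant.
    """
    if abs(wheel_velocity) < 1e-3:          # wheel (almost) stopped
        return 0.0
    magnitude = max_torque * brake_pct / 100.0
    # Fade the torque in over the last 1 rad/s to avoid sign flips
    magnitude *= min(1.0, abs(wheel_velocity))
    return -math.copysign(magnitude, wheel_velocity)
```

A torque like this cannot be expressed as a fixed target angle, velocity or torque in the standard ros_control joint interfaces, which is exactly the limitation the custom plugin works around.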
The Open Source Robotics Foundation's Prius-based car demo provided the major inspiration for this plugin. The main adjustments compared to that demo mirror the real SD Twizy: its physical constants and dimensions, as well as its control parameters and units. Some of the plugin's features, partly made possible by integrating directly with Gazebo, are:
- Ackermann steering to control the front wheels based on the desired steering wheel angle.
- Air friction and simulated regenerative-braking friction, which slow down the vehicle when no control is provided.
- A ground-truth odometry topic, to test localisation methods against, or to test higher-level logic without having to worry about localisation error.
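The Ackermann steering from the list above boils down to a small geometric calculation: the inner wheel must turn more sharply than the outer wheel so both trace circles around the same point. A minimal sketch, where the wheelbase and track values passed in the usage note are illustrative, not the SD Twizy's actual dimensions:

```python
import math

def ackermann_angles(steer_angle, wheelbase, track):
    """Per-wheel front steering angles (rad) for a centreline angle.

    steer_angle -- desired steering angle of a virtual centre wheel
    wheelbase   -- distance between front and rear axles (m)
    track       -- distance between the left and right wheels (m)
    """
    if abs(steer_angle) < 1e-6:
        return 0.0, 0.0
    # Turn radius measured at the centre of the rear axle
    radius = wheelbase / math.tan(steer_angle)
    left = math.atan(wheelbase / (radius - track / 2.0))
    right = math.atan(wheelbase / (radius + track / 2.0))
    return left, right
```

For a left turn (positive angle) the left wheel is the inner wheel, so `ackermann_angles(0.2, 1.7, 1.1)` returns a left angle slightly above 0.2 rad and a right angle slightly below it.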
The plugin uses a custom topic and message format, sd_control_msgs.Control, so that the simulated vehicle can be controlled directly with the same low-level attributes as the real vehicle: a brake/throttle strength between -100 and 100, and a steering amount between -100 and 100. Because ROS makes it very easy to transform topics and messages, it is good practice to start with such a specific interface and then build abstractions around it, letting ROS users do their work without having to worry about the concrete vehicle being used. The SD-TwizyModel comes with some extra bits to help with that:
- A joystick tele-operation node that translates commands from the standard ROS joy node to SD control commands. (ROS Melodic and ROS Kinetic)
- A keyboard tele-operation node that translates keyboard input to SD control commands. (ROS Kinetic)
- A node to bridge between the Autoware.AI framework and the plugin. (ROS Melodic)
Being able to easily interface with Autoware.AI gives direct access to its comprehensive stack of autonomous driving technology, from sensing and localisation/SLAM packages to planning and control nodes, including Pure Pursuit and Model Predictive Control.
These control nodes can output targets in different formats, including 'twist' (combined target linear and angular velocities) or 'control commands' (combined target linear velocity, possibly with desired acceleration, and steering angle). For the second iteration, the control command format was chosen, since it better matches the control dimensions of a vehicle; in contrast, the twist format allows asking for zero linear velocity with non-zero angular velocity, which a car cannot achieve (even though that could be super useful for parking...). Still, the linear velocity needs to be mapped to the final throttle percentage, which the bridge achieves with a standard PID controller.
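A standard PID controller of the kind the bridge uses can be sketched in a few lines. The class name and gains below are illustrative, not the bridge's actual implementation or tuning:

```python
class SimplePID:
    """Textbook PID with output clamping to the SD control range."""

    def __init__(self, kp, ki, kd, out_min=-100.0, out_max=100.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = None

    def step(self, target, measured, dt):
        """One control step: target/measured velocity -> throttle %."""
        error = target - measured
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None \
            else (error - self.prev_error) / dt
        self.prev_error = error
        out = (self.kp * error
               + self.ki * self.integral
               + self.kd * derivative)
        return max(self.out_min, min(self.out_max, out))
```

Each cycle, the bridge would feed the controller the target linear velocity from the control command and the vehicle's current speed, then publish the clamped output as the throttle/brake percentage.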
The last step in getting the full control system to work properly was to tune the physics parameters to prevent instability and improve accuracy. A common issue one can run into with Gazebo is that its default physics engine, ODE, does not preserve energy. Especially for robots that form a closed kinematic chain, which is basically any robot with multiple support points and joints between those points (like legged robots, or in this case vehicles with multiple wheels touching the ground), contact constraints can build up runaway counter-forces that make the model fly off or explode. With the SD Twizy, for instance, this at one point made it behave like a pimped-up lowrider with overly active hydraulic suspension! A solution that usually works is to allow the support points to sink slightly into the ground, using Gazebo's <minDepth> attribute. This prevents small inaccuracies in collision handling from causing the build-up of 'error reduction' forces.
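In a URDF's Gazebo extension block, that fix looks roughly like the fragment below. The link name and the numeric values are illustrative; the reference must match a link in your own robot description, and the right <minDepth> depends on the model's scale:

```xml
<gazebo reference="front_left_wheel">
  <minDepth>0.001</minDepth>  <!-- let the tyre sink 1 mm into the ground -->
  <mu1>0.9</mu1>              <!-- friction coefficients, illustrative -->
  <mu2>0.9</mu2>
</gazebo>
```

With a small non-zero <minDepth>, tiny penetration errors are absorbed by the allowed overlap instead of being corrected by large restoring forces.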
If you'd like to get in touch with StreetDrone, please email us at: email@example.com