Autonomous driving: Part 2: Project Lucifer
Updated: Dec 1, 2021
As discussed in the previous part, the second iteration of the project still had a fair number of flaws. I wanted to make sure this version had none of them, and that the hardware, at least, would be the “long-term support” kind rather than something I would have to update every year. This project gave me valuable insights into real-world engineering and was thus named Lucifer (Latin for "the bringer of light/enlightenment"). Other silly reasons for calling it Lucifer are:
The Ground control system in this version can support voice commands, allowing me to send the car into autonomous mode by saying "lucifer take the wheel".
The sporty/race mode is called luciferous because Tesla has already trademarked Ludicrous mode.
Version 3.0: Lucifer (2019-20xx)
To get long-term support, the first thing I had to do was switch from the Atmega328P to the STM32 “blue pill”. The blue pill offered significantly more processing speed and memory, as well as better peripheral support (multiple UART ports, for example). Once the code for the hardware abstraction was ready, I started designing the harness for the autopilot. In the previous iterations, the autopilot was designed around the car, which always forced some sort of compromise (wire length, sensor placement, magnetic interference). For this iteration, I decided to redesign the car around the autopilot system. The optical flow sensor was the centerpiece of the autopilot, and it needed to sit in the middle of the car, which meant redesigning the chassis. The image on the left compares the standard chassis parts that come with the HPI Sprint 2 (top) with the fabricated chassis parts (bottom). The image on the right shows a 3D view of how those components fit together, the light purple component being the autopilot box/harness.
The magnetometer-accel-rate-gyro (MARG) sensor, the optical flow sensor, and the GPS were all soldered directly onto the dot board instead of being connected by jumper wires. The sensor wires were also significantly shorter (5 cm, or about 2 in, max).
The motor in the HPI Sprint 2 is mounted such that the magnetic field of the rotor passes through the MARG sensor, so a soft-iron casing was placed over the motor to short the leaking magnetic flux. This reduced the low-frequency magnetic interference by 75%.
The Ublox LEA-6H GPS was ditched for a Ublox M8N GPS
The illumination intensity was increased four-fold (400 W/m²)
Separate failsafe controller (Atmega328P)
Xbee Pro module for telemetry with ground control
Separate power module for the control board (5V/5A switching BEC)
Communication support for companion computer over UART
The drivetrain was converted from four-wheel-drive to rear-wheel-drive/braking as the belt for the front axle would have to go through the microcontroller. This made the control a bit more challenging.
On the software end, the state estimator was improved significantly.
The AHRS now used a Kalman filter with tilt compensation for the compass heading and bias estimation for the gyroscope.
The position estimator now considered the delay in the GPS packets and also compensated for the position offset of the sensors from the rear axle.
The highlight of this build's state estimator was the use of virtual/derived sensors to aid in positioning when both the GPS and the optical flow were unreliable.
Use of a model-aware non-linear speed controller.
In addition to direct curvature control, yaw rate correction now used exponential control instead of proportional control.
Support for preemptive braking in the control system (model-aware; requires a model of the vehicle and is currently designed for rear-wheel drive and rear-wheel braking setup. This is still a WIP).
Ground control system and telemetry for debugging as well as ease of operation (sending waypoints on the fly, changing modes, engaging failsafe measures).
Integration of the Jevois A33 smart camera (currently only records video data).
Offline trajectory optimization
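As a side note, the tilt compensation for the compass heading mentioned above works roughly as follows. This is a sketch, not the code from the build; axis and sign conventions differ between sensor breakouts, so treat it as a template rather than drop-in code:

```python
import math

def tilt_compensated_heading(ax, ay, az, mx, my, mz):
    """Compass heading (rad) with roll/pitch compensation.

    Axes assumed: x forward, y right, z down (NED-style body frame).
    """
    # Roll and pitch from the accelerometer (assumes quasi-static motion).
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))

    # Rotate the magnetometer reading back into the horizontal plane.
    mxh = (mx * math.cos(pitch)
           + my * math.sin(roll) * math.sin(pitch)
           + mz * math.cos(roll) * math.sin(pitch))
    myh = my * math.cos(roll) - mz * math.sin(roll)

    # Heading measured from magnetic north.
    return math.atan2(-myh, mxh)
```

Without this de-rotation step, a magnetometer-only heading drifts badly the moment the car pitches or rolls, because the vertical component of the Earth's field leaks into the horizontal reading.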
In simple terms, it is possible to measure the car's speed (in the body-frame longitudinal direction only) when it turns by combining data from the accelerometer and the gyroscope, as the centripetal acceleration is the product of the absolute speed and the yaw rate of the car.
speed = lateral_acceleration / yaw_rate
As there exists no extra physical sensor for this measurement, it may be considered as a "virtual" or "derived" sensor. The graph on the left shows a position plot of the car when the car was moving in circles, obtained only by integrating the speed measurements. The plot on the right shows the speed estimate across time. The variations in position, as well as speed, are present because the motor controller does not directly control the speed and sometimes creates oscillations on its own under load.
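A minimal sketch of this derived sensor; the `min_yaw_rate` cutoff is an illustrative guard, not a value from the build:

```python
def virtual_speed(lateral_accel, yaw_rate, min_yaw_rate=0.2):
    """Derive body-frame longitudinal speed from centripetal acceleration.

    In a turn, a_lat = v * omega, so v = a_lat / omega. This is only
    valid while the car is actually turning: below min_yaw_rate (rad/s)
    the division amplifies sensor noise, so we return None and let the
    estimator fall back on its other sources.
    """
    if abs(yaw_rate) < min_yaw_rate:
        return None
    return lateral_accel / yaw_rate
```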
The other virtual sensor in this build is a mathematical model of the propulsion system which predicts the speed of the car at any instant of time based on the throttle input. As the model parameters may not be accurately known, the parameters are adjusted on the fly by using the state estimate (which was quite oscillatory) from the other sensors as feedback. The plot below shows the system starting from completely bogus parameters and correcting them over time by using the state estimator output:
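A toy version of this idea: a first-order propulsion model whose gain is corrected online using the estimator's speed as feedback. The model structure, parameter names, and adaptation rule below are my own simplified stand-ins, not the ones actually used in the build:

```python
class PropulsionModel:
    """First-order propulsion model v' = (K*u - v)/tau, with the gain K
    adapted online from the (possibly noisy) state-estimator speed."""

    def __init__(self, K=1.0, tau=0.5, adapt_rate=0.05):
        self.K = K                    # throttle-to-speed gain (may start bogus)
        self.tau = tau                # drivetrain time constant, seconds
        self.v = 0.0                  # predicted speed
        self.adapt_rate = adapt_rate  # how aggressively K is corrected

    def step(self, throttle, dt, v_measured=None):
        # Predict speed from the throttle input alone.
        self.v += dt * (self.K * throttle - self.v) / self.tau
        # When a speed estimate is available, nudge K to shrink the error.
        if v_measured is not None and abs(throttle) > 1e-3:
            error = v_measured - self.v
            self.K += self.adapt_rate * error * throttle
        return self.v
```

Even starting from a completely wrong gain, the error feedback pulls the parameter toward the true value over a few seconds of driving, which mirrors the behaviour shown in the plot.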
You can find more details about this approach in my paper on this state estimator. This video shows the localization accuracy in bad GPS conditions.
Another advantage of the mathematical model for the propulsion system is that the speed controller can use the inverse of the model to find an approximate baseline value for the throttle command and use a PI controller to manage the smaller perturbations. Apart from this, a yaw correction system for stability was added. This system corrects the steering angle in proportion to the yaw-rate error; thus, if the car oversteers, the system applies counter-steer to catch the slide. The previous iteration also had this system; however, it used a fixed-gain proportional controller, which has the following drawbacks:
If the gain for corrections is high enough to catch snap oversteer, it will also be extremely twitchy when going straight if the surface is uneven (the road that appears flat to you appears quite bumpy to a 1/10th scale car) and this twitchy behavior itself may cause the car to spin out.
If the gain is lowered to prevent twitchiness at high speed, the gain may be insufficient to catch snap oversteer.
I have faced a similar dilemma before in my quadcopter build. For problems like these, I use an exponential or variable gain P-controller, in which the gain increases in accordance with either some external factor or in accordance with the error itself. This allows me to use a smaller gain for straight-line stability while still having a system that can respond to snap oversteer. This is what the system looks like in action:
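A sketch of such a variable-gain controller, here with the gain growing exponentially with the error magnitude; the gains and saturation limit are illustrative values, not the tuned ones from the build:

```python
import math

def yaw_correction(yaw_rate_error, kp=0.05, k_exp=2.0, max_correction=0.35):
    """Variable-gain P controller for counter-steer.

    The effective gain grows with the magnitude of the error itself, so
    small errors from road bumps get a soft response while a
    snap-oversteer spike is met with a much stronger one.
    """
    gain = kp * math.exp(k_exp * abs(yaw_rate_error))
    correction = gain * yaw_rate_error
    # Saturate at the steering limit (rad).
    return max(-max_correction, min(max_correction, correction))
```

For small errors this behaves almost like a low-gain P controller; for large errors the exponential term dominates and the output quickly hits the steering limit.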
Preemptive braking requires knowing when to start applying brakes for an upcoming turn, which requires knowing what the required speed is and at what point the car needs to hit that speed. Thus, it needs the location of the curvature maxima.
Computing the curvature maxima would usually be a linear-search process: if the required accuracy is 0.1%, the worst case is 1000 steps, with the average case being around 600 steps. With each step taking 15 microseconds to execute, this would not be feasible. Thus, I came up with geometry-based approximations for where the maxima would occur, which act as the initial guess for a Newton-Raphson optimizer that finds each maximum to within 0.1% accuracy in 2-3 steps. This reduced the computational requirements significantly and made real-time curvature maxima calculation possible.
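The refinement step can be sketched as a Newton-Raphson polish of an externally supplied initial guess (the geometric approximation itself is not reproduced here, and the function names are illustrative):

```python
def refine_curvature_maximum(kappa, s0, h=1e-4, iterations=3):
    """Polish an initial guess s0 for a local curvature maximum by
    running Newton-Raphson on kappa'(s) = 0, using central finite
    differences. `kappa` is any callable returning curvature at path
    parameter s; the geometric initial guess is computed elsewhere.
    """
    s = s0
    for _ in range(iterations):
        d1 = (kappa(s + h) - kappa(s - h)) / (2 * h)              # kappa'
        d2 = (kappa(s + h) - 2 * kappa(s) + kappa(s - h)) / h**2  # kappa''
        if abs(d2) < 1e-12:
            break
        s -= d1 / d2
    return s
```

Because curvature is locally well approximated by a quadratic near its peak, each Newton step roughly squares the accuracy, which is why 2-3 steps from a decent initial guess suffice where a blind linear search needs hundreds.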
Here is a link to a video demonstration of preemptive braking (you can skip to the 1:03 mark; this feature is still a WIP). The purpose of this feature is to act as a velocity profile generator, except that instead of generating a velocity profile iteratively, I am trying to find a closed-form solution for the braking point dynamically (on the fly). The reason I want to do this dynamically is that in racing, the car's trajectory won't remain the same as the one planned offline. The offline trajectory is merely a suggestion, as opponent agents will block the trajectory you want to take. The system must therefore create new trajectories, and so it needs to be capable of finding the new braking point on the fly.
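Under the simplifying assumptions of a point-mass model with a fixed lateral-acceleration limit and constant braking deceleration, the closed-form braking point looks roughly like this. This is a sketch of the underlying kinematics only, not the actual model-aware, rear-wheel-braking implementation:

```python
import math

def braking_point(v_now, kappa_max, a_lat_max, a_brake):
    """Distance before the curvature maximum at which braking must start.

    The corner speed comes from the lateral-acceleration limit
    (a_lat = v^2 * kappa), and constant-deceleration kinematics
    (v^2 = v_corner^2 + 2*a*d) give the braking distance in closed form.
    """
    v_corner = math.sqrt(a_lat_max / kappa_max)
    if v_now <= v_corner:
        return 0.0  # already slow enough for the corner
    return (v_now ** 2 - v_corner ** 2) / (2.0 * a_brake)
```

Because every quantity here is available in closed form, the braking point can be recomputed every control cycle as the trajectory changes, instead of being baked into an offline velocity profile.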
Offline trajectory optimization:
The offline trajectory optimization takes ordered waypoint regions, i.e., (X, Y, radius), as input arguments and finds the optimal (X, Y, heading) coordinates for the car to pass through such that the resulting trajectory is optimal (what counts as "optimal" can be decided by the user). The cost function is composed of a family of weighted potential fields (or sub-cost functions). These sub-cost functions are:
Total (absolute) curvature
The derivative of curvature
The second derivative of curvature
The overall length of the trajectory
C2 continuity between two consecutive sections
The sum of section-times (will be replaced with overall lap-time soon).
The weights for these sub-functions can be changed by the user. They are pre-multiplied by a normalizing factor so the weights can be adjusted relative to each other (a weight vector of [2,2,2,2,2,2] will have the same effect as a weight vector of [0.1,0.1,0.1,0.1,0.1,0.1]).
The optimizer takes advantage of the fast maximum-curvature-finding approach discussed in the previous section, resulting in much faster optimization. The other components of the optimizer are not yet optimized, so the full-track solution takes about 7 seconds on my 2.8 GHz i7 processor. I think a further speed-up is possible with an MPPI-like approach, where the GPU computes the costs for multiple scenarios and we take the weighted average.
This particular version doesn’t have “flaws” because it is a long-term support version; instead, more features will be added over time. The features planned for the future are:
Computer vision/LIDAR based obstacle avoidance.
Real-time trajectory optimization around obstacles (related to the item above)
This project is still a work in progress and so this story will be updated as and when new features are integrated into the system!