# Unified state control for planes using quasi optimal trajectories

Updated: Sep 26

### Preface:

In 2017, when I was working on the second iteration of my mini self-driving car, I needed a lightweight local controller whose performance would look like that of an MPC without the computation an MPC requires. I came up with what I call the Bezier curve control system (see the "My Approach" section). I’ve applied this system to many different kinds of problems, but they were always two-dimensional. As an aeromodelling enthusiast, I always felt that the existing control system used by Ardupilot/px4, which uses separate controllers for altitude and horizontal position, could be improved. I wanted to create a unified controller that controls the pose of an airplane in 3D by extending the Bezier control system to all three dimensions. This first iteration of the blog only contains a rough overview of the system, without the comparisons with Ardupilot. This is just to “get my foot in the door”, the blog equivalent of publishing a paper on arXiv to lay claim to an idea.

### Introduction:

With the advent of aerial delivery, mapping, surveying, and so on, research in the field of autonomous aircraft control has become increasingly popular. The position control problem for fixed-wing aircraft is conventionally solved by splitting it into two separate problems: horizontal position control and vertical position control. This makes sense for most applications, as changes in vertical position (altitude) occur much more slowly than changes in horizontal position. However, under circumstances where all three dimensions change rapidly, such as in a vertical spiral, the two systems can fight against each other. To deal with this issue, we need unified control of all three dimensions.

### Prior art:

Note: I will be posting videos of the Ardupilot flight stack following a similar trajectory with the same waypoints; I just haven’t had the time to do it yet. I know from experience that Ardupilot does have problems controlling XY and Z positions together when spiraling downwards (upward spirals are easier as the roll angle is farther from 90 degrees).

Ardupilot/px4 is an open-source autopilot that has been used in a variety of autonomous unmanned aerial systems. Ardupilot/px4 is an obvious benchmark because its control system is proven to work in the real world. The position control system for fixed-wing planes used in Ardupilot has two parts:

1. TECS: The TECS (Total Energy Control System) manages the exchange between the vehicle’s kinetic energy (i.e. speed) and its potential energy (i.e. altitude). Its inputs are a target speed and height, and it attempts to reach these targets by calculating target throttle and pitch values, which are then passed to the lower-level pitch and throttle controllers. You can read about Ardupilot’s implementation here.

2. L1 controller: The L1 controller converts an origin and destination (each expressed as latitude and longitude) into a lateral acceleration that makes the vehicle travel horizontally along the path from the origin to the destination. Ardupilot uses a slightly modified version of this approach. You can read more about it here.

These systems then provide roll, pitch, yaw rate, and throttle commands to the angle-control and speed-control systems. However, as stated in the introduction, vertical spiral-like situations where the bank angle gets close to 90 degrees (or more if you’re traveling fast) can be problematic for this kind of controller, as (in Ardupilot’s implementation at least) the TECS only uses the elevator and throttle to control height. In such a situation, the TECS might try to use the pitch angle to gain height, but end up changing the turning force in the process: with the bank angle close to 90 degrees, changing the elevator input affects the turning radius instead.

### My approach:

To solve this issue, I took inspiration from my mini-self-driving car project that exploits properties of a Bezier curve to go from state space to control space. The system first creates a quasi-optimal trajectory in state space and then finds the control actions required to follow this trajectory. I came up with the Bezier control system when I ran into the problem of not having enough computational power to run an MPC (see also: being broke) but wanted MPC-like performance.

If I were using an MPC, it would create a trajectory from the vehicle's current pose to the goal, or to a point on the trajectory at its horizon. The optimal trajectory would, at a minimum:

• Have the tangent aligned closely with the velocity vector's direction at the vehicle's current pose and goal pose.

• Have C2 continuity.

Thus, instead of "finding" the optimal trajectory, it is possible to use these minimum conditions of optimality as boundary conditions for a parametric curve (a Bezier curve). This results in a quasi-optimal state trajectory. The following figure illustrates this idea; the arrows at the ends are the direction vectors corresponding to the boundary conditions. If you have a little more computational power, you can treat the quasi-optimal trajectory as an "initial guess" for an optimization problem.

Note that a cubic Bezier curve needs 4 control points, 2 of which are the endpoints of the trajectory; the other two lie along the direction vectors of the endpoints' poses (due to the initial and final heading constraints). Thus, instead of optimizing for the x, y, z locations of the intermediate control points, we only need to optimize for the distance of each intermediate control point from its endpoint. This reduces a 6-dimensional optimization problem (two 3D points) to a 2-dimensional one (two 1D distances along a line), which provides a significant speed advantage. If parallel computation is available, an MPPI-based approach would also work here.
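To make the construction concrete, here is a minimal sketch of building such a curve: the two endpoints and their unit direction vectors are fixed by the boundary conditions, and the only free parameters are the two handle lengths `d0` and `d1`. This is an illustration of the idea, not the project's actual code, and all names are mine.

```python
import numpy as np

def cubic_bezier(p0, t0, p1, t1, d0, d1, n=50):
    """Quasi-optimal trajectory as a cubic Bezier. p0/p1 are the endpoint
    positions, t0/t1 the unit direction vectors at each endpoint, and
    d0/d1 the only free parameters: distances of the two interior
    control points along the endpoint headings."""
    c0 = np.asarray(p0, float)
    c3 = np.asarray(p1, float)
    c1 = c0 + d0 * np.asarray(t0, float)  # lies along the start heading
    c2 = c3 - d1 * np.asarray(t1, float)  # lies "behind" the goal heading
    s = np.linspace(0.0, 1.0, n)[:, None]
    # Bernstein form of a cubic Bezier
    return ((1 - s)**3 * c0 + 3 * (1 - s)**2 * s * c1
            + 3 * (1 - s) * s**2 * c2 + s**3 * c3)

# start at the origin heading +x, goal at (10, 0, 5) also heading +x
curve = cubic_bezier([0, 0, 0], [1, 0, 0], [10, 0, 5], [1, 0, 0],
                     d0=3.0, d1=3.0)
```

An optimizer (or MPPI sampler) would then only search over `(d0, d1)` rather than over six coordinates.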

There are, however, limitations to this. For example, if the time horizon is too short, the Bezier curve forces the trajectory to connect to the target with significant curvature. Such limitations are an artifact of the Bezier system not accounting for the model of the vehicle itself.

In this project, I have tried extending this technique into 3D. In 3D, it uses the direction/tangent vectors and normal vectors of the trajectory to find the body-frame rate of rotation. I say vectors (plural) and not vector (singular) because it uses the difference in orientation between the current vector and the vector some distance ‘S’ ahead along the trajectory. This distance ‘S’ is the current speed multiplied by the plane's control time constant (the inverse of the controller's bandwidth). The above figure shows an example of a Bezier curve with the normal and direction vectors represented by the red and green arrows respectively. This is roughly what the controller "sees". Notice how the normal vector rotates around the trajectory: it indicates the direction of the turning force required, and thus the "roll" around the trajectory. The direction vector points along the trajectory and gives me the body-frame pitch and yaw changes. I also make use of the curvature at every point to find the required turning force, but that is a little harder to visualize as-is.
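The tangent, normal, and curvature at any point follow directly from the first and second derivatives of the cubic Bezier. The sketch below shows one way to compute them; it is my own illustration of the frame computation described above, not the project's code.

```python
import numpy as np

def bezier_derivatives(ctrl, s):
    """First and second derivatives of a cubic Bezier at parameter s.
    ctrl is a (4, 3) array of control points."""
    c0, c1, c2, c3 = ctrl
    d1 = (3 * (1 - s)**2 * (c1 - c0) + 6 * (1 - s) * s * (c2 - c1)
          + 3 * s**2 * (c3 - c2))
    d2 = 6 * (1 - s) * (c2 - 2 * c1 + c0) + 6 * s * (c3 - 2 * c2 + c1)
    return d1, d2

def frame_and_curvature(ctrl, s):
    """Unit tangent, principal normal, and curvature at parameter s."""
    d1, d2 = bezier_derivatives(np.asarray(ctrl, float), s)
    speed = np.linalg.norm(d1)
    tangent = d1 / speed
    # component of the second derivative perpendicular to the tangent
    perp = d2 - np.dot(d2, tangent) * tangent
    n = np.linalg.norm(perp)
    normal = perp / n if n > 1e-9 else np.zeros(3)  # undefined on straight runs
    curvature = np.linalg.norm(np.cross(d1, d2)) / speed**3
    return tangent, normal, curvature
```

Evaluating this at the current location and at the look-ahead parameter corresponding to distance S gives the two frames whose difference drives the controller. Note the straight-line case where the normal is undefined; this is exactly the "flipping" limitation discussed later.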

Using the direction and normal vectors together is equivalent to using quaternions. However, it is faster to solve the problem in vector space than to construct the quaternion for a future position from the vectors and then take the difference of the quaternions. It also allows for a bit more flexibility in how you deal with the normal "flipping" when the body-frame vertical acceleration becomes small (this is a limitation of the method regardless of whether you use a quaternion representation or not).
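The vector-space shortcut mentioned above amounts to extracting an axis-angle rotation from two unit vectors: the cross product gives the rotation axis and the arccosine of the dot product gives the angle. A hypothetical helper, for illustration only:

```python
import numpy as np

def rotation_between(v_now, v_ahead):
    """Rotation vector (axis * angle) taking unit vector v_now to v_ahead.
    Cross product -> axis, arccos of the dot product -> angle."""
    axis = np.cross(v_now, v_ahead)
    n = np.linalg.norm(axis)
    angle = np.arccos(np.clip(np.dot(v_now, v_ahead), -1.0, 1.0))
    if n < 1e-9:                 # parallel vectors: no rotation needed
        return np.zeros(3)
    return (axis / n) * angle

rot = rotation_between(np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))
# a 90-degree rotation about +z
```

Applying this to the direction vectors yields the body-frame pitch/yaw change, and applying it to the normal vectors yields the roll change.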

The system only cares about the difference between the current orientation and the next, not the absolute orientation. The Bezier trajectory automatically adapts (because a new one is computed every cycle) to accommodate cross-track and orientation errors. This essentially makes it an "error-state" controller.

The speed control is naive: a feed-forward + P controller. The controller bandwidth depends on the bandwidth of the end-effectors; I assume this bandwidth to be 20 Hz in my demonstration (reasonable for a 2-meter-wingspan plane weighing 14 kilograms).
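A feed-forward + P speed loop of this kind can be sketched as follows. The linear steady-state throttle model and the gain values here are illustrative placeholders, not tuned numbers from the project:

```python
def throttle_command(target_speed, measured_speed, kp=0.02, ff_gain=0.003):
    """Feed-forward plus proportional speed control.
    The feed-forward term is the steady-state throttle for the target
    speed; the P term handles small perturbations around it."""
    feed_forward = ff_gain * target_speed
    correction = kp * (target_speed - measured_speed)
    # clamp to a normalized throttle range of [0, 1]
    return min(max(feed_forward + correction, 0.0), 1.0)
```

In practice the feed-forward map would come from the airframe's measured throttle-vs-speed curve rather than a single gain.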

The actual control system uses the velocity vector’s orientation instead of the aircraft’s orientation, as there will be some difference between the two due to side-slip and angle of attack. An integral controller is used for managing the body-frame forces. This is important for reducing the cross-track error, and especially useful during landing, when (pitch-axis) cross-track errors can grow quite a bit due to lack of airspeed. Last but not least, the navigation controller requires a body-frame rotation-rate controller that is properly tuned and set up (minimal phase lag and attenuation).

The system is designed to follow an arbitrary trajectory, but I generate this trajectory by connecting discrete waypoints with bezier curves.

The algorithm can be laid out as follows:

For each control time step:

1. Get the current goal pose and transform it into the body reference frame

2. Construct a cubic-bezier trajectory from the current pose to the goal pose

3. Find the direction vector, normal vector, and curvature on the trajectory at the current location as well as at a location “S” distance ahead. S = control_time_constant*instantaneous_speed.

4. Find the difference in orientation (I use the cross product to find the direction and inverse cosine of the dot product to find the angle).

5. Use the normal vector’s change in orientation to get the change in body-frame roll

6. Use the direction vector’s change in orientation to get the change in body-frame pitch and yaw.

7. Divide the change in roll-pitch-yaw by the control time constant to find the required body-frame rotation rate. Call it Orient_Rate.

8. Use the instantaneous speed and the curvature found in step 3 to find the required acceleration in the body-frame pitch direction.

9. Find the error between required acceleration and measured acceleration.

10. Divide the above quantity by the current speed to find the equivalent error in the rate of rotation (Acceleration/speed = rate of rotation).

11. Integrate the above rate error. Call it Force_Rate.

12. The net required body rate of rotation is the sum of Force_Rate and Orient_Rate.

13. The speed control uses the steady-state model and a P controller for smaller perturbations.

14. Pass rate demands to rate controller and throttle commands directly to the engine servo/ESC.
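The core of the loop above (steps 1–12) can be condensed into a single function. This is a simplified sketch under several assumptions of mine: the goal is already expressed in the body frame, the Bezier handle lengths are fixed at a third of the goal distance, arc length is crudely approximated, and the Force_Rate correction acts on the pitch axis. It is not the author's implementation.

```python
import numpy as np

def body_rate_command(goal_pos, goal_dir, speed, accel_meas, force_rate,
                      tau=0.05, dt=0.02):
    """One control cycle. tau is the control time constant (1/bandwidth),
    force_rate the running integral from step 11. Returns the net body
    rate demand and the updated integral."""
    # steps 1-2: cubic Bezier from the body-frame origin to the goal
    p0, p3 = np.zeros(3), np.asarray(goal_pos, float)
    handle = np.linalg.norm(p3) / 3.0
    p1 = p0 + handle * np.array([1.0, 0.0, 0.0])   # current heading = body x
    p2 = p3 - handle * np.asarray(goal_dir, float)

    def deriv(s):
        d1 = 3*(1-s)**2*(p1-p0) + 6*(1-s)*s*(p2-p1) + 3*s**2*(p3-p2)
        d2 = 6*(1-s)*(p2-2*p1+p0) + 6*s*(p3-2*p2+p1)
        return d1, d2

    # step 3: tangents now and a look-ahead S = tau * speed along the curve
    length = np.linalg.norm(p3)                    # crude arc-length estimate
    s_ahead = min(tau * speed / length, 1.0)
    d1_now, d2_now = deriv(0.0)
    d1_ahead, _ = deriv(s_ahead)
    t_now = d1_now / np.linalg.norm(d1_now)
    t_ahead = d1_ahead / np.linalg.norm(d1_ahead)

    # steps 4-7: orientation change over the look-ahead -> Orient_Rate
    axis = np.cross(t_now, t_ahead)
    n_axis = np.linalg.norm(axis)
    angle = np.arccos(np.clip(np.dot(t_now, t_ahead), -1.0, 1.0))
    orient_rate = (axis / n_axis) * (angle / tau) if n_axis > 1e-9 else np.zeros(3)

    # steps 8-11: curvature -> required centripetal acceleration -> Force_Rate
    curvature = np.linalg.norm(np.cross(d1_now, d2_now)) / np.linalg.norm(d1_now)**3
    accel_req = curvature * speed**2
    force_rate = force_rate + ((accel_req - accel_meas) / speed) * dt

    # step 12: net body rate demand (Force_Rate on the pitch axis here)
    return orient_rate + np.array([0.0, force_rate, 0.0]), force_rate
```

Step 13 would then compute the throttle from the steady-state model plus a P term, and step 14 would hand the rate demand to the inner-loop rate controller.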

Video demonstration of the system:

The video on the right shows the simulator view and the video on the left shows the reference trajectory and the plane's pose in rviz. The reference trajectory starts above the plane and is thus not visible until I rotate the camera towards it. The trajectory points are represented by cylinders with a 2-meter diameter. The speed controller is supposed to hold the speed at 255 Kph (~160 Mph) and lower it to 90 Kph for landing. In full-screen mode, you should be able to see the speed, altitude, and other metrics related to the plane itself at the top right of the simulation window.

### Future work:

This system has a lot of unsolved issues which I will resolve in the future:

1. Right now I use a hard “if” condition to deal with the normal vector flipping when there isn’t much g-force requirement (roughly a straight line). This can lead to rapid flipping behavior in some cases.

2. Speed control could use the global pitch angle to calculate the required change in throttle because the steady-state model currently only considers the horizontal steady-state speed.

3. The look-ahead distance is fixed and quite large. I haven’t tried shorter distances, in favor of stability. I also provide no principled way of knowing what the right look-ahead distance should be.

There are likely a few more issues that I haven’t even faced yet. I see this as an open-ended research problem. The code for this implementation is currently unavailable (if you're a potential Ph.D. advisor or looking for Ph.D. students and would like to see the code, you can send me an email at sidharthtalia[at]ymail[dot]com).

In case you are wondering why this system wasn’t everyone else’s first guess:

1. A conventional system converts pose requirements to force requirements first, which are then converted to orientation requirements, which are then converted to rate requirements. The decoupled nature of such a system makes it easier to debug and build in a principled manner. On the other hand, my control system jumps from the goal pose to the body-frame rate requirements directly. I crashed hundreds of times during the development of this system. Thankfully, since I am only doing it in simulation, it doesn’t matter.

2. Even though simulation has been used in the past for the development and testing of flight control systems, these simulators were often only accessible to people who were already part of academia. Now that we have these simulators available for the general public, anyone can try out their crazy ideas on them.

The second reason is part of why I like the MuSHR project, and why I like working on constrained systems. Lowering the cost of entry to robotics has a bigger impact than just someone being able to do robotics on the cheap. Democratizing robotics allows us to expose currently untapped potential to the world.