The PD controller from subsection 4.4.2 has the form

$$\tau = -K_p\,\tilde{q} - K_d\,\omega,$$

where $\tilde{q}$ is the vector part of the attitude error quaternion and $\omega$ is the angular velocity. This can be written as

$$\tau = -K_d\,s,$$

where $s$ is given by

$$s = \omega + \lambda\,\tilde{q}, \qquad \lambda = K_p/K_d.$$
The two gains to be adjusted are thus $K_d$ and $\lambda$. For constant values of $K_d$, $\lambda$ behaves like the proportional gain. Due to the composite gain structure, an increase in $K_d$ not only increases the derivative gain but also increases the effective proportional gain $K_d\lambda$. To increase the derivative gain without increasing the effective proportional gain, $\lambda$ must be reduced as $K_d$ is increased.
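As a minimal sketch of this coupling, assuming a composite controller of the form $\tau = -K_d(\omega + \lambda\tilde{q})$ (the function name `pd_attitude_control` and the exact sign conventions are illustrative, not taken from Subsection 4.4.2):

```python
import numpy as np

def pd_attitude_control(q_err, omega, K_d, lam):
    """Composite-gain PD attitude controller (illustrative sketch).

    q_err : vector part of the attitude error quaternion
    omega : angular velocity estimate (rad/s)
    K_d   : derivative gain
    lam   : lambda; K_d * lam is the effective proportional gain
    """
    s = omega + lam * q_err   # composite error signal
    return -K_d * s           # commanded torque

# Doubling K_d doubles both the derivative gain and the effective
# proportional gain (K_d * lam); raising the damping alone therefore
# requires reducing lam at the same time.
```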
An increase in the effective proportional gain initially reduces regulation errors; however, if this gain is increased beyond a certain boundary, limit cycling results. The limit cycling is most pronounced about the vehicle roll axis, the axis with the lowest moment of inertia. A possible cause of this limit cycling is interaction between the controller and the attitude estimator: rotations about the roll axis create angular accelerations that perturb the measurement of the gravity vector, which the vehicle assumes to point down (Subsections 4.2.1 and 4.3.3). This causes errors in the estimated vehicle attitude, which in turn produce control errors. None of the gain sets tested resulted in unstable operation in attitude hold, although very large gains produced severe limit cycling.
If the effective derivative gain is too large, the vehicle begins to experience high-frequency thrust perturbations. Examination of the signals within the controller has shown that this thruster noise is caused by increased sensitivity to noise in the angular velocity estimate. This sensitivity limits the level of damping that can be added to the system.
The goal of the PD tuning was to determine a set of gains that produce relatively small steady-state regulation errors with no significant limit cycling. To this end, the vehicle was commanded to hold a constant attitude aligned with the tank axes using the quaternion-based PD controller. First, a value for $K_d$ was chosen. With the controller using this value for the derivative gain, the proportional gain was then increased by raising $\lambda$ until limit cycling was observed. This process was repeated for several values of $K_d$. The results are shown in Figure 5-3. Initially, an increase in $\lambda$ causes a reduction in regulation error; however, if $\lambda$ is increased beyond a certain boundary, limit cycling results. Thus for each value of $K_d$ chosen, a maximum value for $\lambda$ may be determined, and for that pair of gains the average regulation error may be observed.
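The sweep described above can be sketched as a simple search loop. The helper `run_trial` is hypothetical; it stands in for one attitude-hold experiment and is assumed to report whether limit cycling occurred and the average regulation error:

```python
def sweep_gains(K_d_values, run_trial, lam_step=1.0):
    """For each derivative gain K_d, raise lambda until limit cycling.

    run_trial(K_d, lam) -> (limit_cycling: bool, avg_error_deg: float)
    is a hypothetical stand-in for a single attitude-hold experiment.
    """
    results = []
    for K_d in K_d_values:
        lam, err_at_max = 0.0, None
        # step lambda upward until the next step would limit cycle
        while True:
            cycling, err = run_trial(K_d, lam + lam_step)
            if cycling:
                break
            lam, err_at_max = lam + lam_step, err
        # record the largest stable lambda and its regulation error
        results.append((K_d, lam, err_at_max))
    return results
```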
Figure 5-3 Maximum $\lambda$ and the corresponding average attitude error as a function of $K_d$
Notice that the minimum regulation error occurs when $K_d = 2.2$. At this point, the maximum value for $\lambda$ is 35, with an average attitude error of 0.7°.
The angular error could be reduced even further if the proportional gain could be increased. Unfortunately, increasing $\lambda$ beyond the boundary indicated by the dashed line in Figure 5-3 results in limit cycles due to interaction between the estimator and the controller. One possible approach is a new gain strategy that reduces the proportional gain as the magnitude of the angular error increases, "softening" the response away from equilibrium enough to reduce the susceptibility to limit cycling. The gain remains unmodified near the desired attitude, thus maintaining low regulation error. The gain modification creates a new proportional gain, $\lambda'$, as a function of the attitude error.
The algorithm then tests to see if $\lambda'$ is less than $\lambda/5$; if so, it sets $\lambda' = \lambda/5$, so that $\lambda'$ is reduced to a minimum of $\lambda/5$.
This function is shown in Figure 5-4. The minimum occurs when the desired attitude is greater than approximately 5° from the current vehicle attitude.
Figure 5-4 Gain modification strategy intended to reduce susceptibility to limit cycling
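One way to realize such a gain schedule is sketched below. The linear taper is an assumption for illustration; the text fixes only the floor at one fifth of the nominal gain and the roughly 5° breakpoint, and the name `modified_lam` is not from the source:

```python
def modified_lam(lam, err_deg, err_knee_deg=5.0):
    """Scheduled proportional gain (illustrative sketch).

    Tapers the nominal gain lam as the attitude error grows
    (linear taper assumed) and clamps it at lam / 5 once the
    error exceeds roughly err_knee_deg degrees.
    """
    scale = 1.0 - (err_deg / err_knee_deg) * (1.0 - 0.2)
    return lam * max(scale, 0.2)   # floor at one fifth of nominal

# Near the setpoint the gain is unmodified, preserving low regulation
# error; beyond ~5 degrees of error it holds at lam / 5.
```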
To test this modified gain strategy, the previous experiments were repeated: a value for $K_d$ was chosen, and $\lambda$ was then increased until limit cycling occurred. The results are shown in Figure 5-5.
The new feedback strategy appears to allow higher gains without limit cycling, and moreover the regulation error is reduced to about 0.3°, a factor of 2.3 improvement. The gain set producing this new minimum regulation error is $\lambda = 68$, $K_d = 1$. This gain modification strategy was used for all of the remaining data in this chapter.
Figure 5-5 Comparison of maximum $\lambda$ and the corresponding average attitude error as a function of $K_d$, with and without the gain modification strategy. (The data points on the solid lines were recorded using the gain values designated by ×'s on the limit cycle boundary.)
Figure 5-6 Attitude regulation error vs. time using the gain modification strategy ($\lambda = 68$, $K_d = 1$)