The dynamics of an underactuated ship are

The coordinate transformations

are first used to transform the ship system to

where and . Then the following nontrivial coordinate transformations

are applied to the transformed ship system to obtain the new system:

Let and be strictly positive constants such that . The control and are designed in the paper as

where

It is proven in the paper that the closed-loop system is GAS at the origin.

**Remark:** The stabilizer design above is nontrivial; it is difficult to extend the control design scheme to a trajectory-tracking problem, and the convergence of the stabilizing errors to zero is slow. In addition, the physical meaning of the feedback is not clear.

```
@inproceedings{mazenc2002global,
…
```

A nonlinear dynamic system can usually be represented by a set of nonlinear differential equations in the form

It is directly applicable to feedback control systems, because the equation can represent the closed-loop dynamics of a feedback control system: the control input is a function of the state and time and therefore disappears in the closed-loop dynamics.

If the plant dynamics is

and some control law has been selected

then the closed-loop dynamics is

Linear systems are a special class of nonlinear systems. The dynamics of linear systems are of the form

where is an matrix.

**Definition:** The nonlinear system is said to be *autonomous* if does not depend explicitly on time, i.e., if the system’s state equation can be written

Otherwise, the system is called *non-autonomous*.

**Definition:** A state is an *equilibrium state* (or *equilibrium point*) of the system if once is equal to , it remains equal to for all future time.

**Definition:** The equilibrium state is said to be *stable* if, for any , there exists , such that if , then for all . Otherwise, the equilibrium point is *unstable*.

**Definition:** An equilibrium point is *asymptotically stable* if it is stable, and if in addition there exists some such that implies that as .

**Definition:** An equilibrium point is *exponentially stable* if there exist two strictly positive numbers and such that

in some ball around the origin.
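For reference, the elided bound has the standard textbook form (with strictly positive constants $\alpha$ and $\lambda$, as in Slotine and Li):

```latex
\exists\,\alpha,\lambda > 0 \;\text{such that}\; \forall t > 0:\qquad
\lVert x(t) \rVert \;\le\; \alpha\,\lVert x(0) \rVert\, e^{-\lambda t}
```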

In words, the equation means that the state vector of an exponentially stable system converges to the origin faster than an exponential function. The positive number is often called the rate of exponential convergence.

**Definition:** If asymptotic (or exponential) stability holds for any initial state, the equilibrium point is said to be asymptotically (or exponentially) stable *in the large*. It is also called *globally* asymptotically (or exponentially) stable.

**Definition:** A scalar continuous function is said to be *locally positive definite* if and, in a ball

If and the above property holds over the whole state space, then is said to be *globally positive definite*.

**Definition:** If, in a ball , the function is positive definite and has continuous partial derivatives, and if its time derivative along any state trajectory of system is negative semi-definite, i.e.,

then is said to be a *Lyapunov function* for the system .

**Theorem:(Local Stability)** If, in a ball , there exists a scalar function with continuous first partial derivatives such that

- is positive definite (locally in )
- is negative semi-definite (locally in )

then the equilibrium point is stable. If, in fact, the derivative is locally negative definite in , then the stability is asymptotic.

**Theorem:(Global Stability)** Assume that there exists a scalar function of the state , with continuous first order derivatives such that

- is positive definite
- is negative definite
- as .

Then the equilibrium at the origin is globally asymptotically stable.
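As a concrete sanity check (an example chosen here, not taken from the text), consider $\dot{x} = -x^3$ with $V(x) = x^2$: then $\dot{V} = -2x^4 \le 0$ and $V$ is radially unbounded, so the theorem gives global asymptotic stability. A short simulation confirms that $V$ decreases along trajectories:

```python
# Sketch: verify numerically that V(x) = x**2 decreases along trajectories
# of x' = -x**3 (example system chosen for illustration).

def simulate(x0, dt=1e-3, steps=20000):
    """Forward-Euler integration of x' = -x**3; returns (x_final, V history)."""
    x = x0
    vs = [x * x]
    for _ in range(steps):
        x += dt * (-x ** 3)
        vs.append(x * x)
    return x, vs

x_final, vs = simulate(2.0)
assert all(b <= a for a, b in zip(vs, vs[1:]))  # V is non-increasing
assert abs(x_final) < 0.25                      # state approaches the origin
```

Note that the convergence here is asymptotic but not exponential, since $-x^3$ vanishes faster than linearly near the origin.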

**Definition:** A set is an invariant set for a dynamic system if every system trajectory which starts from a point in remains in for all future time.

**Theorem:(Local Invariant Set Theorem)** Consider an autonomous system of the form , with continuous, and let be a scalar function with continuous first partial derivatives. Assume that

- for some , the region defined by is bounded.
- for all in .

Let be the set of all points within where , and be the largest invariant set in . Then, every solution originating in tends to as .

**Theorem:(Global Invariant Set Theorem)** Consider an autonomous system of the form , with continuous, and let be a scalar function with continuous first partial derivatives. Assume that

- as .
- over the whole state space.

Let be the set of all points where , and be the largest invariant set in . Then, all solutions globally asymptotically converge to as .

**Theorem:(Krasovskii)** Consider the autonomous system defined by , with the equilibrium point of interest being the origin. Let denote the Jacobian matrix of the system, i.e.,

If the matrix is negative definite in a neighborhood , then the equilibrium point at the origin is asymptotically stable. A Lyapunov function for this system is

If is the entire space and, in addition, as , then the equilibrium point is globally asymptotically stable.
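In explicit form (the standard statement of Krasovskii's theorem), the Lyapunov function and its derivative along trajectories are, with $A(x)$ the Jacobian:

```latex
V(x) = f(x)^{T} f(x), \qquad
\dot V(x) = f(x)^{T}\bigl(A(x) + A(x)^{T}\bigr) f(x),
```

so negative definiteness of $A(x) + A(x)^{T}$ gives $\dot V < 0$ wherever $f(x) \neq 0$.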

**Lemma:(Barbalat)** If the differentiable function has a finite limit as , and if is uniformly continuous, then as .

**Lemma:(“Lyapunov-Like Lemma”)** If a scalar function satisfies the following conditions

- is lower bounded
- is negative semi-definite
- is uniformly continuous in time

then as .

The kinematic model, which describes the geometrical relationship between the earth-fixed (I-frame) and vehicle-fixed (B-frame) motion, is given as

The dynamic equations of motion of the vehicle can be expressed in the B-frame as

The following simplified model can be obtained by assuming that both the inertia matrix and the damping matrix are constant and diagonal:

**Remark:** Rewrite

Define the state variables

so that the state equations are given by

where and

Consider the subsystem and let be the control variables:

Consider the reduced system, restricting consideration to , and apply the process

to obtain

The feedback control law

where , and and are the gains, yields the reduced closed-loop system

The dynamics can be rewritten as

where

**Remark:**

If , the spectrum of the matrix can be assigned arbitrarily through the gain matrix .

The matrix goes to zero as and

The dynamics can also be rendered globally exponentially stable at the origin by selecting such that the matrix is a Hurwitz matrix.*(Applied Nonlinear Control, Slotine and Li, Section 4.2.2)*

Perturbed linear systems

Consider a linear time-varying system of the form

where the matrix is constant and Hurwitz (i.e., has all eigenvalues strictly in the left-half plane), and the time-varying matrix is such that as and

Then the system is globally exponentially stable.
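This result can be illustrated numerically; the system below is chosen here for demonstration (not from the paper): $\dot{x} = (A + B(t))x$ with a Hurwitz $A$ and a perturbation $B(t)$ that vanishes exponentially.

```python
import math

# Sketch: x' = (A + B(t)) x with A = [[-1, 0], [0, -2]] (Hurwitz) and
# B(t) = e^{-t} * [[0, 1], [1, 0]], a perturbation vanishing as t -> inf.
# The theorem predicts global exponential stability.

A = [[-1.0, 0.0], [0.0, -2.0]]

def B(t):
    return [[0.0, math.exp(-t)], [math.exp(-t), 0.0]]

def step(x, t, dt):
    """One forward-Euler step of x' = (A + B(t)) x."""
    Bt = B(t)
    m = [[A[i][j] + Bt[i][j] for j in range(2)] for i in range(2)]
    dx = [sum(m[i][j] * x[j] for j in range(2)) for i in range(2)]
    return [x[i] + dt * dx[i] for i in range(2)]

x, t, dt = [1.0, -1.0], 0.0, 1e-3
for _ in range(10000):            # integrate out to t = 10
    x = step(x, t, dt)
    t += dt

norm = math.hypot(x[0], x[1])
assert norm < 1e-3                # exponential decay despite the perturbation
```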

Restricting consideration to , consider the following controller:

where

while avoiding the set

Consider the coordinate transformation

It can be shown that in the above coordinates the closed-loop system can be written as

**Remark:**

where

**Remark:** where

The dynamics is globally exponentially stable at . Moreover, it can be easily shown that if , then and go to zero as and

Thus, for any initial condition satisfying and , both the trajectory and the control are bounded for all and converge exponentially to zero.

```
@article{reyhanoglu1997exponential,
…
```

- Structured (or parametric) uncertainties correspond to inaccuracies in the terms actually included in the model.
- Unstructured uncertainties (or unmodeled dynamics) correspond to inaccuracies in (i.e., underestimation of) the system order.

Consider a nonlinear system

where the parameter is unknown; we assume that is constant.

Objective: as

For this system the adaptive controller design is in two steps.

We introduce and and consider as a control to stabilize the -system with respect to the Lyapunov function

The -system and the corresponding are:

let , and if

Objective:

Substitute and

we obtain:

where the term should be canceled.

With , the original system has been transformed into

Choose Lyapunov Function

Objective:

where

Consider a nonlinear system

we assume that the unknown parameter belongs to a known interval

Objective: as

For this system the robust controller design is in two steps.

To apply backstepping, we design a -controller for

Choose Lyapunov function

Several different designs are possible:

where is a smooth switching function.

We now employ backstepping to design -controllers for the second-order system.

Choose Lyapunov function:

Choose

and we have

Objective:

We should choose to satisfy:

- When ,
- When ,

where are the states, is the control input, and are assumed to be known.

Objective: as

Objective 1:

Objective 2:

Subsystem , where

Choose Lyapunov Function

Add and subtract

Regroup

Objective: find a control stabilizing the system at

Lyapunov Candidate:

Let , with

Actual Control Law is

- The designer can start the design process at the known-stable system and “back out” new controllers that progressively stabilize each outer subsystem.
- Any strict-feedback system can be feedback stabilized using a straightforward procedure.
- Backstepping design doesn’t require a differentiator.
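As a minimal worked example of the procedure (a textbook double integrator, not the vehicle model above), take $\dot{x}_1 = x_2$, $\dot{x}_2 = u$ with gains $k_1, k_2 > 0$:

```latex
% Step 1: virtual control for the x_1-subsystem, with V_1 = x_1^2 / 2:
\alpha(x_1) = -k_1 x_1
% Step 2: error variable z = x_2 - \alpha(x_1) and augmented function:
V_2 = V_1 + \tfrac{1}{2} z^2, \qquad
\dot V_2 = -k_1 x_1^2 + z\,\bigl(x_1 + u + k_1 x_2\bigr)
% Step 3: the choice u = -x_1 - k_1 x_2 - k_2 z yields
\dot V_2 = -k_1 x_1^2 - k_2 z^2 < 0 \quad (x \neq 0)
```

so the origin is globally asymptotically stable; note that $\dot{\alpha} = -k_1 x_2$ is known analytically, which is why no differentiator is needed.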

Artificial neural networks (ANNs) are computational models loosely inspired by their biological counterparts. Artificial neurons, the elementary units of an ANN, appear to have been first introduced by McCulloch and Pitts in 1943, who used algorithms and mathematics to model biological neurons. An ANN is based on a set of connected artificial neurons. Each connection, like a synapse in a biological brain, can transmit a signal from one artificial neuron to another. Since the early work of Hebb, whose cell assembly theory attempted to explain synaptic plasticity, considerable effort has been devoted to ANNs. Although the perceptron was created by Rosenblatt for pattern recognition by training an adaptive McCulloch-Pitts neuron model, it is incapable of representing the exclusive-or function. A key trigger for renewed interest in neural networks and learning was Werbos’s back-propagation algorithm, which makes the training of multi-layer networks feasible and efficient.

The Multi-Layer Perceptron (MLP) is a feedforward neural network and has been the most widely used type of neural network. The Error Back-Propagation (BP) algorithm is implemented as the training method for MLPs.

An MLP is a fully connected feedforward neural network trained with supervised learning. It consists of an input layer, an output layer, and at least one hidden layer. The fundamental structure of an MLP is shown schematically in Fig. 1:

In the input layer, the neurons simply pass the information on, i.e., the activation functions of this layer are identity functions. Each neuron in the input layer is fully connected to the first hidden layer, so each of these neurons has multiple outputs. Given neurons in the input layer and neurons in the hidden layer, each connection is assigned a weight . A bias term is added to the weighted sum of the inputs to serve as a threshold that shifts the activation function. The propagation function, which computes the input to a hidden-layer neuron from the outputs of its predecessor neurons, has the form

where denotes the bias term.

The activation functions, which define the outputs of the nodes, are chosen in this case as the hyperbolic tangent and the softsign. Their equations, plots, and derivatives are given in TABLE 1.
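The two activation functions and their derivatives can be sketched in pure Python; the derivative formulas below are the standard ones for tanh and softsign:

```python
import math

# Sketch of the two activation functions named above and their derivatives,
# as back-propagation will need them.

def tanh(x):
    return math.tanh(x)

def dtanh(x):
    return 1.0 - math.tanh(x) ** 2       # d/dx tanh(x) = 1 - tanh(x)^2

def softsign(x):
    return x / (1.0 + abs(x))

def dsoftsign(x):
    return 1.0 / (1.0 + abs(x)) ** 2     # d/dx softsign(x) = 1 / (1 + |x|)^2

assert abs(dtanh(0.0) - 1.0) < 1e-12     # both have unit slope at the origin
assert abs(dsoftsign(0.0) - 1.0) < 1e-12
# Finite-difference check of both derivatives at a sample point:
h, x0 = 1e-6, 0.7
assert abs((tanh(x0 + h) - tanh(x0 - h)) / (2 * h) - dtanh(x0)) < 1e-6
assert abs((softsign(x0 + h) - softsign(x0 - h)) / (2 * h) - dsoftsign(x0)) < 1e-6
```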

The MLP scheme essentially has three features:

- There are no connections between neurons within the same layer, and neurons have no connections to themselves.
- Full connections exist only between adjacent layers.
- It consists of two aspects: the feed-forward transmission of information and the backward transmission of error.

Back-propagation is a method used in ANNs to calculate the gradient needed to update the weights between layers, and it is commonly used with the gradient descent optimization algorithm. Calculating the weight adjustments in Fig. 1 is chosen as an example to introduce the BP algorithm. We denote a parameter of the -th layer by the notation .

To calculate the weights between the hidden layer and the output layer, the loss function is defined as:

where is the expected output and is the actual output. Based on the gradient descent algorithm with learning rate , which controls how much the algorithm adjusts the weights of the network with respect to the loss gradient, and the input of the activation function , we have

with

and we have

where denotes the error of the output layer; thus, the adjustment of $w$ is:

Consider calculating the weights between the input layer and the hidden layer:

with

and

where

thus

In summary, the equations describing the adjustment of the weights are:

where for the output layer is:

and for the hidden layer and the input layer is

respectively.

The process of training MLP with BP algorithm is shown in Algorithm 1.

```latex
\begin{algorithm}
…
```
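The whole training loop can be sketched in pure Python for a one-hidden-layer network; the hidden size, learning rate, and target function $y = \sin(x)$ below are illustrative assumptions, not the report's settings:

```python
import math, random

# Sketch of an MLP trained with back-propagation: one hidden layer of tanh
# units, linear output, per-sample gradient descent with rate lr.
random.seed(0)

H, lr = 10, 0.05                          # hidden units, learning rate (assumed)
w1 = [random.uniform(-0.5, 0.5) for _ in range(H)]   # input -> hidden weights
b1 = [0.0] * H
w2 = [random.uniform(-0.5, 0.5) for _ in range(H)]   # hidden -> output weights
b2 = 0.0

xs = [-math.pi + i * math.pi / 4 for i in range(9)]  # 9 training samples
ys = [math.sin(x) for x in xs]

def forward(x):
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return h, sum(w2[j] * h[j] for j in range(H)) + b2

def loss():
    return sum((forward(x)[1] - y) ** 2 for x, y in zip(xs, ys)) / 2

loss0 = loss()
for _ in range(2000):                     # training epochs
    for x, y in zip(xs, ys):
        h, out = forward(x)
        delta = out - y                   # output-layer error
        for j in range(H):
            dh = delta * w2[j] * (1 - h[j] ** 2)  # error back-propagated to hidden j
            w2[j] -= lr * delta * h[j]
            w1[j] -= lr * dh * x
            b1[j] -= lr * dh
        b2 -= lr * delta

assert loss() < loss0                     # training reduced the loss
```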

In this section, we design different model structures to address curve-fitting problems.

We designed two different structures, which are shown in Fig. 2.

The structure with one hidden layer of 600 nodes was applied to fit and . For , we chose the structure with 2 hidden layers to reach better regression performance.

We trained the network with 9 samples, using as the loss function. The training process stops when . The results on 361 test samples are shown in Fig. 3:

Runtime loss $E$ and average loss are shown in Fig. 4, where the testing value evaluated by the average loss function is .

We notice that the loss drops drastically within epochs; then, to decrease the average loss from to , more than epochs are needed.

The absolute error is shown in Fig. 5:

The maximum error between the actual and predicted values is less than 0.05, which shows excellent performance when fitting .

For a more complex nonlinear curve, the fitting error of the elementary MLP structure with only one hidden layer is far from satisfactory. A hidden layer with the softsign activation function was therefore added to the basic MLP structure. Implementing this structure with 9 training samples, we arrive at the following results:

Fig. 7 shows the loss decreased over training epochs:

From the figure above, we can see that although the loss chatters within 500,000 epochs, the average loss descends at a smooth, nearly constant rate. Although a training loss below after epochs guarantees performance on the training samples, as observed in Fig. 6 the test error (more than 0.2) is unacceptable for some test samples. We assumed that a lack of training samples had caused the network to overfit, so we expanded the training set to 29 independent (x, y) samples to reach better performance.

When average loss is less than 0.005, we obtained the following result, as shown in Fig. 8:

Apparently, has a non-differentiable point at . From Fig. 9, this non-differentiable point has the maximal error compared to the other points; i.e., providing more training samples cannot strengthen the nonlinear fitting ability of a given MLP model.

Still, adding training samples improves prediction performance on the test set. In Fig. 9, although one test point has an error greater than 0.1, the absolute errors of the other test points are less than 0.02; i.e., this approach provides more robust solutions than 9 training samples with the same loss tolerance. The training loss of this approach is shown in Fig. 10.

In this section, we consider a function with two inputs. samples are given for training with the basic MLP structure. The hidden layer has 500 nodes, and training stops when . Fig. 11 shows the actual curve and the curve predicted from samples, respectively.

We plot and , varying from , in Fig. 12, from which we observe that decreases rapidly from 1.1 to 0.1 within 1000 epochs. Thus, the training process shows the efficiency and robustness of the back-propagation algorithm.

The test error is shown in Fig. 13, from which we observe that the maximal test error is less than 0.15.

In this report, we implemented a multilayer perceptron to fit three specific nonlinear functions. The simulation results illustrate the accuracy of artificial neural networks and the efficiency of the back-propagation algorithm.

```matlab
%% For y = sin(x)
…
```

Inspired by the process of natural selection and the principles of genetics, genetic algorithms are used to handle various optimization problems. The basic theorems and mathematical derivations were given by J. H. Holland. This research report is intended as a solution performing optimization on two fitness functions. The remainder of this report is structured as follows. Section 2 reviews the relevant fundamentals of the genetic algorithm, including its detailed structure and component parts. In Section 3, the maxima of the two fitness functions are given with the optimal populations, together with the detailed evolutionary processes. Conclusions are presented in Section 4.

A genetic algorithm is constructed from distinct components: chromosome encoding, the fitness function, selection, crossover, and mutation. Starting with a randomly generated population of chromosomes, a genetic algorithm carries out a process of fitness-based selection and recombination to produce a successor population, the next generation. During recombination, parent chromosomes, which are encoded as binary strings, are selected depending on their fitness. In this case, chromosomes that obtain larger function values have larger fitness, and their probability of producing offspring increases. Although crossover may decrease the fitness of some individuals, the population fitness is improved overall. Mutations ensure genetic diversity within the genetic pool of the parents and therefore the genetic diversity of the subsequent generation of children. As this process is iterated, a sequence of successive generations evolves, and the average fitness of the chromosomes tends to increase until the stopping criterion is reached. The two functions to be optimized are defined as follows:

Binary encoding is implemented to represent each chromosome as a string of bits, 0 or 1. A bit string with 10 genes provides a solution space of individuals. Taking function as an example, its domain is , and standard binary encoding can only represent integers, which cannot satisfy the required accuracy. Thus, a linear mapping from the domain to is introduced. A binary string with 18 genes then provides distinct individuals. In conclusion, a chromosome with 18 genes and a chromosome with 15 genes are adopted to encode the domains of the two functions, respectively.
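The linear decoding step can be sketched as follows; the bounds $[a, b] = [-3, 3]$ are a stand-in, since the actual domain values did not survive in the text:

```python
# Sketch of the linear mapping described above: an n-bit chromosome is
# mapped to 2**n evenly spaced points in [a, b]. The domain [-3, 3] is
# an illustrative assumption.

def decode(bits, a, b):
    """Map a binary chromosome (list of 0/1 genes) linearly into [a, b]."""
    n = len(bits)
    k = int("".join(map(str, bits)), 2)       # integer in [0, 2**n - 1]
    return a + (b - a) * k / (2 ** n - 1)

assert decode([0] * 18, -3.0, 3.0) == -3.0    # all-zero string -> lower bound
assert decode([1] * 18, -3.0, 3.0) == 3.0     # all-one string  -> upper bound
```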

Roulette-wheel selection is a genetic operator for selecting potentially useful solutions for recombination. In roulette-wheel selection, the fitness function assigns a fitness to each generated individual. Dividing an individual's fitness by the total fitness of all individuals gives the probability that it is selected as a parent and produces offspring. If is the fitness of individual in the population, its probability of being selected is

where is the number of individuals in the population. This process is presented in Algorithm 1:

```latex
\begin{algorithm}
…
```
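A minimal sketch of the selection step (the population and seed below are illustrative):

```python
import random

# Sketch of roulette-wheel selection: individual i is drawn with
# probability fitness[i] / sum(fitness) (fitnesses assumed non-negative).

def roulette_select(fitness, rng=random):
    total = sum(fitness)
    r = rng.uniform(0.0, total)   # spin the wheel
    acc = 0.0
    for i, f in enumerate(fitness):
        acc += f
        if r <= acc:
            return i
    return len(fitness) - 1       # guard against rounding at the top end

random.seed(1)
fit = [1.0, 3.0, 6.0]             # selection probabilities 0.1, 0.3, 0.6
counts = [0, 0, 0]
for _ in range(10000):
    counts[roulette_select(fit)] += 1
# Empirically, selection frequencies track the fitness proportions.
assert counts[2] > counts[1] > counts[0]
```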

Crossover combines the genetic information of two parents to generate new offspring; here, bit strings are recombined. Uniform crossover, which uses a binary mask, makes the crossover more flexible. A random crossover mask of the same length as the parents' chromosomes is chosen; a gene is copied from one parent where the mask contains and from the other parent where the mask contains . Thus, offspring contain a mixture of genes from each parent. In this case, the probability of crossover is .

Mutation is used to maintain and introduce diversity in the genetic population. Bit-flip mutation is implemented in our solutions: a certain number of bits is selected and flipped. Mutation happens with a low frequency; the probability of mutation is selected between 0.001 and 0.01. In this case, mutation occurred once per iteration on 2 random genes, in a population of 288 genes.
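The two operators can be sketched together (the mutation probability here is illustrative):

```python
import random

# Sketch of uniform crossover with a random binary mask, plus bit-flip
# mutation, as described above.

def uniform_crossover(p1, p2, rng=random):
    mask = [rng.randint(0, 1) for _ in p1]
    c1 = [a if m else b for a, b, m in zip(p1, p2, mask)]
    c2 = [b if m else a for a, b, m in zip(p1, p2, mask)]
    return c1, c2

def bit_flip(chrom, p_mut=0.01, rng=random):
    """Flip each gene independently with probability p_mut (assumed value)."""
    return [g ^ 1 if rng.random() < p_mut else g for g in chrom]

random.seed(2)
p1, p2 = [0] * 18, [1] * 18
c1, c2 = uniform_crossover(p1, p2)
# With all-0 and all-1 parents, the two children are complementary under
# the same mask: every gene position holds one 0 and one 1.
assert all(a + b == 1 for a, b in zip(c1, c2))
```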

The genetic algorithm can be presented as follows.

```latex
\begin{algorithm}
…
```

The proposed algorithm has been applied to and ; the solution involves finding the maximum and its corresponding location. The evolutionary processes are also presented in detail.

Function , whose domain is , attains its maximum value when ; nevertheless, cannot be zero. Under these circumstances, the fitness function used in the genetic algorithm is defined by the following equation:

where is a small positive real number.

Because the roulette-wheel algorithm requires positive input, which the values calculated by the fitness function may not satisfy, we assume that parents with negative fitness cannot produce offspring; i.e., if , let . The algorithm parameters are listed in TABLE 1.

Parameter Name | Value
---|---
Iteration | 600
Individuals Count | 8, 8 for x, y
Genes Count | 18 per individual
Probability of Crossover | 60%
Probability of Mutation | 0.6%

The genetic algorithm with 600 iterations finished within 0.133901 seconds on average, and the highest fitness value achieved is 0.9998. The maximal fitness values are recorded at the 100th, 200th, and 300th iterations and at the final iteration. Data between the 300th and final iterations are omitted because the fitness remains practically the same. The evolutionary process of the 8 groups of individuals is shown in Fig. 1; the maximal fitness and its corresponding individuals are shown in the table of maximal fitness and groups below, and marked on the curve of in Fig. 2.

N_th iteration | Optimal Groups | Optimal Fitness
---|---|---
100 | x=-0.3374, y=2.3562 | 0.2944
200 | x=-0.1838, y=0.6664 | 0.9224
300 | x=-0.1836, y=-0.1016 | 0.9904
600 | x=-0.0001, y=-0.0352 | 0.9998

In Fig. 1, the 8 randomly generated groups, whose initial fitnesses are around , reached a fitness of 0.9998 on average at the 230th iteration. Analyzing the variation of each group, we notice that the improvement of the population's average fitness depends on mutations occurring in isolated groups, which then elevate the whole population through crossover. This process can be recognized at the 70th, 110th, and 260th iterations in Fig. 1.

The function is defined as follows:

where the function needs no tweaking to satisfy the requirements of the genetic algorithm except for its domain. and range over , and are treated as floating-point numbers with 4 decimal places. That is, we can use chromosomes with 15 bits, which provide 32768 individuals, to encode the domain.

Unlike function , which attained its maximal value at $(0, 0)$, $f_2$ attains its maximal value at the “border” of the domain. Evidently, $f_2$ is symmetric with respect to the x-axis and the y-axis, so we investigate the positive quadrant. To achieve the maximum, both x and y shall be the largest numbers in the domain which satisfy

Thus, randomly generated individuals in the range must be rejected to ensure correctness. In light of this, the following statements shall be inserted between lines 12 and 13 in Algorithm 2:

```latex
\begin{algorithm}[!htbp]
…
```

The algorithm parameters are listed in TABLE 3.

Parameter Name | Value
---|---
Iteration | 1200
Individuals Count | 8, 8 for x, y
Genes Count | 15 per individual
Probability of Crossover | 60%
Probability of Mutation | 0.8%

The genetic algorithm with 1200 iterations finished within 0.2818147 seconds on average; the highest fitness value achieved was 3.6996, as shown in Fig. 3:

In contrast to the fixed-step logging strategy for , a variable-step strategy was applied to log the evolutionary progress in this section. If a gap exceeding a certain threshold exists between the maximal fitness of the current population and that of the last population, we record the location that attains the first maximal fitness after the gap, where M is the gap threshold and i is the iteration after the gap. By connecting these nodes, we obtain the evolutionary progress, EP for short. A sample result with 4 nodes is listed in TABLE 4:

N_th iteration | Optimal Groups | Optimal Fitness
---|---|---
4 | x=1.6486, y=-0.4975 | 1.6890
15 | x=1.8262, y=1.2434 | 2.5558
41 | x=1.8514, y=1.4544 | 3.2901
157 | x=1.8512, y=1.8498 | 3.6996

Fig. 4 shows a sample progress obtained by connecting . As the figure shows, the evolutionary progress starting at does not chatter much; hence this sample is considered an efficient evolutionary progress.

This report has presented solutions to a function optimization problem, and we make some general observations on genetic algorithms. The evolutionary progress demonstrates that a genetic algorithm can be highly efficient but may lack robustness, since the stopping criterion is not clear in every problem. Moreover, we noticed that genetic algorithms tend to converge toward local optima: an evolutionary progress may get stuck at certain points for hundreds of iterations, depending on random mutations. Another problem is that we cannot simply deploy parallel computing to accelerate the evolution, because each new population is generated from the last one, which imposes a sequential dependency.

```matlab
clear
…
```