• Welcome! The TrekBBS is the number one place to chat about Star Trek with like-minded fans.
    If you are not already a member then please register an account and join in the discussion!

Optimal Control

Daedalus12

Rear Admiral
It's one of my favorite subjects from grad school, so I thought I'd talk about it a bit in this forum. I am fucking bored and I've got some time to kill before spring break, so here it is.

We start with Part 1, The Birth of Optimal Control: the Brachystochrone Problem and the Calculus of Variations.

The birth of optimal control theory can arguably be traced back to Johann Bernoulli in 1696. Bernoulli was a professor of mathematics at the University of Groningen in the Netherlands. At the time there was an interesting problem (by interesting I mean unsolved) called the Brachystochrone problem. The problem is as follows: find the path that minimizes the time it takes a point mass m to slide, under the influence of gravity (pointing downwards), from a point A to a lower point B displaced horizontally from it.

[Figure: a point mass m sliding under gravity along a curve from A down to B]


Bernoulli presented this problem as a challenge to all of his contemporary mathematicians. When the deadline finally passed, Johann had six solutions in hand: one from himself and the other five from Newton, Leibniz, L'Hopital, his own brother Jakob, and Tschirnhaus. All of them were correct. It should be noted that Newton submitted his results to the Royal Society anonymously and without proof; however, the elegance and beauty of the paper led Johann to conclude ex ungue leonem (you can tell the lion by its claw).

Johann's own solution to the Brachystochrone problem is derived from an optical analogy, treating the falling mass like a light ray obeying the law of refraction. More importantly, the challenge spurred a period of intense study of variational problems (the Brachystochrone being one of them), which led to the birth of the calculus of variations. Johann's Swiss student Euler (you might've heard of this guy) and the famous French-Italian mathematician Lagrange (you might've heard of him too) were the key figures in that development.

The classic calculus of variations problem can be stated as follows: find the function q(t), with the specified boundary conditions, such that the cost functional J is minimized.
\[ J[q] = \int_{t_0}^{t_f} L(q(t), \dot{q}(t), t)\,dt, \qquad q(t_0) = q_0,\; q(t_f) = q_f \]


The Euler-Lagrange equation gives the condition satisfied by the solution of the classic problem above.
\[ \frac{d}{dt}\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = 0 \]


I shall skip over the derivation of the Euler-Lagrange equation; it's fairly straightforward and uses the principle of least action, i.e. a necessary condition for a minimum is that the first-order variation of J be zero.
\[ \delta J = 0 \]


Now we look at the Brachystochrone problem in more detail. I am going to show that it can be solved using the more general method, i.e. the calculus of variations. Take the plane of the trajectory to be the x-y plane, with y measured downward from A, and let the mass start from rest. Now consider the conservation of energy equation.
\[ \tfrac{1}{2} m v^2 = m g y \]


In terms of x, y and their time derivatives, the squared velocity of the mass m is simply
\[ v^2 = \dot{x}^2 + \dot{y}^2 \]


hence the conservation of energy equation becomes
\[ \tfrac{1}{2} m (\dot{x}^2 + \dot{y}^2) = m g y \]

Factor out dx and solve for the time differential dt to get the following
\[ dt = \sqrt{\frac{1 + (y')^2}{2 g y}}\,dx \]


Now, since the problem's goal is to minimize the travel time, the cost functional to be minimized is
\[ J = T = \int_0^{x_f} \sqrt{\frac{1 + (y')^2}{2 g y}}\,dx \]


which is exactly in the form of the classic calculus of variations problem above, i.e. we want to minimize the following
\[ J = \int_0^{x_f} L(y, y', x)\,dx, \qquad L(y, y', x) = \sqrt{\frac{1 + (y')^2}{2 g y}} \]


hence we will use the Euler-Lagrange equation of the following form to solve the problem
\[ \frac{d}{dx}\frac{\partial L}{\partial y'} - \frac{\partial L}{\partial y} = 0 \]


Substituting L(y, y', x) into the Euler-Lagrange equation above, we get the following nonlinear ordinary differential equation.
\[ 1 + (y')^2 + 2 y y'' = 0 \]

The solution trajectory of the ODE above is a cycloid.
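Just for fun, here's a quick Python sketch (my own toy numbers, not part of the original derivation) that checks the cycloid really does beat a straight ramp between the same two points. It uses the standard cycloid parametrization x = R(θ − sin θ), y = R(1 − cos θ), for which the descent time from rest works out to √(R/g)·θ_f, and compares it against constant acceleration down a straight chute:

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

# Cycloid through A=(0,0): x = R*(th - sin th), y = R*(1 - cos th), y downward.
# Descent time along the cycloid from rest is T = sqrt(R/g) * th_f.
R, th_f = 1.0, math.pi              # endpoint B = (R*pi, 2R)
t_cycloid = math.sqrt(R / g) * th_f

# Straight chute from A to the same B: uniform acceleration g*sin(alpha)
# along a ramp of length s gives T = sqrt(2*s/(g*sin(alpha))) = s*sqrt(2/(g*y_f)).
x_f, y_f = R * th_f, 2.0 * R
s = math.hypot(x_f, y_f)
t_line = s * math.sqrt(2.0 / (g * y_f))

print(f"cycloid: {t_cycloid:.3f} s, straight line: {t_line:.3f} s")
assert t_cycloid < t_line   # the cycloid really is faster
```

About a 1.00 s descent on the cycloid versus roughly 1.19 s on the straight ramp, for this particular endpoint.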

So in conclusion the birth of optimal control was triggered by a simple variational problem with a very elegant solution.

For more about this you can read it from the horse's mouth, i.e. Jan C. Willems's article: The Birth of Optimal Control.
 
Thanks for that Daedalus. :)

I remember doing some Lagrangian mechanics at college, but I found I didn't enjoy it that much. That area of mathematics was growing increasingly heavy in calculus, and while that's not a problem in itself, most of the time the work felt devoid of mechanics.

Also, I found some of these techniques we were taught (like the Lagrangian) made it near impossible to visualise what was going on in the equations as they unfolded. That stopped it being fun for me. :( It was the last classical mechanics course I did in fact. My applied courses thereafter were all on fluids, which was more my thing. :)
 
Before I post Part 2 of the optimal control history, I'd like to comment on your reply, Jadzia.

The beauty of the Euler-Lagrange equation is that it's a reformulation of Newton's law using the Principle of Stationary Action.

The Principle of Stationary Action is IMO the most beautiful thing in nature. Its flexibility (it's applicable in classical, relativistic, and quantum mechanics and in quantum field theory), its simplicity, and its ubiquity in the universe make it a key component of the deepest foundations of physics.



Principle of Stationary Action

In nature, of all the possible trajectories of a dynamical system from a starting point at the initial time t0 to an end point at the final time tf, the true trajectory of the system is the one that is a stationary point of the action integral.

For mechanical systems the action integral is just the integral of the Lagrangian of the system (kinetic minus potential energy) over time. So in nature a mechanical system will always travel the path in time & space that makes this action integral stationary.
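Here's a little numerical sanity check of the principle (my own sketch, with toy values): discretize the action S = ∫ (½mv² − mgq) dt for a body in free fall, then verify that wiggling the true trajectory (while keeping the endpoints fixed, as the principle requires) always raises the action. For this Lagrangian the stationary point is in fact a minimum:

```python
import math

m, g = 1.0, 9.81
t0, tf, N = 0.0, 1.0, 2000
dt = (tf - t0) / N

def action(path):
    """Discretized action S = sum of L*dt with L = (1/2) m v^2 - m g q."""
    S = 0.0
    for i in range(N):
        v = (path[i + 1] - path[i]) / dt     # finite-difference velocity
        q = 0.5 * (path[i + 1] + path[i])    # midpoint position
        S += (0.5 * m * v * v - m * g * q) * dt
    return S

ts = [t0 + i * dt for i in range(N + 1)]
true_path = [10.0 - 0.5 * g * t * t for t in ts]   # free fall from rest at q=10

S_true = action(true_path)
for eps in (0.5, 0.1, 0.01):
    # perturbation that vanishes at both endpoints, as the principle requires
    wiggle = [q + eps * math.sin(math.pi * t / tf) for q, t in zip(true_path, ts)]
    assert action(wiggle) > S_true   # every admissible perturbation raises the action
```

Shrinking eps shrinks the excess action quadratically, which is exactly the "first-order variation is zero" statement in discrete form.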
 
Alright here is part 2.

Optimal Control History Part 2: The childhood stage: Euler, Lagrange, and Legendre

Now we move on to the childhood stage of optimal control. At the end of the 17th century we saw the trigger (Bernoulli's Brachystochrone challenge) that led to the birth of the calculus of variations. During the next century science and mathematics would explode, growing at an exponential rate. I've mentioned the likes of Euler (Swiss) and Lagrange (French-Italian) and their contributions to the field of the calculus of variations. There was also Adrien-Marie Legendre, a French mathematician who made numerous advances in abstract algebra and mathematical analysis. Here, however, we'll skip over most of his contributions and only talk about his work on the transformation that now bears his name, the Legendre transformation, and his statement of the 2nd-order necessary condition for the minimum of a functional.

Before we talk about the Legendre transformation, I should state clearly that the field of optimal control didn't really start until the Soviet mathematician Lev Pontryagin and his students came along in the 1950s and wrote down Pontryagin's Maximum Principle. Up to now I am only talking about the development of the calculus of variations. Optimal control theory, as you'll see later on, is a much more general version of the calculus of variations. In the 250 years between Bernoulli's challenge and Pontryagin's famous principle there were a few people who came very close to formulating optimal control theory, but none of them actually succeeded.

In classical mechanics the Lagrangian (the L inside the action integral) is the kinetic energy of the system minus the potential energy of the system.
\[ L(q, \dot{q}, t) = T(\dot{q}) - V(q) \]

The Legendre transformation applied to the Lagrangian is the following where p(t) is some new state.
\[ H = p\,\dot{q} - L(q, \dot{q}, t) \]

The new state p(t) may seem arbitrary, but it's very important in another reformulation of Newtonian mechanics by a certain Irish mathematician named William Rowan Hamilton, and of course later on in the mathematical theory of optimal control. I'll talk more about the Hamiltonian reformulation in the next section. For now we focus on the state p(t), which can be characterized as the dual of the system trajectory q(t) and is defined to be the following
\[ p = \frac{\partial L}{\partial \dot{q}} \]

Duality is a central concept in higher mathematics, appearing in many important areas; it's also a vague concept with no single definition. Suffice it to say that for every mechanical system with some trajectory q(t) there is a virtual "companion" state called the dual state or co-state. There isn't always a physical interpretation for the dual state, but for a moving particle with path q(t), its dual state p(t) as defined above is the linear momentum of the particle.
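A tiny sketch of that last claim (my own example values): take the Lagrangian of a particle in uniform gravity and compute p = ∂L/∂v with a finite difference. What comes out is just m·v, the linear momentum:

```python
def L(q, v, m=2.0, g=9.81):
    """Lagrangian of a particle in uniform gravity: kinetic minus potential."""
    return 0.5 * m * v * v - m * g * q

def costate(q, v, h=1e-6):
    """p = dL/dv via a central finite difference."""
    return (L(q, v + h) - L(q, v - h)) / (2.0 * h)

# For this Lagrangian the co-state is just the linear momentum m*v.
p = costate(q=3.0, v=4.0)
assert abs(p - 2.0 * 4.0) < 1e-6
```

Note that p doesn't depend on q here; for more general Lagrangians (e.g. in rotating coordinates) the co-state stops looking like plain m·v, which is why "generalized momentum" is the safer name.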

Legendre also stated the following 2nd-order necessary condition for any q(t) that minimizes the functional J. This goes one step beyond the first-order necessary condition (the stationarity principle of Lagrange) from the previous section.
\[ \frac{\partial^2 L}{\partial \dot{q}^2} \geq 0 \quad \text{along the minimizing } q(t) \]

The 2nd-order necessary condition serves as an additional check on the minimality of a solution q(t) obtained from the stationarity principle (1st-order variation).
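As a quick sketch (my own check, not from the thread's derivation), we can verify Legendre's condition for the Brachystochrone integrand from Part 1 by taking a finite-difference second derivative with respect to y'. It comes out positive for every slope, consistent with the cycloid being a genuine minimizer:

```python
import math

g = 9.81

def L(y, yp):
    """Brachystochrone integrand: travel time per unit horizontal distance."""
    return math.sqrt((1.0 + yp * yp) / (2.0 * g * y))

def d2L_dyp2(y, yp, h=1e-4):
    """Second partial derivative w.r.t. y' via a central finite difference."""
    return (L(y, yp + h) - 2.0 * L(y, yp) + L(y, yp - h)) / (h * h)

# Legendre's condition d^2L/dy'^2 >= 0 holds for any candidate slope:
for yp in (-3.0, -1.0, 0.0, 1.0, 3.0):
    assert d2L_dyp2(1.0, yp) > 0.0
```

Analytically the second partial is (1 + y'²)^(−3/2) / √(2gy), which is strictly positive wherever y > 0, so the numerical check just confirms what the formula says.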
 
Okay, it's been a while, but fuck, between school and life I was swamped. But finally here is Part 3.

So far I've shown the Euler-Lagrange equation and Legendre's 2nd-order necessary condition for checking optimality. It's also clear that the Euler-Lagrange equation doesn't look like a first-order necessary condition (note the time derivative acting on its first partial-derivative term). However, we know that it's derived from the first-order variation of the cost functional J. The question is: can the Euler-Lagrange formulation be changed to arrive at a first-order necessary condition? The answer is of course yes.

Enter the Irish mathematician William Rowan Hamilton, who came up with another reformulation of Newtonian mechanics, called (not surprisingly) Hamiltonian mechanics. It should be noted that the Hamiltonian reformulation was eventually generalized further for quantum mechanics.

Now, the mathematical formalism of Hamiltonian mechanics and its generalization to quantum mechanics is quite advanced compared to the subject matter at hand. I shall skip those details and simply describe how the Hamiltonian formulation is derived from the Euler-Lagrange equation.

Given the Legendre transformation above, we define the Hamiltonian H to be the Legendre transformation of the Lagrangian.
\[ H(q, p, t) = p\,\dot{q} - L(q, \dot{q}, t) \]

The co-state p for a mechanical system is the generalized momentum of the system: it is equal to the partial derivative of the Lagrangian w.r.t. the generalized velocity. A simple way of looking at this is that p is the momentum vector of the system.
\[ p = \frac{\partial L}{\partial \dot{q}} \]

Now some simple calculus reveals the following 2n-dimensional set of first-order dynamical constraints (assuming the system is n-dimensional).
\[ \dot{q} = \frac{\partial H}{\partial p} \]

\[ \dot{p} = -\frac{\partial H}{\partial q} \]

Written out componentwise, the two sets of dynamical constraints become the following first-order differential equations.
\[ \dot{q}_i = \frac{\partial H}{\partial p_i}, \qquad \dot{p}_i = -\frac{\partial H}{\partial q_i}, \qquad i = 1, \dots, n \]

q and p are what are known as canonical coordinates: a set of coordinates on the cotangent bundle of the system's configuration manifold. As mentioned before, Hamiltonian mechanics can also be derived from the mathematical formalism alone, without resorting to the Euler-Lagrange equation; however, that route is even more abstract and even less accessible.
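Hamilton's equations are also very pleasant to work with numerically. Here's a small sketch (toy mass and spring constant of my choosing) that integrates a harmonic oscillator with the symplectic Euler scheme, which updates p and q in the alternating pattern the canonical structure suggests, and checks that the energy stays put:

```python
m, k = 1.0, 4.0          # toy mass and spring constant (assumed example values)

def H(q, p):
    """Hamiltonian of a harmonic oscillator: kinetic plus potential energy."""
    return p * p / (2.0 * m) + 0.5 * k * q * q

# Integrate dq/dt = dH/dp = p/m and dp/dt = -dH/dq = -k*q
# with symplectic Euler, which respects the canonical (q, p) structure.
q, p, dt = 1.0, 0.0, 1e-3
E0 = H(q, p)
for _ in range(10_000):   # 10 seconds of motion
    p -= k * q * dt       # dp = -dH/dq * dt  (uses the old q)
    q += p / m * dt       # dq =  dH/dp * dt  (uses the new p)
assert abs(H(q, p) - E0) / E0 < 1e-2   # energy drift stays small
```

A plain (non-symplectic) Euler step would let the energy grow steadily, which is one practical payoff of respecting the canonical coordinates.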

Now finally we can express the Euler-Lagrange equation as a first-order necessary condition.
\[ \frac{\partial H}{\partial \dot{q}} = p - \frac{\partial L}{\partial \dot{q}} = 0 \]


At this point you might wonder where the control input is in all of this. It's true that so far I have not shown any equation with a control input. However, as you'll see below, Hamilton got pretty close to the answer; but since he wasn't trying to solve an optimal control problem, he didn't realize it was right before his eyes.

Now let our control input u(t) be equal to the generalized velocity.
\[ u(t) = \dot{q}(t) \]

You can easily see that the first-order necessary condition above becomes the well-known first-order necessary condition for optimality (in optimal control).
\[ \frac{\partial H}{\partial u} = 0 \]

In words: the optimal u(t) for the dynamical system must be a stationary point of the Hamiltonian of the system.
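To make that concrete, here's a toy sketch of my own (the running cost L(u) = u²/2 is an assumption, the simplest "minimum effort" cost, not something from the derivation above). With H = p·u − L(u), the stationarity condition ∂H/∂u = 0 gives p − u = 0, so the optimal control is u* = p; a brute-force scan over u finds the same answer:

```python
def H(p, u):
    """Hamiltonian H = p*u - L(u) with the toy running cost L(u) = u^2/2."""
    return p * u - 0.5 * u * u

def dH_du(p, u, h=1e-6):
    """Central finite-difference derivative of H with respect to u."""
    return (H(p, u + h) - H(p, u - h)) / (2.0 * h)

# dH/du = p - u, so the stationarity condition dH/du = 0 picks out u* = p.
p = 1.7
u_star = max((u * 0.001 for u in range(-5000, 5001)), key=lambda u: H(p, u))
assert abs(u_star - p) < 1e-3
assert abs(dH_du(p, u_star)) < 1e-6
```

For this concave-in-u Hamiltonian the stationary point is a maximum, which is the flavor of statement Pontryagin's Maximum Principle later makes precise for far more general problems.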
 