<head>
<link rel="stylesheet" href="../../../assets/css/simple.css" />
</head>
<h1 id="reading-notes-of-the-book-calculus-of-variation-and-optimal-control-by-daniel-liberzon">Reading notes on the book <em>Calculus of Variations and Optimal Control Theory</em> by <em>Daniel Liberzon</em>.</h1>
<p>This note is not a summary of the book's chapters. It mainly records the problems I encountered during my reading and my answers (if any) to those problems.</p>
<h2 id="table-of-content">Table of Contents</h2>
<ul id="markdown-toc">
<li><a href="#reading-notes-of-the-book-calculus-of-variation-and-optimal-control-by-daniel-liberzon" id="markdown-toc-reading-notes-of-the-book-calculus-of-variation-and-optimal-control-by-daniel-liberzon">Reading notes on the book <em>Calculus of Variations and Optimal Control Theory</em> by <em>Daniel Liberzon</em>.</a> <ul>
<li><a href="#table-of-content" id="markdown-toc-table-of-content">Table of Contents</a></li>
<li><a href="#chapter-1-" id="markdown-toc-chapter-1-">Chapter 1 <a name="chp1"></a></a> <ul>
<li><a href="#continuity-of-epsilond-in-second-order-sufficient-condition-p6" id="markdown-toc-continuity-of-epsilond-in-second-order-sufficient-condition-p6">Continuity of $\epsilon(d)$ in second-order sufficient condition (P6)</a></li>
<li><a href="#regular-point-p9" id="markdown-toc-regular-point-p9">Regular point (P9)</a></li>
<li><a href="#tangent-space-p9" id="markdown-toc-tangent-space-p9">Tangent space (P9)</a></li>
<li><a href="#first-order-necessary-condition--lagrange-multipliers-p9" id="markdown-toc-first-order-necessary-condition--lagrange-multipliers-p9">First-order necessary condition — Lagrange multipliers (P9)</a></li>
<li><a href="#exercise-13" id="markdown-toc-exercise-13">Exercise 1.3</a></li>
<li><a href="#exercise-14" id="markdown-toc-exercise-14">Exercise 1.4</a></li>
<li><a href="#linearity-of-gateaux-derivative-" id="markdown-toc-linearity-of-gateaux-derivative-">Linearity of Gateaux derivative <a name="gateaux"></a></a></li>
<li><a href="#exercise-15-and-16" id="markdown-toc-exercise-15-and-16">Exercise 1.5 and 1.6</a></li>
<li><a href="#necessary-conditions-with-frechet-derivative" id="markdown-toc-necessary-conditions-with-frechet-derivative">Necessary conditions with Frechet derivative</a></li>
<li><a href="#exercise-17" id="markdown-toc-exercise-17">Exercise 1.7</a></li>
</ul>
</li>
<li><a href="#chapter-2" id="markdown-toc-chapter-2">Chapter 2</a> <ul>
<li><a href="#exercise-22" id="markdown-toc-exercise-22">Exercise 2.2</a></li>
</ul>
</li>
</ul>
</li>
</ul>
<p><strong>TODO</strong>:</p>
<ul class="task-list">
<li class="task-list-item"><input type="checkbox" class="task-list-item-checkbox" disabled="disabled" /><a href="#cont-eps">Check the proof of the second-order sufficient condition.</a></li>
<li class="task-list-item"><input type="checkbox" class="task-list-item-checkbox" disabled="disabled" /><a href="#tangent-space">Fill in the proof of the tangent space characterization.</a></li>
<li class="task-list-item"><input type="checkbox" class="task-list-item-checkbox" disabled="disabled" checked="checked" /><a href="#gateaux">Linearity of Gateaux derivative</a></li>
</ul>
<h2 id="chapter-1-">Chapter 1 <a name="chp1"></a></h2>
<hr />
<h3 id="continuity-of-epsilond-in-second-order-sufficient-condition-p6">Continuity of $\epsilon(d)$ in second-order sufficient condition (P6)</h3>
<p>The book claims that $\epsilon$, as a function of $d$, is continuous, so that the Weierstrass theorem can be applied. But we cannot write down an explicit expression for $\epsilon$, so the continuity argument is not easy to verify. Fortunately, a proof of the second-order sufficient condition is given by Prof. Ilya in AE 575. <a name="cont-eps"></a></p>
<h3 id="regular-point-p9">Regular point (P9)</h3>
<p>There is a graphical interpretation of points that are not regular.</p>
<p><img src="../../../assets/images/regular_point.svg" alt="example of points not regular" /></p>
<h3 id="tangent-space-p9">Tangent space (P9)</h3>
<p>Let $M$ be a submanifold of $R^{n}$ and let $a\in M$. The tangent space of $M$ at $a$ is given by
\[T_{a} M = \lbrace v\in R^{n} \mid \exists s > 0 \text{ and } \gamma: (-s,s) \rightarrow R^{n} \text{ with } \gamma(t) \in M \text{ for all } t\in (-s,s),\ \gamma(0) = a,\ \dot{\gamma}(0) = v \rbrace.\]
<em>(this definition is copied from Dr. Richter’s <a href="https://academic.csuohio.edu/richter_h/courses/esc694.html">course materials</a>)</em></p>
<p>According to the book, if the submanifold is given by $M = \lbrace x \mid h_{i}(x) = 0,\ i=1,\dots,m\rbrace$ and $a\in M$ is a regular point, then the tangent space $T_{a}M$ at $a$ can be characterized as</p>
<p>\[T_{a}M = \lbrace d\in R^{n} \mid \langle d, \bigtriangledown h_{i}(a)\rangle = 0,\ i=1,\dots,m \rbrace.\]</p>
<p>The discussion in the book can be summarized by the following theorem:</p>
<div class="theorem" name="Theorem "></div>
<p>Set $D = \lbrace d\in R^{n} \mid \langle d, \bigtriangledown h_{i}(a)\rangle = 0,\ i=1,\dots,m\rbrace$. Then $T_{a}M \subseteq D$. If $a$ is a regular point, then $T_{a}M = D$.</p>
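<p>As a sanity check, the characterization above can be verified numerically on a concrete manifold. The example below is my own (not from the book): the unit sphere in $R^{3}$ with $h(x) = \Vert x\Vert^{2} - 1$, the regular point $a = (0,0,1)$, and the curve $\gamma(t) = (\sin t, 0, \cos t)$ on $M$ through $a$; its velocity at $0$ is orthogonal to $\bigtriangledown h(a)$, as the theorem predicts.</p>

```python
import numpy as np

# Unit sphere M = {x in R^3 : h(x) = |x|^2 - 1 = 0}, regular point a = (0, 0, 1).
def h(x):
    return x @ x - 1.0

def grad_h(x):
    return 2.0 * x

a = np.array([0.0, 0.0, 1.0])

# A C^1 curve on M through a: gamma(t) = (sin t, 0, cos t), gamma(0) = a.
def gamma(t):
    return np.array([np.sin(t), 0.0, np.cos(t)])

# Velocity gamma'(0) via a central difference.
eps = 1e-6
v = (gamma(eps) - gamma(-eps)) / (2.0 * eps)

print(v)              # ~ [1. 0. 0.]
print(v @ grad_h(a))  # ~ 0: v lies in the tangent plane {d : d_3 = 0}
```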
<p><strong>Question:</strong> How do we prove that for any $d\in D$, there is a curve $\gamma (\cdot)\in \mathcal{C}^{1}$ such that $\gamma (0) = a$ and $\gamma'(0) = d$ (assuming that $a$ is a regular point)? See recitation 3 of AE 575. <a name="tangent-space"></a></p>
<h3 id="first-order-necessary-condition--lagrange-multipliers-p9">First-order necessary condition — Lagrange multipliers (P9)</h3>
<p>Given that $x^{*}$ is a regular point, the first-order necessary condition is<br />
\[\bigtriangledown f(x^{\ast}) + \lambda_{1}^{\ast} \bigtriangledown h_1(x^{\ast}) + \dots + \lambda_{m}^{\ast} \bigtriangledown h_{m}(x^{\ast}) = 0.\]</p>
<p>The idea behind this condition is fairly intuitive:</p>
<ul>
<li>First, the underlying first-order necessary condition is $\bigtriangledown f(x^{\ast}) \cdot d = 0$ for all $d \in T_{x^{\ast}}M$, where $M = \lbrace x \mid h_{i}(x) = 0,\ i=1,\dots,m\rbrace$. This follows from the first-order Taylor expansion of $f(x(\alpha))$ along any curve $x(\alpha)\in \mathcal{C}^{1}$ through $x^{\ast}$.</li>
<li>Second, if $x^{\ast}$ is a regular point, then $T_{x^{\ast}}M$ can be characterized as $\lbrace d \mid \langle d, \bigtriangledown h_{i}(x^{\ast})\rangle = 0,\ i=1,\dots,m\rbrace$. Based on this observation, we can rewrite the necessary condition from the first bullet point in the Lagrange multiplier form.</li>
</ul>
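<p>The condition can be checked on a small hypothetical problem of my own (not from the book): minimize $f(x) = x_1 + x_2$ on the unit circle $h(x) = x_1^2 + x_2^2 - 1 = 0$. The minimizer is $x^{\ast} = (-1/\sqrt{2}, -1/\sqrt{2})$, and the single multiplier $\lambda^{\ast} = 1/\sqrt{2}$ makes the stationarity condition hold.</p>

```python
import numpy as np

# Minimize f(x) = x1 + x2 subject to h(x) = x1^2 + x2^2 - 1 = 0.
# At x* = (-1/sqrt(2), -1/sqrt(2)) and lambda* = 1/sqrt(2), the condition
# grad f(x*) + lambda* grad h(x*) = 0 should hold.

grad_f = np.array([1.0, 1.0])                   # gradient of f (constant)
x_star = np.array([-1.0, -1.0]) / np.sqrt(2.0)  # known minimizer
grad_h = 2.0 * x_star                           # gradient of h at x*

lam = 1.0 / np.sqrt(2.0)
residual = grad_f + lam * grad_h                # should vanish at the minimizer
print(residual)  # ~ [0. 0.]
```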
<h3 id="exercise-13">Exercise 1.3</h3>
<p><strong>Proof:</strong> Define
\[F: ( \alpha_1,\dots, \alpha_{m+1}) \mapsto \big( f(x^{\ast} + \alpha_1 d_1 + \dots + \alpha_{m+1}d_{m+1}),\ h_1(x^{\ast} + \alpha_1 d_1 + \dots + \alpha_{m+1}d_{m+1}),\ \dots,\ h_{m}(x^{\ast} + \alpha_1 d_1 + \dots + \alpha_{m+1}d_{m+1})\big).\]</p>
<p>The Jacobian of $F$ is</p>
<p>\begin{align}
\label{eqn:JF}
\begin{pmatrix}
\bigtriangledown f(x^{\ast})\cdot d_1 & \bigtriangledown f(x^{\ast})\cdot d_2 & \cdots & \bigtriangledown f(x^{\ast})\cdot d_{m+1} \\
\bigtriangledown h_{1}(x^{\ast}) \cdot d_1 & \bigtriangledown h_{1}(x^{\ast}) \cdot d_2 & \cdots & \bigtriangledown h_{1}(x^{\ast}) \cdot d_{m+1} \\
\vdots & \vdots & \ddots & \vdots \\
\bigtriangledown h_{m}(x^{\ast}) \cdot d_1 & \bigtriangledown h_{m}(x^{\ast}) \cdot d_2 & \cdots & \bigtriangledown h_{m}(x^{\ast}) \cdot d_{m+1}
\end{pmatrix} =
\begin{pmatrix} \bigtriangledown f(x^{\ast}) \\ \bigtriangledown h_1(x^{\ast}) \\ \vdots \\ \bigtriangledown h_{m}(x^{\ast}) \end{pmatrix} \cdot
\begin{pmatrix} d_1 & d_2 & \cdots & d_{m+1}
\end{pmatrix}.
\end{align}</p>
<p>Suppose, for contradiction, that the Jacobian \eqref{eqn:JF} is nonsingular. Then, by the inverse function theorem, there exist neighborhoods $U$ of $0$ and $V$ of $(f(x^{\ast}),0,\dots,0)$ such that $F\vert_{U}: U \rightarrow V$ is a bijection. In particular, $V$ contains points of the form $(f(x^{\ast})-\epsilon, 0,\dots,0)$, so there are feasible points arbitrarily close to $x^{\ast}$ with strictly smaller cost, contradicting optimality. By the same argument as in the $1$-d case, the Jacobian of $F$ has to be singular.</p>
<p>Since $\lbrace \bigtriangledown h_{i}(x^{\ast}) \rbrace_{i=1}^{m}$ are linearly independent, $m \leq n$.</p>
<p>Case I: $n=m$. Then $\lbrace \bigtriangledown h_{i}(x^{\ast}) \rbrace_{i=1}^{m}$ is a basis of $R^{n}$, so $\bigtriangledown f(x^{\ast})$ must lie in the span of $\lbrace \bigtriangledown h_{i}(x^{\ast})\rbrace_{i}$.</p>
<p>Case II: $n>m$. We know that \eqref{eqn:JF} is singular for any choice of $d_{i}$, so we pick $d_{1} = \bigtriangledown f(x^{\ast})$ and $d_{i+1} = \bigtriangledown h_{i}(x^{\ast})$ for $i\geq 1$. Since $\mathrm{rank}(DD^{T}) = \mathrm{rank}(D)$ for any matrix $D$, the matrix</p>
<p>\begin{align}
\begin{pmatrix} \bigtriangledown f(x^{\ast}) \\ \bigtriangledown h_1(x^{\ast}) \\ \vdots \\ \bigtriangledown h_{m}(x^{\ast}) \end{pmatrix}
\end{align}
has rank less than $m+1$, so its rows are linearly dependent. Since $\bigtriangledown h_{i}(x^{\ast})$ for $i=1,\dots,m$ are linearly independent, $\bigtriangledown f(x^{\ast})$ must lie in the span of $\lbrace \bigtriangledown h_{i}(x^{\ast})\rbrace_{i}$. \[\tag*{$\blacksquare$}\]</p>
<h3 id="exercise-14">Exercise 1.4</h3>
<p><strong>Proof:</strong></p>
<p><img src="../../../assets/images/ex1_4.svg" alt="Light reflection" /></p>
<p>Let $y = (y_1,y_2)$, $z=(z_1, z_2)$. We want to solve
\begin{align}
\min_{x} &~ \Vert x-y\Vert_{2} + \Vert x-z\Vert_{2} \\
\text{subject to } & h(x) = 0.
\end{align}</p>
<p>The Lagrangian is
\begin{align}
L(x, \lambda) = \Vert x-y\Vert_{2} + \Vert x-z\Vert_{2} + \lambda h(x),
\end{align}
with gradient
\begin{align}
\bigtriangledown_{x} L = \frac{x-y}{\Vert x-y\Vert } + \frac{x-z}{ \Vert x-z\Vert } + \lambda \bigtriangledown h (x).
\end{align}</p>
<p>According to the first-order necessary condition, the optimal solution $x^{\ast}$ satisfies $\bigtriangledown_{x} L(x^{\ast}, \lambda) = 0$, that is,
\begin{align}
\frac{y-x^{\ast}}{\Vert x^{\ast}-y\Vert } + \frac{z-x^{\ast}}{ \Vert x^{\ast}-z\Vert } = \lambda \bigtriangledown h (x^{\ast}). \label{ex_1_4}
\end{align}</p>
<p>Case 1: $\lambda \not= 0$. Then \eqref{ex_1_4} implies that $\bigtriangledown h(x^{\ast})$ bisects the angle between $\vec{xy}$ and $\vec{xz}$.</p>
<p>Case 2: $\lambda = 0$. Then $x^{\ast}$ lies on the line segment connecting $y$ and $z$. We argue that this is the case where $y$ and $z$ are on opposite sides of the curve $h(x) = 0$. \[\tag*{$\blacksquare$}\]</p>
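<p>Case 1 can be illustrated numerically with a concrete setup of my own choosing (not from the book): the mirror is the line $h(x) = x_2 = 0$, with $y = (0,1)$ and $z = (2,1)$ on the same side. Minimizing the path length over mirror points $x = (t,0)$ by brute-force search recovers the reflection point $t = 1$, where the two unit vectors toward $y$ and $z$ sum to a multiple of $\bigtriangledown h = (0,1)$.</p>

```python
import numpy as np

# Mirror: the line x2 = 0; sources y = (0, 1) and z = (2, 1) above it.
y1, z1 = 0.0, 2.0  # x1-coordinates of y and z (both at height 1)

# Brute-force the path length |x - y| + |x - z| over mirror points x = (t, 0).
ts = np.linspace(-1.0, 3.0, 400001)
lengths = np.sqrt((ts - y1)**2 + 1.0) + np.sqrt((ts - z1)**2 + 1.0)
t_star = ts[np.argmin(lengths)]
print(t_star)  # ~ 1.0, the equal-angle (reflection) point

# First-order condition: the sum of the unit vectors from x* toward y and z
# is parallel to grad h = (0, 1), so its x1-component vanishes.
x_star = np.array([t_star, 0.0])
u = np.array([y1, 1.0]) - x_star; u /= np.linalg.norm(u)
w = np.array([z1, 1.0]) - x_star; w /= np.linalg.norm(w)
print(u + w)  # ~ [0, sqrt(2)]
```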
<h3 id="linearity-of-gateaux-derivative-">Linearity of Gateaux derivative <a name="gateaux"></a></h3>
<p>In Section 1.3.2 of the book, the author says that the first variation of $J$, which is the analogue of the Gateaux derivative in finite dimensions, needs to be a linear functional. I was wondering whether linearity is an inherent property of the Gateaux derivative.</p>
<p>Actually, the answer depends on the context, according to the <a href="https://www.wikiwand.com/en/Gateaux_derivative">Wikipedia page</a>. In general, the Gateaux derivative is not necessarily linear or continuous, but some definitions of the Gateaux differential require linearity, as in our book.</p>
<p>Later, we will see that although the Gateaux derivative may not be linear, the Frechet derivative is by definition a bounded linear transformation.</p>
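<p>A standard counterexample from multivariable calculus (not from the book) makes the possible non-linearity concrete: $f(x_1, x_2) = x_1^3/(x_1^2 + x_2^2)$ with $f(0,0) = 0$ has every Gateaux differential at the origin, namely $\delta f(0; h) = h_1^3/(h_1^2 + h_2^2)$, but this map is not additive in $h$.</p>

```python
# f(x1, x2) = x1^3 / (x1^2 + x2^2), extended by f(0, 0) = 0.
def f(x1, x2):
    if (x1, x2) == (0.0, 0.0):
        return 0.0
    return x1**3 / (x1**2 + x2**2)

# Difference quotient approximating the Gateaux differential at the origin.
def gateaux_at_0(h1, h2, a=1e-7):
    return (f(a * h1, a * h2) - f(0.0, 0.0)) / a

d_h = gateaux_at_0(1.0, 0.0)   # direction h = (1, 0): differential is 1
d_k = gateaux_at_0(0.0, 1.0)   # direction k = (0, 1): differential is 0
d_hk = gateaux_at_0(1.0, 1.0)  # direction h + k: differential is 1/2, not 1 + 0
print(d_h, d_k, d_hk)
```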
<h3 id="exercise-15-and-16">Exercise 1.5 and 1.6</h3>
<p><strong>Proof:</strong></p>
<p>\begin{align}
\delta J \vert_{y}( \eta) &= \lim_{ \alpha \rightarrow 0} \frac{1}{ \alpha} \left( \int_{0}^{1}g(y(x) + \alpha \eta(x)) dx - \int_{0}^{1}g(y(x)) dx \right)\\
&= \lim_{ \alpha \rightarrow 0} \int_{0}^{1} \frac{g(y(x) + \alpha \eta(x)) - g(y(x))}{ \alpha}dx.
\end{align}
We can use the <a href="https://www.wikiwand.com/en/Leibniz_integral_rule">Leibniz integral rule</a> to exchange the order of the limit and the integral sign. The technical conditions for applying the Leibniz integral rule are the continuity of $g(y(x) + \alpha \eta(x))$ in $\alpha$ and $x$ and the continuity of $dg(y(x) + \alpha \eta(x))/ d \alpha = g'(y(x)+ \alpha \eta(x)) \eta(x)$. Since $g\in \mathcal{C}^{1}$ and $y, \eta\in \mathcal{C}^{0}$, these conditions are satisfied (the ideas behind the Leibniz integral rule are the mean value theorem and the dominated convergence theorem). Thus, we have
\begin{align}
\delta J \vert_{y} ( \eta ) = \int_{0}^{1} g'(y(x) ) \eta(x) dx.
\end{align}</p>
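<p>The formula can be sanity-checked by finite differences. The concrete choices below are made up for illustration: $g = \sin$, $y(x) = x^{2}$, $\eta(x) = \cos x$, with a trapezoid rule on $[0,1]$.</p>

```python
import numpy as np

# Check (J(y + a*eta) - J(y))/a ~ ∫ g'(y(x)) eta(x) dx for small a,
# with g = sin, y(x) = x^2, eta(x) = cos(x).

x = np.linspace(0.0, 1.0, 10001)
dx = x[1] - x[0]
y, eta = x**2, np.cos(x)

def integral(v):
    # trapezoid rule for ∫_0^1 v(x) dx on the grid
    return float(np.sum(0.5 * (v[1:] + v[:-1])) * dx)

alpha = 1e-6
numeric = (integral(np.sin(y + alpha * eta)) - integral(np.sin(y))) / alpha
formula = integral(np.cos(y) * eta)  # ∫ g'(y(x)) eta(x) dx with g' = cos
print(numeric, formula)              # agree to ~1e-6
```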
<p>Now we suppose that $g\in \mathcal{C}^{2}$ and derive the second variation of $J$. By definition,
\begin{align}
\delta^{2} J \vert_{y} ( \eta) &= \lim_{ \alpha \rightarrow 0} \frac{J(y+ \alpha \eta)- J(y) - \delta J\vert_{y}( \eta) \alpha}{ \alpha^{2}} \\
&= \lim_{ \alpha \rightarrow 0} \frac{1}{\alpha^{2}} \int_{0}^{1} g(y(x)+ \alpha \eta(x)) - g(y(x)) - g'(y(x)) \eta(x) \alpha \, dx.
\label{ex_1_5_dJ_1}
\end{align}</p>
<p>To make things easier, let’s first ignore any potential technical conditions and compute the second derivative of $J(y + \alpha \eta)$ with respect to $\alpha$.</p>
<p>\begin{align}
\delta^{2} J\vert_{y} (\eta) &= \frac{1}{2} \frac{d^{2}}{d \alpha^{2}} J(y+ \alpha \eta)\bigg\vert_{\alpha=0}\\
&= \frac{1}{2}\frac{d}{d \alpha} \int_{0}^{1} g'(y(x) + \alpha \eta(x)) \eta(x) dx \bigg\vert_{\alpha=0}\\
&= \frac{1}{2} \int_{0}^{1} g''(y(x) + \alpha \eta(x)) \eta(x)^{2} dx \bigg\vert_{\alpha=0}\\
&= \frac{1}{2} \int_{0}^{1} g''(y(x)) \eta(x)^{2} dx. \label{ex_1_5_dJ_2}
\end{align}</p>
<p>Clearly, the last line is a quadratic functional. However, we still need to show that \eqref{ex_1_5_dJ_2} is indeed equal to \eqref{ex_1_5_dJ_1}. To do that, we need some technical assumptions.</p>
<p>We have the second-order Taylor expansion of $g$ with respect to $\alpha$:</p>
<p>\begin{align}
g(y(x) + \alpha \eta(x)) = g(y(x)) + g'(y(x)) \eta(x) \alpha + \frac{1}{2} g''(y(x)) \eta(x)^{2} \alpha^{2} + o( \alpha^{2}). \label{ex_1_5_g_expan}
\end{align}</p>
<p>We plug the expansion of $g$ into \eqref{ex_1_5_dJ_1} and get</p>
<p>\begin{align}
\delta^{2} J\vert_{y}( \eta) = \lim_{ \alpha \rightarrow 0} \frac{1}{ \alpha^{2}} \int_{0}^{1} \frac{1}{2} g''(y(x)) \eta(x)^{2} \alpha^{2} + o( \alpha^{2}) \, dx.
\end{align}
By continuity of $g''$, $y$, and $\eta$, the integrand $\frac{1}{2} g''(y(x)) \eta(x)^{2} \alpha^{2} + o( \alpha^{2})$ is continuous in $x$ and hence bounded on $[0,1]$, by the Weierstrass (extreme value) theorem. Thus, by the dominated convergence theorem, we can exchange the limit and the integral and get</p>
<p>\begin{align}
\delta^{2} J\vert_{y} (\eta) = \int_{0}^{1} \frac{1}{2} g''(y(x)) \eta(x)^{2} dx.
\end{align}</p>
<p>\[\tag*{$\blacksquare$}\]</p>
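<p>The same finite-difference idea checks the second variation. Again the concrete choices are mine, for illustration only: $g = \sin$ (so $g'' = -\sin$), $y(x) = x^{2}$, $\eta(x) = \cos x$.</p>

```python
import numpy as np

# Check (J(y + a*eta) - J(y) - a*deltaJ)/a^2 ~ ∫ (1/2) g''(y(x)) eta(x)^2 dx,
# with g = sin, y(x) = x^2, eta(x) = cos(x).

x = np.linspace(0.0, 1.0, 20001)
dx = x[1] - x[0]
y, eta = x**2, np.cos(x)

def integral(v):
    # trapezoid rule for ∫_0^1 v(x) dx on the grid
    return float(np.sum(0.5 * (v[1:] + v[:-1])) * dx)

first = integral(np.cos(y) * eta)               # delta J|_y(eta),   g'  = cos
second = integral(0.5 * (-np.sin(y)) * eta**2)  # claimed delta^2 J, g'' = -sin

a = 1e-3
numeric = (integral(np.sin(y + a * eta)) - integral(np.sin(y)) - a * first) / a**2
print(numeric, second)  # agree up to O(a)
```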
<h3 id="necessary-conditions-with-frechet-derivative">Necessary conditions with Frechet derivative</h3>
<p>The book makes a remark that the first and second variations can also be defined using the Frechet derivative. But it is not clear how the first- and second-order conditions are derived from the Frechet derivative.</p>
<p>First, we need a formal definition of the Frechet derivative.</p>
<div class="definition" name="Definition "></div>
<p>(Optimization by Vector Space Methods, David G. Luenberger, Page 172)</p>
<p>Let $T$ be a transformation defined on an open domain $D$ in a normed space $X$ and having range in a normed space $Y$. If for fixed $x\in D$ and each $h\in X$ there exists $\delta T(x; h)\in Y$ which is linear and continuous with respect to $h$ such that</p>
<p>\begin{align}
\lim_{\Vert h\Vert \rightarrow 0} \frac{ \Vert T(x+h) - T(x) - \delta T (x;h)\Vert }{\Vert h\Vert } = 0
\end{align}</p>
<p>then $T$ is said to be Frechet differentiable at $x$ and $\delta T(x;h)$ is said to be the Frechet differential of $T$ at $x$ with increment $h$. \[\tag*{$\blacksquare$}\]</p>
<p>Later in Luenberger's book, $\delta T(x;h)$ is written as $\delta T(x;h)=A_{x}h$, where $A_{x}$ is a bounded linear operator from $X$ to $Y$. If $T$ is Frechet differentiable on $D \subseteq X$, the transformation $T':D \rightarrow B(X,Y)$, $T'(x) =A_{x}$, is called the Frechet derivative of $T$.</p>
<div class="theorem" name="Theorem "></div>
<p>Suppose that $y^{\ast}$ is a minimum and<br />
\begin{align}
J(y^{\ast}+ \eta) = J(y^{\ast}) + \delta J \vert_{y^{\ast}}( \eta) + o( \Vert \eta\Vert),
\end{align}
that is, $\delta J\vert_{y^{\ast}}(\eta)$ is the Frechet differential of $J$ at $y^{\ast}$ with increment $\eta$.</p>
<p>Then $\delta J \vert_{y^{\ast}}(\eta)=0$ for all $\eta\in V$.</p>
<p><strong>Proof:</strong>
Suppose that there exists an $\eta_0$ such that $\delta J \vert_{y^{\ast}}(\eta_0)\not=0$. WLOG, assume $\delta J \vert_{y^{\ast}}(\eta_0) < 0$ (otherwise take $-\eta_0$). Construct $\eta_{n} = \eta_0/n$, so that $\eta_{n} \rightarrow 0$. By definition,
\begin{align}
\lim_{n \rightarrow \infty} \frac{ \vert J(y^{\ast}+\eta_0/n)- J(y^{\ast}) -\delta J\vert_{y^{\ast}}(\eta_0/n)\vert }{\Vert\eta_0\Vert/n } = 0.
\end{align}
Take $\epsilon = \vert \delta J \vert_{y^{\ast}}(\eta_0) \vert / (2 \Vert \eta_0\Vert)$. Using the linearity $\delta J\vert_{y^{\ast}}(\eta_0/n) = \delta J\vert_{y^{\ast}}(\eta_0)/n$, there exists $N \in \mathbb{N}$ such that for all $n \geq N$,
\begin{align}
\vert J(y^{\ast}+\eta_0/n)- J(y^{\ast}) -\delta J\vert_{y^{\ast}}(\eta_0)/n \vert &\leq \epsilon \Vert \eta_0\Vert /n = \vert \delta J \vert_{y^{\ast}}( \eta_0) \vert / (2n),\\
J(y^{\ast}+\eta_0/n) &\leq J(y^{\ast})+ \delta J\vert_{y^{\ast}}(\eta_0)/n + \vert \delta J \vert_{y^{\ast}}( \eta_0) \vert / (2n) < J(y^{\ast}),
\end{align}
contradicting the assumption that $y^{\ast}$ is a minimum.
\[\tag*{$\blacksquare$}\]</p>
<div class="theorem" name="Theorem "></div>
<p>Suppose that</p>
<p>\begin{align}
J(y^{\ast}+ \eta) = J(y^{\ast}) + \delta J \vert_{y^{\ast}}( \eta) + \delta^{2} J \vert_{y^{\ast}} (\eta)+ o( \Vert \eta\Vert^{2}).
\end{align}</p>
<p>If $\delta J \vert_{y^{\ast}}(\eta)=0$ and there exists $\lambda > 0$ such that $\delta^{2}J\vert_{y^{\ast}} (\eta) \geq \lambda \Vert \eta\Vert^{2}$ for all $\eta\in V$, then $y^{\ast}$ is a (local) minimum.</p>
<p><strong>Proof:</strong>
\begin{align}
J(y^{\ast}+ \eta) = J(y^{\ast}) + \delta^{2} J \vert_{y^{\ast}} (\eta)+ o( \Vert \eta\Vert^{2}) \geq J(y^{\ast}) + \lambda \Vert \eta \Vert^{2} + o( \Vert \eta\Vert^{2}).
\end{align}</p>
<p>For $\eta$ small enough, we have
\begin{align}
\frac{\vert o(\Vert \eta\Vert^{2})\vert }{ \Vert \eta\Vert^{2}} < \lambda.
\end{align}
That is, $\lambda \Vert \eta\Vert^{2} + o(\Vert \eta\Vert^{2})>0$ and thereby $J(y^{\ast} + \eta )> J(y^{\ast})$. \[\tag*{$\blacksquare$}\]</p>
<h3 id="exercise-17">Exercise 1.7</h3>
<p>Consider the space $C[0,1]$ of continuous functions equipped with the sup norm. Convergence in $C[0,1]$ is the same as uniform convergence in calculus. Define $A = \lbrace f\in C[0,1] \mid \Vert f\Vert =1\rbrace$, the unit sphere. $A$ is closed and bounded. Now, $\lbrace f_{n}(x) = x^{n}\rbrace_{n=1}^{\infty}$ is a sequence contained in $A$ that has no convergent subsequence: every subsequence converges pointwise to a discontinuous limit, so none can converge uniformly to an element of $C[0,1]$. Thus, $A$ is closed and bounded, but not compact.</p>
<p>Define</p>
<p>\begin{align}
J(f) = \int_{0}^{1} \vert f(x) \vert dx.
\end{align}</p>
<p>To show $J$ is continuous, note that</p>
<p>\begin{align}
\vert J(f) - J(g) \vert = \left\vert \int_{0}^{1} (f- g) \, dx \right\vert \leq \int_{0}^{1} \vert f-g \vert \, dx \leq \Vert f-g\Vert.
\end{align}</p>
<p>That is, $J$ is Lipschitz continuous.</p>
<p>It is clear that the infimum of $J$ over $A$ is $0$ (since $J(x^{n}) = 1/(n+1) \rightarrow 0$ as $n \rightarrow \infty$), but there exists no $f\in A$ with $J(f) = 0$: by continuity, $J(f)=0$ forces $f \equiv 0$, which has norm $0 \not= 1$.</p>
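<p>A grid-based numerical illustration of both facts: the sup distance $\Vert f_n - f_{2n}\Vert = 1/4$ for every $n$ (attained where $x^{n} = 1/2$), so no subsequence of $\lbrace f_{n}\rbrace$ is Cauchy, while $J(f_{n}) = 1/(n+1) \rightarrow 0$.</p>

```python
import numpy as np

# Approximate C[0,1] with the sup norm on a fine uniform grid.
x = np.linspace(0.0, 1.0, 200001)
dx = x[1] - x[0]

for n in (5, 20, 80):
    f_n, f_2n = x**n, x**(2 * n)
    gap = np.max(np.abs(f_n - f_2n))                      # ~ 0.25 for every n
    J_n = float(np.sum(0.5 * (f_n[1:] + f_n[:-1])) * dx)  # ~ 1/(n+1)
    print(n, round(float(gap), 4), round(J_n, 4))
```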
<h2 id="chapter-2">Chapter 2</h2>
<h3 id="exercise-22">Exercise 2.2</h3>
<p>$y = 0$ is a weak minimum. For any $z$ such that $\Vert z-y\Vert_{1} \leq 0.5$, we have $\vert z'(x) \vert \leq 0.5$, so $(z'(x))^{2} (1 - (z'(x))^{2}) \geq 0$. Therefore,</p>
<p>\begin{align}
J(z) = \int_{0}^{1} \left( z'(x) \right)^{2} (1- \left( z'(x) \right)^{2}) dx \geq 0 = J(y).
\end{align}</p>
<p>By definition, $y$ is a weak minimum.</p>
<p>$y=0$ is not a strong minimum, because for any $\epsilon > 0$, there exists a piecewise $\mathcal{C}^{1}$ function $z$ such that $\vert z(x)\vert \leq \epsilon$ for all $x\in [0,1]$ but $\vert z'(x) \vert = 2$ almost everywhere (a triangular wave). For such a $z$,
\begin{align}
J(z) = \int_{0}^{1} 4 \cdot (-3) \, dx = -12.
\end{align}</p>
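<p>A numerical check of this computation, with a concrete triangular wave of my own choosing (amplitude and period $\epsilon = 10^{-2}$, picked so that the kinks fall on grid points):</p>

```python
import numpy as np

# Triangular wave z with 0 <= z <= eps and slope +-2 a.e.: the C^0 norm of z is
# tiny, yet J(z) = ∫ (z')^2 (1 - (z')^2) dx = 4 * (1 - 4) = -12 < 0 = J(0).

eps = 1e-2
x = np.linspace(0.0, 1.0, 400001)
dx = x[1] - x[0]
z = eps - np.abs(2.0 * (x % eps) - eps)  # triangle wave, amplitude eps, period eps

zp = np.diff(z) / dx                     # slope on each grid interval, = +-2
J = float(np.sum(zp**2 * (1.0 - zp**2)) * dx)
print(float(np.max(np.abs(z))), J)       # ~ (0.01, -12.0)
```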