Ph.D. Comprehensive exam practice problems, Round 2

Exercise 1 Let {V} be the vector space of continuous real-valued functions on the interval {[0,\pi]}. Then, for any {f \in V},

\displaystyle 2 \int_0^\pi f(x)^2 \sin x dx \geq \left(\int_0^\pi f(x) \sin x dx\right)^2.

Proof: Let {\mu} be the measure {\frac{1}{2}\sin x \, dx} on {X = [0,\pi]}. Then {(X, \mu)} is a probability space, {f} is Lebesgue-integrable on {X}, and {t \mapsto t^2} is a convex function {\mathbf R \rightarrow \mathbf R}. By Jensen’s inequality,

\displaystyle \int_0^\pi f(x)^2 d\mu \geq \left(\int_0^\pi f(x) d\mu\right)^2.

Multiplying throughout by {4} we get the claimed inequality.
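As a quick numerical sanity check (no part of the proof), one can approximate both sides of the inequality for a few sample functions with a midpoint rule; the helper names below are my own.

```python
import math

def riemann(g, a, b, n=20000):
    """Midpoint-rule approximation of the integral of g over [a, b]."""
    h = (b - a) / n
    return sum(g(a + (k + 0.5) * h) for k in range(n)) * h

def sides(f):
    """Return (LHS, RHS) of 2*int f^2 sin x dx >= (int f sin x dx)^2 on [0, pi]."""
    lhs = 2 * riemann(lambda x: f(x) ** 2 * math.sin(x), 0, math.pi)
    rhs = riemann(lambda x: f(x) * math.sin(x), 0, math.pi) ** 2
    return lhs, rhs

# The inequality holds for each sample function, as Jensen predicts.
for f in (lambda x: x, math.exp, math.cos, lambda x: x ** 2 - 1):
    lhs, rhs = sides(f)
    assert lhs >= rhs - 1e-9, (lhs, rhs)
```

For {f(x) = x}, for instance, the right-hand side is {\pi^2} and the left-hand side is {2(\pi^2-4)}, consistent with the inequality.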

Exercise 2 Let {T} be a linear operator on a finite-dimensional vector space {V}. (a) Prove that if every one-dimensional subspace of {V} is {T}-invariant, then {T} is a scalar multiple of the identity operator. (b) Prove that if every codimension-one subspace of {V} is {T}-invariant, then {T} is a scalar multiple of the identity operator.

Proof: (a) The hypothesis means that every nonzero vector of {V} is an eigenvector of {T}. Suppose {v_1, v_2} are eigenvectors of {T} with eigenvalues {\lambda_1}, {\lambda_2}. If {v_1} and {v_2} are linearly dependent, then {\lambda_1 = \lambda_2} trivially. Otherwise, {v_1+v_2} is, by assumption, also an eigenvector, say with eigenvalue {\lambda}; comparing coefficients in {\lambda v_1 + \lambda v_2 = T(v_1+v_2) = \lambda_1 v_1 + \lambda_2 v_2} and using the independence of {v_1} and {v_2}, we get {\lambda = \lambda_1 = \lambda_2}. Therefore all the eigenvalues coincide and {T} is a multiple of the identity operator.
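To see the key step concretely, here is a tiny check (with a diagonal matrix of my own choosing) that the sum of eigenvectors with distinct eigenvalues fails to be an eigenvector:

```python
# T = diag(1, 2) on R^2: e1 and e2 are eigenvectors with distinct eigenvalues.
def T(v):
    x, y = v
    return (1 * x, 2 * y)

def is_eigenvector(v):
    """Check whether nonzero v satisfies T(v) = lam * v for some scalar lam."""
    x, y = v
    tx, ty = T(v)
    # v and T(v) are proportional iff the 2x2 determinant |v  T(v)| vanishes.
    return x * ty - y * tx == 0

assert is_eigenvector((1, 0)) and is_eigenvector((0, 1))
assert not is_eigenvector((1, 1))   # e1 + e2 is not an eigenvector
```

So a non-scalar operator always misses some one-dimensional invariant subspace, which is the contrapositive of (a).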

(b) Let {T^\vee} be the dual operator on {V^\vee}. We claim that {T^\vee} satisfies the condition of {(a)}. First, we have the following:

Lemma 1 Two functionals {f, g : V \rightarrow k} (where {k} is the ground field) have the same kernel if and only if they are multiples of each other.

Proof: The claim is trivial if either of {f} or {g} is {0} (in that case, having the same kernel forces both to be {0}), so suppose neither is {0}. Recall that if {W \subseteq V^\vee} and we define {\mathrm{Ann}(W) = \{v \in V : w(v) = 0\: \: \forall w \in W\}}, then we have a canonical isomorphism {\mathrm{Ann}(W) \cong (V^\vee/W)^\vee}, which in particular implies {\dim \mathrm{Ann}(W) = \mathrm{codim}(W\subseteq V^\vee)}. Applying this to {W=\left<f,g\right>}, we have, under our assumption,

\displaystyle \mathrm{Ann}(W) = \ker f \cap \ker g = \ker f = \ker g

which has codimension {1} since {f,g \neq 0}. Therefore {W} has dimension {1}, and {f} and {g} are scalar multiples of each other. \Box

Now, back to {(b)}. Suppose that {0 \neq f \in V^\vee}. Then {\ker f} has codimension {1} in {V}, and therefore, under the hypothesis of (b), {T(\ker f) \subseteq \ker f}. This implies {\ker T^\vee(f) \supseteq \ker f}; indeed, if {v \in \ker f}, then {T^\vee(f)(v) = f(Tv) = 0} since {Tv \in \ker f}. Since {\ker f} has codimension {1}, we either have equality, or {T^\vee(f) = 0}. If there is equality, then {T^\vee(f)} and {f} have the same kernel, and therefore, by Lemma 1, they are proportional, i.e. {f} is an eigenvector of {T^\vee}. If {T^\vee(f)=0}, then {f} is trivially an eigenvector of {T^\vee}. In every case, we see that {f} is an eigenvector of {T^\vee}. By (a), {T^\vee} is a multiple of the identity operator, and therefore so is {T}. \Box
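The duality at work here can be watched in coordinates: identifying {V^\vee} with row vectors, the dual operator is given by the transpose matrix, and a hyperplane {\ker f} is invariant exactly when {f} is an eigenvector of the transpose. A small sketch, with a non-scalar matrix of my own choosing:

```python
# A is a non-scalar operator on R^2; its transpose represents the dual operator.
A = [[2, 1],
     [0, 2]]

def apply(M, v):
    return tuple(sum(M[i][j] * v[j] for j in range(2)) for i in range(2))

def transpose(M):
    return [[M[j][i] for j in range(2)] for i in range(2)]

def proportional(u, v):
    return u[0] * v[1] - u[1] * v[0] == 0

def hyperplane_invariant(f):
    """Is ker f (a line in R^2) carried into itself by A?
    For f = (a, b), ker f is spanned by k = (-b, a)."""
    k = (-f[1], f[0])
    return proportional(apply(A, k), k)

# f = (0, 1): ker f = span{e1} is A-invariant, and f is an eigenvector of A^T.
assert hyperplane_invariant((0, 1))
assert proportional(apply(transpose(A), (0, 1)), (0, 1))

# f = (1, 0): ker f = span{e2} is not A-invariant, and f is no eigenvector of A^T.
assert not hyperplane_invariant((1, 0))
assert not proportional(apply(transpose(A), (1, 0)), (1, 0))
```

Since {A} is not scalar, some hyperplane fails to be invariant, exactly as (b) predicts.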

Exercise 3 Let {T} be a linear operator on a finite-dimensional inner product space {V}.

  • (a) Define what is meant by the adjoint {T^*} of {T}.
  • (b) Prove that {\ker T^* = \mathrm{im}(T)^\perp}.
  • (c) If {T} is normal, prove that {\ker T = \ker T^*}. Give an example where the equality fails (and, of course, {T} is not normal).


  • (a) It is the unique linear operator {T^*} on {V} such that {\left<Tv, w\right> = \left<v, T^*w\right>} for every {v, w \in V}.
  • (b) Indeed,

    \displaystyle  \begin{array}{rcl}  v \in \ker T^* &\Leftrightarrow& \left<w, T^*v\right> = 0 \: \forall w \in V \\ &\Leftrightarrow& \left<Tw, v\right> = 0 \: \forall w \in V \\ &\Leftrightarrow& v \perp Tw\: \forall w \in V \\ &\Leftrightarrow& v \in \mathrm{im}(T)^\perp. \end{array}

  • (c) A normal operator is one which commutes with its adjoint, i.e. {TT^* = T^*T}. Thus,

    \displaystyle  \begin{array}{rcl}  v \in \ker T^* &\Leftrightarrow& \left<T^*v, T^*v\right> = 0\\ &\Leftrightarrow& \left<TT^*v, v\right> = 0 \\ &\Leftrightarrow& \left<T^*Tv, v\right> = 0 \\ &\Leftrightarrow& \left<Tv, Tv\right> = 0\\ &\Leftrightarrow& Tv=0. \end{array}

    An example where the equality fails is supplied by the operator {T=\left(\begin{array}{ll} 0 & 1 \\ 0 & 0 \end{array}\right)} acting on {(\mathbf R^2, \bullet)} in the standard way. Here {T^*=\left(\begin{array}{ll} 0 & 0 \\ 1 & 0 \end{array}\right)}, and the vector {(1,0)} is in the kernel of {T} but not of {T^*}.
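Both (b) and (c) are easy to check in coordinates: over {\mathbf R} with the dot product, the adjoint is just the transpose. Here is a small sanity check with a nilpotent (hence non-normal) matrix of my own choosing:

```python
# T = [[0, 1], [0, 0]] on R^2 with the dot product; T^* is the transpose.
T = [[0, 1],
     [0, 0]]
Tstar = [[0, 0],
         [1, 0]]

def apply(M, v):
    return tuple(sum(M[i][j] * v[j] for j in range(2)) for i in range(2))

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# T is not normal: T T^* != T^* T.
assert matmul(T, Tstar) != matmul(Tstar, T)

# (c) fails: e1 = (1, 0) lies in ker T but not in ker T^*.
assert apply(T, (1, 0)) == (0, 0)
assert apply(Tstar, (1, 0)) != (0, 0)

# (b) holds regardless of normality: im(T) = span{(1, 0)},
# so im(T)^perp = span{(0, 1)}, which is exactly ker T^*.
assert apply(Tstar, (0, 1)) == (0, 0)
```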


Ph.D. Comprehensive exam practice problems, Round 1

In May, I will be taking the qualifying exams for my Ph.D. Over the next few weeks, I will be posting practice problems and my solutions to them. Until the end of February, I will be reviewing linear algebra, single-variable real analysis, complex analysis and multivariable calculus. In March and April, I will be focusing on algebra, geometry and topology.

Here are three problems to start.

Problem: Suppose that {A} is an {n \times n} real matrix with {n} distinct real eigenvalues. Show that {A} can be written in the form {\sum_{j=1}^n \lambda_j I_j} where each {\lambda_j} is a real number and the {I_j} are {n\times n} real matrices with {\sum_{j=1}^n I_j = I}, and {I_jI_l = 0} if {j \neq l}. Give a {2 \times 2} real matrix {A} for which such a decomposition is not possible and justify your answer.

Solution: For each {j}, let {E_j} denote the matrix with a {1} in the entry {(j,j)} and zeroes everywhere else. Then {\sum_j E_j = I} and {E_jE_l= 0} when {j\neq l}. Since {A} has {n} distinct real eigenvalues {\lambda_1, \dots, \lambda_n}, it is diagonalizable over {\mathbf R}, so there is an invertible real matrix {P} such that {P^{-1}AP = D}, where {D=\mathrm{diag}(\lambda_1, \dots, \lambda_n) = \sum_j \lambda_j E_j}. Let {I_j = PE_jP^{-1}}. Then

\displaystyle \sum_j \lambda_j I_j = P\left(\sum_j \lambda_j E_j\right) P^{-1} = PDP^{-1} = A.

Moreover, for {j \neq l} we have {I_jI_l = PE_jE_lP^{-1} = 0}.
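The construction {I_j = PE_jP^{-1}} can be verified concretely. Below I use a symmetric matrix of my own choosing, with eigenvalues {3} and {1} and hand-computed eigenvectors, and check all three properties of the decomposition in exact rational arithmetic:

```python
from fractions import Fraction as F

def matmul(M, N):
    n = len(M)
    return [[sum(M[i][k] * N[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# A has eigenvalues 3 and 1 with eigenvectors (1, 1) and (1, -1).
A = [[F(2), F(1)], [F(1), F(2)]]
lams = [F(3), F(1)]
P = [[F(1), F(1)], [F(1), F(-1)]]                 # columns are eigenvectors
Pinv = [[F(1, 2), F(1, 2)], [F(1, 2), F(-1, 2)]]  # inverse, computed by hand

E = [[[F(1), F(0)], [F(0), F(0)]],                # E_1
     [[F(0), F(0)], [F(0), F(1)]]]                # E_2
I_ = [matmul(matmul(P, Ej), Pinv) for Ej in E]    # I_j = P E_j P^{-1}

# sum_j I_j = I
S = [[I_[0][i][j] + I_[1][i][j] for j in range(2)] for i in range(2)]
assert S == [[F(1), F(0)], [F(0), F(1)]]

# I_j I_l = 0 for j != l
assert matmul(I_[0], I_[1]) == [[F(0), F(0)], [F(0), F(0)]]

# sum_j lambda_j I_j = A
R = [[lams[0] * I_[0][i][j] + lams[1] * I_[1][i][j] for j in range(2)]
     for i in range(2)]
assert R == A
```

Note that each {I_j} comes out as the projection onto the corresponding eigenline, which is the geometric content of the decomposition.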

For the second part, notice that if the matrix {A} is decomposed in the manner described above, the numbers {\lambda_j} are necessarily eigenvalues of {A}. Indeed, multiplying the equality {\sum I_j = I} by {I_l} and using that {I_lI_j = 0} when {l \neq j}, we find that {I_l^2=I_l}. Now let {v \in \mathbf R^n} be any nonzero vector. Since {\sum_j I_j v = v}, at least one of the terms in the sum is nonzero, say {I_l v \neq 0}. Then

\displaystyle AI_lv = \sum_j \lambda_j I_j I_lv = \lambda_l I_l^2v = \lambda_l I_lv,

and therefore {I_lv} is an eigenvector of {A} with eigenvalue {\lambda_l}. Thus, it is impossible for the matrix {A} to have such a decomposition if, say, it has no real eigenvalues, for example

\displaystyle A=\left(\begin{array}{ll} 0 & -1 \\ 1 & 0 \end{array}\right).
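That this rotation matrix has no real eigenvalues is a one-line computation: its characteristic polynomial is {x^2 - (\mathrm{tr}\, A)x + \det A = x^2 + 1}, whose discriminant is negative.

```python
# Characteristic polynomial of [[0, -1], [1, 0]] is x^2 - tr*x + det = x^2 + 1.
tr = 0 + 0
det = 0 * 0 - (-1) * 1
disc = tr * tr - 4 * det
assert disc < 0  # no real roots, hence no real eigenvalues
```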
