
SOLUTIONS MANUAL FOR


Finite Dimensional Linear Algebra

Mark S. Gockenbach
Michigan Technological University

Contents

Errata for the first printing

2 Fields and vector spaces
  2.1 Fields
  2.2 Vector spaces
  2.3 Subspaces
  2.4 Linear combinations and spanning sets
  2.5 Linear independence
  2.6 Basis and dimension
  2.7 Properties of bases
  2.8 Polynomial interpolation and the Lagrange basis
  2.9 Continuous piecewise polynomial functions

3 Linear operators
  3.1 Linear operators
  3.2 More properties of linear operators
  3.3 Isomorphic vector spaces
  3.4 Linear operator equations
  3.5 Existence and uniqueness of solutions
  3.6 The fundamental theorem; inverse operators
  3.7 Gaussian elimination
  3.8 Newton's method
  3.9 Linear ordinary differential equations
  3.10 Graph theory
  3.11 Coding theory
  3.12 Linear programming

4 Determinants and eigenvalues
  4.1 The determinant function
  4.2 Further properties of the determinant function
  4.3 Practical computation of det(A)
  4.5 Eigenvalues and the characteristic polynomial
  4.6 Diagonalization
  4.7 Eigenvalues of linear operators
  4.8 Systems of linear ODEs
  4.9 Integer programming

5 The Jordan canonical form
  5.1 Invariant subspaces
  5.2 Generalized eigenspaces
  5.3 Nilpotent operators
  5.4 The Jordan canonical form of a matrix
  5.5 The matrix exponential
  5.6 Graphs and eigenvalues

6 Orthogonality and best approximation
  6.1 Norms and inner products
  6.2 The adjoint of a linear operator
  6.3 Orthogonal vectors and bases
  6.4 The projection theorem
  6.5 The Gram-Schmidt process
  6.6 Orthogonal complements
  6.7 Complex inner product spaces
  6.8 More on polynomial approximation
  6.9 The energy inner product and Galerkin's method
  6.10 Gaussian quadrature
  6.11 The Helmholtz decomposition

7 The spectral theory of symmetric matrices
  7.1 The spectral theorem for symmetric matrices
  7.2 The spectral theorem for normal matrices
  7.3 Optimization and the Hessian matrix
  7.4 Lagrange multipliers
  7.5 Spectral methods for differential equations

8 The singular value decomposition
  8.1 Introduction to the SVD
  8.2 The SVD for general matrices
  8.3 Solving least-squares problems using the SVD
  8.4 The SVD and linear inverse problems
  8.5 The Smith normal form of a matrix

9 Matrix factorizations and numerical linear algebra
  9.1 The LU factorization
  9.2 Partial pivoting
  9.3 The Cholesky factorization
  9.4 Matrix norms
  9.5 The sensitivity of linear systems to errors
  9.6 Numerical stability
  9.7 The sensitivity of the least-squares problem
  9.8 The QR factorization
  9.9 Eigenvalues and simultaneous iteration
  9.10 The QR algorithm

10 Analysis in vector spaces
  10.1 Analysis in R^n
  10.2 Infinite-dimensional vector spaces
  10.3 Functional analysis
  10.4 Weak convergence

Errata for the first printing

The following corrections will be made in the second printing of the text, expected in 2011. The solutions manual is written as if they have already been made.

Page 65, Exercise 14: belongs in Section 2.7.
Page 65, Exercise 16: should read “(cf. Exercise 2.3.21)”, not “(cf. Exercise 2.2.21)”.
Page 71, Exercise 9(b): “Z_4^5” should be “Z_5^4”.
Page 72, Exercise 11: “over V” should be “over F”.
Page 72, Exercise 15: “i = 1, 2, ..., k” should be “j = 1, 2, ..., k” (twice).
Page 79, Exercise 1: “x_3 = 2” should be “x_3 = 3”.
Page 82, Exercise 14(a): “Each A_i and B_i has degree 2n + 1” should read “A_i, B_i ∈ P_{2n+1} for all i = 0, 1, ..., n”.
Page 100, Exercise 11: “K : C[a, b] → C[a, b]” should be “K : C[c, d] → C[a, b]”.
Page 114, line 9: “L : F^n → R^m” should be “L : F^n → F^m”.
Page 115, Exercise 8:

  S = {(1, 0, 0), (0, 1, 0), (0, 0, 1)} X = {(1, 1, 1), (0, 1, 1), (0, 0, 1)}

should be

  S = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}, X = {(1, 1, 1), (0, 1, 1), (0, 0, 1)}.

Page 116, Exercise 17(b): “F^{mn}” should be “F^{m×n}”.
Page 121, Exercise 3: “T : R^4 → R^3” should be “T : R^4 → R^4”.
Page 124, Exercise 15: “T : X/ker(L) → R(U)” should be “T : X/ker(L) → R(L)”.

Page 124, Exercise 15:

  T([x]) = T(x) for all [x] ∈ X/ker(L)

should be

  T([x]) = L(x) for all [x] ∈ X/ker(L).

Page 129, Exercise 4(b): Period is missing at the end of the sentence.
Page 130, Exercise 8: “L : Z_3^3 → Z_3^3” should read “L : Z_5^3 → Z_5^3”.
Page 130, Exercise 13(b): “T defines ...” should be “S defines ...”.
Page 131, Exercise 15: “K : C[a, b] × C[c, d] → C[a, b]” should be “K : C[c, d] → C[a, b]”.

Page 138, Exercise 7(b): “define” should be “defined”.
Page 139, Exercise 12: In the last line, “sp{x_1, x_2, ..., x_n}” should be “sp{x_1, x_2, ..., x_k}”.
Page 139, Exercise 12: The proposed plan for the proof is not valid. Instead, the instructions should read: Choose vectors x_1, ..., x_k ∈ X such that {T(x_1), ..., T(x_k)} is a basis for R(T), and choose a basis {y_1, ..., y_ℓ} for ker(T). Prove that {x_1, ..., x_k, y_1, ..., y_ℓ} is a basis for X. (Hint: First show that ker(T) ∩ sp{x_1, ..., x_k} is trivial.)

Page 140, Exercise 15: In the displayed equation, “|A_ii” should be “|A_ii|”.
Page 168: Definition 132 defines the adjacency matrix of a graph, not the incidence matrix (which is something different). The correct term (adjacency matrix) is used throughout the rest of the section. (Change “incidence” to “adjacency” in three places: the title of Section 3.10.1, page 168 line −2, page 169 line 1.)

Page 199, Equation (3.41d): “x_1, x_2 ≤ 0” should be “x_1, x_2 ≥ 0”.
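The sign flip in the last erratum restores the usual nonnegativity convention for a linear program in standard form, which is also what numerical solvers assume by default. A minimal sketch with a generic LP (not the one in the text) using scipy.optimize.linprog, whose default variable bounds are already x ≥ 0:

```python
from scipy.optimize import linprog

# Generic example LP: maximize x1 + 2*x2 subject to
#   x1 + x2 <= 4,  x1 <= 3,  x1, x2 >= 0.
# linprog minimizes, so negate the objective; its default bounds are
# (0, None) per variable, i.e. exactly the x1, x2 >= 0 constraint.
res = linprog(c=[-1.0, -2.0],
              A_ub=[[1.0, 1.0],
                    [1.0, 0.0]],
              b_ub=[4.0, 3.0])

print(res.x, -res.fun)   # optimum at (0, 4), objective value 8
```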
