MATH 257


MATH 257 is clearly a little bit easier.


Notes for MATH 257

1 Introduction to Vectors

1.1 Vectors

  • We use $\boldsymbol{v}=\begin{bmatrix}v_1\\v_2\end{bmatrix}$ to represent a vector
  • Linear Combinations: $c\boldsymbol{v}+d\boldsymbol{w}$ is a typical linear combination of the vectors $\boldsymbol{v}$ and $\boldsymbol{w}$

1.2 Lengths and Dot Products

  • Lengths: $\|\boldsymbol{v}\|=\sqrt{\boldsymbol{v}\cdot\boldsymbol{v}}=\sqrt{v_1^2+\cdots+v_n^2}$
  • Dot product: $\boldsymbol{v}\cdot\boldsymbol{w}=v_1w_1+\cdots+v_nw_n$; $\boldsymbol{v}$ and $\boldsymbol{w}$ must have the same number of components.
  • Angle: the angle $\theta$ between $\boldsymbol{v}$ and $\boldsymbol{w}$ has $\cos\theta=\dfrac{\boldsymbol{v}\cdot\boldsymbol{w}}{\|\boldsymbol{v}\|\,\|\boldsymbol{w}\|}$. (A quick numerical check is sketched below.)
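
A minimal NumPy check of these formulas; the vectors are made-up illustrative values:

```python
# Length, dot product, and angle, following the definitions above.
import numpy as np

v = np.array([3.0, 4.0])
w = np.array([4.0, 3.0])

length_v = np.sqrt(v @ v)                    # ||v|| = sqrt(v . v) = 5.0
dot_vw = v @ w                               # v . w = 24.0
cos_theta = dot_vw / (np.linalg.norm(v) * np.linalg.norm(w))

print(length_v, dot_vw, np.degrees(np.arccos(cos_theta)))
```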

2 Matrices & Elimination

2.1 Matrix

  • Rows and Columns: We call $A$ an $m$ by $n$ ($m\times n$) matrix, with $m$ rows and $n$ columns; e.g. a 3 by 2 matrix has 3 rows and 2 columns.
  • Multiplication: If $C=AB$ (with $A$ being $m\times n$ and $B$ being $n\times p$), then $c_{ij}=\sum_{k=1}^{n}a_{ik}b_{kj}$
  • Identity Matrix: $I=\begin{bmatrix}1&&\\&\ddots&\\&&1\end{bmatrix}$, $IA=AI=A$

2.2 Elimination

    • $U$: Upper Triangular System, all $u_{ij}=0$ for $i>j$.
    • $L$: Lower Triangular System, all $l_{ij}=0$ for $i<j$.
    • $E_{ij}$: Elimination matrix; $E_{ij}A$ subtracts the multiplier $\ell_{ij}=a_{ij}/a_{jj}$ times row $j$ from row $i$.
    • In $A=LU$, we require nonzero pivots and a row exchange ($PA=LU$) when a pivot position holds a zero.
    • Example (illustrative numbers): $E_{21}A=\begin{bmatrix}1&0\\-3&1\end{bmatrix}\begin{bmatrix}2&1\\6&8\end{bmatrix}=\begin{bmatrix}2&1\\0&5\end{bmatrix}=U$
  • Augmented Matrix: $[\,A\mid\boldsymbol{b}\,]$
  • Elimination of Augmented Matrix:
    • $[\,A\mid\boldsymbol{b}\,]\to[\,U\mid\boldsymbol{c}\,]$, then back-substitution solves $U\boldsymbol{x}=\boldsymbol{c}$. (A short code sketch follows.)
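
A minimal sketch of forward elimination plus back-substitution, assuming no row exchanges are needed (all pivots nonzero); the matrix and right side are made-up values:

```python
# Reduce [A | b] to [U | c], then back-substitute to solve Ux = c.
import numpy as np

def solve_by_elimination(A, b):
    M = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
    n = len(b)
    for j in range(n):                       # for each pivot column
        for i in range(j + 1, n):            # eliminate entries below the pivot
            m = M[i, j] / M[j, j]            # multiplier l_ij
            M[i] -= m * M[j]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):           # back-substitution on Ux = c
        x[i] = (M[i, -1] - M[i, i + 1:n] @ x[i + 1:]) / M[i, i]
    return x

A = np.array([[2.0, 1.0], [6.0, 8.0]])
b = np.array([5.0, 22.0])
print(solve_by_elimination(A, b))            # compare: np.linalg.solve(A, b)
```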

3 Inverse & Transpose

3.1 Definition

  • Inverse
    • For a square matrix $A$, the inverse matrix $A^{-1}$ satisfies $A^{-1}A=I$ or $AA^{-1}=I$
    • If $A$ has an $A^{-1}$, then $A$ is invertible / non-singular; otherwise $A$ is singular.
    • For square $A$, the left inverse equals the right inverse: $A^{-1}A=AA^{-1}=I$
  • Transpose
    • $A^T$'s row $i$ is the same as $A$'s column $i$: $(A^T)_{ij}=A_{ji}$

3.2 Calculate

  • Gauss-Jordan Elimination
    • For $A^{-1}$, make an augmented matrix $[\,A\mid I\,]$ and eliminate until the left half becomes $I$: $[\,A\mid I\,]\to[\,I\mid A^{-1}\,]$
    • Example (illustrative numbers):
      • Do Gauss elimination downwards: $\left[\begin{array}{cc|cc}2&1&1&0\\4&3&0&1\end{array}\right]\to\left[\begin{array}{cc|cc}2&1&1&0\\0&1&-2&1\end{array}\right]$
      • Do Jordan elimination upwards, then scale the pivots to $1$: $\to\left[\begin{array}{cc|cc}2&0&3&-1\\0&1&-2&1\end{array}\right]\to\left[\begin{array}{cc|cc}1&0&3/2&-1/2\\0&1&-2&1\end{array}\right]$
      • So $A^{-1}=\begin{bmatrix}3/2&-1/2\\-2&1\end{bmatrix}$. (A code sketch follows.)
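
A minimal Gauss-Jordan sketch, assuming $A$ is invertible and no row exchanges are needed; the matrix matches the illustrative example above:

```python
# Eliminate [A | I] down and up until the left half is I; the right half is A^{-1}.
import numpy as np

def gauss_jordan_inverse(A):
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])   # augmented matrix [A | I]
    for j in range(n):
        M[j] /= M[j, j]                           # scale pivot row so pivot = 1
        for i in range(n):
            if i != j:
                M[i] -= M[i, j] * M[j]            # clear column j above and below
    return M[:, n:]

A = np.array([[2.0, 1.0], [4.0, 3.0]])
print(gauss_jordan_inverse(A))                    # compare: np.linalg.inv(A)
```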

4 Space

4.1 Definition

  • Every linear combination of vectors in a space (or subspace) is still in that space (or subspace).
  • e.g. the plane $z=0$ is a subspace of $\mathbb{R}^3$, obviously.
  • The space $\mathbb{R}^n$ consists of all vectors $\boldsymbol{v}$ with $n$ components
  • $c\boldsymbol{v}+d\boldsymbol{w}\in S$ for any $\boldsymbol{v},\boldsymbol{w}\in S$ and any scalars $c,d$; $S$ can be a space or a subspace.
  • The zero vector $\boldsymbol{0}$ is a point in every space and every subspace

4.2 Span

  • The space spanned by vectors $\boldsymbol{v}_1,\dots,\boldsymbol{v}_k$ consists of all linear combinations of these vectors; it must be $\mathbb{R}^n$ or a subspace of $\mathbb{R}^n$
  • Example: the span of $(1,0,0)$ and $(0,1,0)$ is the $xy$-plane inside $\mathbb{R}^3$

4.3 Independence, Basis, Rank and Dimension

  • Linearly independent: $c_1\boldsymbol{v}_1+\cdots+c_k\boldsymbol{v}_k=\boldsymbol{0}$ only for $c_1=\cdots=c_k=0$
  • Basis: basis vectors are linearly independent and span the space. A basis is not unique.
  • Rank
    • Definition: The rank of $A$ is the number of pivots. This number is $r=\dim C(A)=\dim C(A^T)$, i.e. also the dimension of the column space and of the row space. ($r\le\min(m,n)$)
    • Pivots and Free variables:
      • For an $m\times n$ matrix $A$ with rank $r$:
        • The number of pivots $=r$
        • The number of free variables $=n-r$
  • Dimension: $\mathbb{R}^n$ has dimension $n$; every basis of a space has the same number of vectors, and that number is the dimension.

4.4 Spaces of Matrix

  • For an $m\times n$ matrix $A$ of rank $r$, there are four fundamental subspaces:
  • $C(A)$: Column Space of $A$, a subspace of $\mathbb{R}^m$ with dimension $r$
  • $C(A^T)$: Row Space of $A$, a subspace of $\mathbb{R}^n$ with dimension $r$
  • $N(A)$: Nullspace of $A$, a subspace of $\mathbb{R}^n$ with dimension $n-r$
    • All solutions $\boldsymbol{x}$ to $A\boldsymbol{x}=\boldsymbol{0}$
  • $N(A^T)$: Left Nullspace of $A$, a subspace of $\mathbb{R}^m$ with dimension $m-r$
    • All solutions $\boldsymbol{y}$ to $A^T\boldsymbol{y}=\boldsymbol{0}$
  • For an $m\times n$ matrix: $\dim C(A)+\dim N(A)=n$ and $\dim C(A^T)+\dim N(A^T)=m$

4.5 Solve $A\boldsymbol{x}=\boldsymbol{b}$

  • First, solve $A\boldsymbol{x}=\boldsymbol{0}$ for the special solutions $\boldsymbol{x}_n$
    • $R$: Reduced Row Echelon Form
      • $R$ has all pivots $=1$, with zeros above and below
    • We can use Elimination (to $U$ or $R$) to solve it; each free variable gives one special solution.
  • Then, solve $A\boldsymbol{x}_p=\boldsymbol{b}$ for one particular solution; the complete solution is $\boldsymbol{x}=\boldsymbol{x}_p+\boldsymbol{x}_n$. (A code sketch using the RREF follows.)
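
A minimal SymPy sketch of this route; the matrix and right side are made-up illustrative values (rank 2, with a solvable $\boldsymbol{b}$):

```python
# Solve Ax = b via the RREF: rank, nullspace, and the complete solution.
from sympy import Matrix

A = Matrix([[1, 2, 2, 2],
            [2, 4, 6, 8],
            [3, 6, 8, 10]])
b = Matrix([1, 5, 6])

R, pivot_cols = A.rref()             # R has pivots = 1, zeros above and below
print(R, pivot_cols)                 # rank r = len(pivot_cols); free vars = n - r

print(A.nullspace())                 # special solutions of Ax = 0

x, params = A.gauss_jordan_solve(b)  # complete solution x = x_p + x_n
print(x, params)                     # x is parametric in the free variables
```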

5 Orthogonality

Goal: to solve $A\boldsymbol{x}=\boldsymbol{b}$ when there is no exact solution.

5.1 Definition

  • Orthogonal vectors: $\boldsymbol{v}^T\boldsymbol{w}=0$
  • Orthogonal subspaces: every vector in one subspace is orthogonal to every vector in the other, e.g. $C(A^T)\perp N(A)$ and $C(A)\perp N(A^T)$

5.2 Projection

  • $P$, the Projection matrix, is used to project a vector onto another space: $\boldsymbol{p}=P\boldsymbol{b}$
  • 1-D Projection (onto the line through $\boldsymbol{a}$): $P=\dfrac{\boldsymbol{a}\boldsymbol{a}^T}{\boldsymbol{a}^T\boldsymbol{a}}$
    • Example: projecting $\boldsymbol{b}=(1,1)$ onto $\boldsymbol{a}=(1,0)$ gives $\boldsymbol{p}=\dfrac{\boldsymbol{a}^T\boldsymbol{b}}{\boldsymbol{a}^T\boldsymbol{a}}\,\boldsymbol{a}=(1,0)$
  • n-D Projection (onto $C(A)$)
    • When $A^TA$ is invertible: $P=A(A^TA)^{-1}A^T$
    • When $A^TA$ is singular: drop dependent columns of $A$ first (or use the pseudoinverse, $P=AA^{+}$)
  • $\boldsymbol{e}=\boldsymbol{b}-\boldsymbol{p}$, the error of the projection; $\boldsymbol{e}$ is orthogonal to the space projected onto

5.3 Application: Least Squares

  • Example: for $A\boldsymbol{x}=\boldsymbol{b}$ with no solution (more equations than unknowns)
    • Solution: solve the normal equations $A^TA\hat{\boldsymbol{x}}=A^T\boldsymbol{b}$ instead
    • Then, solve it, and find $\hat{\boldsymbol{x}}$; the best fit is $\boldsymbol{p}=A\hat{\boldsymbol{x}}$, with error $\boldsymbol{e}=\boldsymbol{b}-\boldsymbol{p}$. (See the sketch below.)
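
A minimal least-squares sketch: fit a line $y=c+dt$ to made-up points by solving the normal equations:

```python
# Least squares via A^T A x = A^T b, checked against np.linalg.lstsq.
import numpy as np

t = np.array([0.0, 1.0, 2.0])
y = np.array([6.0, 0.0, 0.0])                 # illustrative data points

A = np.column_stack([np.ones_like(t), t])     # columns: [1, t]
x_hat = np.linalg.solve(A.T @ A, A.T @ y)     # normal equations
print(x_hat)                                  # best c, d: [5., -3.]

print(np.linalg.lstsq(A, y, rcond=None)[0])   # same answer from the library
```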

5.4 Orthogonal Bases

  • Orthonormal basis: $\{\boldsymbol{q}_1,\dots,\boldsymbol{q}_n\}$ is called an orthonormal basis iff $\boldsymbol{q}_i^T\boldsymbol{q}_j=\begin{cases}0,&i\neq j\\1,&i=j\end{cases}$
  • Orthogonal matrix: $Q^TQ=I$, so $Q^{-1}=Q^T$
    • $Q$ is square, with orthonormal columns

6 Determinants

6.1 Property

  • Only square matrices ($n\times n$) have determinants.
  • The determinant of $A$ ($\det A$ or $|A|$) measures the "volume" spanned by the rows of $A$.
    • For example, for a 2 by 2 matrix you can think of $|\det A|$ as the area of the parallelogram spanned by its two rows.
  • $\det A=0\iff A$ is singular / non-invertible
  • $\det A\neq0\iff A$ is invertible
  • Row/Column exchange reverses the sign.
    • For example: $\begin{vmatrix}c&d\\a&b\end{vmatrix}=-\begin{vmatrix}a&b\\c&d\end{vmatrix}$

6.2 Calculate

  • $P$: a row exchange (permutation) matrix, with $\det P=\pm1$
  • With pivots $d_1,d_2,\dots,d_n$ from elimination:
  • Pivot Formula: $\det A=\pm\,d_1d_2\cdots d_n$ (sign decided by the row exchanges)
  • Big Formula
    • $\det A=\sum_{\sigma}\operatorname{sign}(\sigma)\,a_{1\sigma(1)}a_{2\sigma(2)}\cdots a_{n\sigma(n)}$, total $n!$ terms
  • Cofactor Expansion
    • Cofactor: $C_{ij}=(-1)^{i+j}\det((A')_{ij})$, where $(A')_{ij}$ is $A$ with row $i$ and column $j$ removed.
    • Cofactor Expansion (along row $i$): $\det A=a_{i1}C_{i1}+a_{i2}C_{i2}+\cdots+a_{in}C_{in}$
    • Example: $\begin{vmatrix}a&b\\c&d\end{vmatrix}=aC_{11}+bC_{12}=ad-bc$. (A recursive code sketch follows.)
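
A minimal recursive cofactor-expansion sketch (along the first row); exponential time, so only for tiny matrices, and the test matrix is made up:

```python
# det(A) by cofactor expansion, checked against np.linalg.det.
import numpy as np

def det_cofactor(A):
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):                         # expand along row 0
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0, j] * det_cofactor(minor)
    return total

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
print(det_cofactor(A))                         # 4.0, same as np.linalg.det(A)
```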

6.3 Application

  • Cramer’s Rule solves $A\boldsymbol{x}=\boldsymbol{b}$
    • For $x_j$, we use $B_j$ = $A$ with column $j$ replaced by $\boldsymbol{b}$, so $x_j=\dfrac{\det B_j}{\det A}$
  • Inverse
    • We can put each column of $I$ into $\boldsymbol{b}$ and use Cramer’s Rule. In the end, we will get: $(A^{-1})_{ij}=\dfrac{C_{ji}}{\det(A)}$, i.e. $A^{-1}=\dfrac{C^T}{\det(A)}$
    • Direct proof:
      • $A\,C^T=\det(A)\,I$, which means: each diagonal entry of $AC^T$ is a cofactor expansion of $\det A$, and each off-diagonal entry is the expansion of a matrix with a repeated row, hence $0$.

7 Eigenvalues and Eigenvectors

7.1 Introduction

  • Definition: eigenvector $\boldsymbol{x}$ and eigenvalue $\lambda$ with $A\boldsymbol{x}=\lambda\boldsymbol{x}$ ($\boldsymbol{x}\neq\boldsymbol{0}$)
  • For the same $\lambda$ and $\boldsymbol{x}$: $A^k\boldsymbol{x}=\lambda^k\boldsymbol{x}$
  • Calculate: $\det(A-\lambda I)=0$ gives $\lambda$, and $(A-\lambda I)\boldsymbol{x}=\boldsymbol{0}$ gives $\boldsymbol{x}$. (See the sketch below.)
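
A minimal NumPy check of $A\boldsymbol{x}=\lambda\boldsymbol{x}$; the matrix is a made-up example:

```python
# Eigenvalues/eigenvectors and a direct verification of Ax = lambda x.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

lams, X = np.linalg.eig(A)        # eigenvalues, eigenvectors (columns of X)
print(lams)                       # [3., 1.] for this A

for k in range(len(lams)):        # verify A x = lambda x for each pair
    x = X[:, k]
    print(np.allclose(A @ x, lams[k] * x))
```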

7.2 Matrix Diagonalization

For $A$ ($n\times n$) with $n$ independent eigenvectors $\boldsymbol{x}_1,\dots,\boldsymbol{x}_n$ and eigenvalues $\lambda_1,\dots,\lambda_n$: $A=X\Lambda X^{-1}$, where the columns of $X$ are the eigenvectors and $\Lambda=\operatorname{diag}(\lambda_1,\dots,\lambda_n)$

  • Use of diagonalization: $A^k=X\Lambda^kX^{-1}$
    • Example: Let $A=X\Lambda X^{-1}$, find $A^{100}$
      • $A^{100}=(X\Lambda X^{-1})(X\Lambda X^{-1})\cdots(X\Lambda X^{-1})=X\Lambda^{100}X^{-1}$, so only the diagonal powers $\lambda_i^{100}$ need computing. (A code sketch follows.)
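
A minimal sketch of $A^k=X\Lambda^kX^{-1}$; the matrix and the power are made-up values:

```python
# Matrix powers via diagonalization, checked against matrix_power.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
k = 10

lams, X = np.linalg.eig(A)
A_k = X @ np.diag(lams ** k) @ np.linalg.inv(X)          # X Lambda^k X^{-1}

print(np.allclose(A_k, np.linalg.matrix_power(A, k)))    # True
```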

7.3 Differential equations

For $\dfrac{d\boldsymbol{u}}{dt}=A\boldsymbol{u}$ with $\boldsymbol{u}(0)$ given:

  • Identify: the matrix $A$
  • Find $\lambda_i,\boldsymbol{x}_i$ of $A$
  • Use $\boldsymbol{u}(t)=c_1e^{\lambda_1t}\boldsymbol{x}_1+\cdots+c_ne^{\lambda_nt}\boldsymbol{x}_n$, and then we get the $c_i$ from $\boldsymbol{u}(0)$
    • Why? Each term solves the equation: $\dfrac{d}{dt}\!\left(e^{\lambda t}\boldsymbol{x}\right)=\lambda e^{\lambda t}\boldsymbol{x}=A\!\left(e^{\lambda t}\boldsymbol{x}\right)$, and sums of solutions are still solutions.

Example (illustrative numbers): $\dfrac{d\boldsymbol{u}}{dt}=\begin{bmatrix}-1&2\\1&-2\end{bmatrix}\boldsymbol{u}$ with $\boldsymbol{u}(0)=\begin{bmatrix}3\\0\end{bmatrix}$

  • Identify: $A=\begin{bmatrix}-1&2\\1&-2\end{bmatrix}$
  • Find $\lambda,\boldsymbol{x}$ of $A$: $\lambda_1=0,\ \boldsymbol{x}_1=\begin{bmatrix}2\\1\end{bmatrix}$ and $\lambda_2=-3,\ \boldsymbol{x}_2=\begin{bmatrix}1\\-1\end{bmatrix}$
  • $\boldsymbol{u}(t)=c_1\boldsymbol{x}_1+c_2e^{-3t}\boldsymbol{x}_2$; from $\boldsymbol{u}(0)$ we get $c_1=c_2=1$
    • When $t\to\infty$, we get the steady state $\boldsymbol{u}\to\begin{bmatrix}2\\1\end{bmatrix}$. (A code sketch follows.)
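
A minimal sketch of solving $\boldsymbol{u}'=A\boldsymbol{u}$ by eigendecomposition, using the same illustrative $A$ and $\boldsymbol{u}(0)$ as the worked example above:

```python
# u(t) = sum_i c_i e^{lambda_i t} x_i, with c solved from u(0) = X c.
import numpy as np

A = np.array([[-1.0,  2.0],
              [ 1.0, -2.0]])
u0 = np.array([3.0, 0.0])

lams, X = np.linalg.eig(A)
c = np.linalg.solve(X, u0)            # coefficients from the initial condition

def u(t):
    return X @ (c * np.exp(lams * t))

print(u(0.0))                         # [3., 0.]
print(u(10.0))                        # approaches the steady state (2, 1)
```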

7.4 Application

Higher order linear differential equations

Example: $y''+by'+ky=0$

  • Let $\boldsymbol{u}=\begin{bmatrix}y'\\y\end{bmatrix}$, so $\boldsymbol{u}'=\begin{bmatrix}-b&-k\\1&0\end{bmatrix}\boldsymbol{u}$; then $y=c_1e^{\lambda_1t}+c_2e^{\lambda_2t}$, where $c_1,c_2$ depend on the initial conditions

Markov Matrix

  • Definition: For a Markov matrix $M$, every entry $m_{ij}\ge0$ and every column sums to $1$.
  • It is used for probability calculations in state transitions: $\lambda=1$ is always an eigenvalue, and its eigenvector (scaled to sum to $1$) is the steady state. (See the sketch below.)
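
A minimal Markov-chain sketch; the column-stochastic transition matrix is made up:

```python
# Steady state = eigenvector of M for lambda = 1; repeated transitions agree.
import numpy as np

M = np.array([[0.9, 0.2],
              [0.1, 0.8]])            # columns sum to 1, entries >= 0

lams, X = np.linalg.eig(M)
k = np.argmin(np.abs(lams - 1.0))     # pick the eigenvalue closest to 1
steady = X[:, k] / X[:, k].sum()      # scale so the probabilities sum to 1
print(steady)                         # [2/3, 1/3]

p = np.array([1.0, 0.0])              # any starting distribution
for _ in range(100):
    p = M @ p                         # repeated state transitions
print(p)                              # also approaches [2/3, 1/3]
```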

7.5 Symmetric Matrix

  • Definition: $S=S^T$
  • Symmetric $S=Q\Lambda Q^T$ (the spectral theorem)
    • eigenvalues are real numbers
    • eigenvectors can be chosen as orthonormal ($Q^TQ=I$, so $Q^{-1}=Q^T$)
    • any two eigenvectors with different eigenvalues have $\boldsymbol{x}_1^T\boldsymbol{x}_2=0$

7.6 Positive Definite Matrix (PD)

  • Definition: For a Symmetric Matrix $S$, it is PD iff $\boldsymbol{x}^TS\boldsymbol{x}>0$ for all $\boldsymbol{x}\neq\boldsymbol{0}$ (equivalently: all eigenvalues $>0$, or all pivots $>0$)
  • Application
    • Example: is $f(x,y)=ax^2+2bxy+cy^2$ always positive?
      • $f=\begin{bmatrix}x&y\end{bmatrix}\begin{bmatrix}a&b\\b&c\end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix}=\boldsymbol{x}^TS\boldsymbol{x}$; if $S$ is PD (here: $a>0$ and $ac-b^2>0$), then $f>0$ for all $(x,y)\neq(0,0)$. (A quick numerical check is sketched below.)
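
A minimal positive-definiteness check; the symmetric matrix is made up:

```python
# Two equivalent PD tests: all eigenvalues > 0, or a Cholesky factor exists.
import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 2.0]])                 # symmetric: S = S^T

print(np.all(np.linalg.eigvalsh(S) > 0))  # True -> S is positive definite

try:
    np.linalg.cholesky(S)                  # S = L L^T exists iff S is PD
    print("PD")
except np.linalg.LinAlgError:
    print("not PD")
```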

8 SVD: Singular Value Decomposition

8.1 SVD

  • For any $m\times n$ matrix $A$, find $A=U\Sigma V^T$ where:
    • $U$: $m\times m$ with orthonormal columns (eigenvectors of $AA^T$)
    • $\Sigma$: $m\times n$; the non-negative numbers $\sigma_1\ge\sigma_2\ge\cdots$ on the diagonal are called the singular values of $A$ ($\sigma_i=\sqrt{\lambda_i}$, where the $\lambda_i$ are the eigenvalues of $A^TA$ or $AA^T$)
    • $V$: $n\times n$ with orthonormal columns (eigenvectors of $A^TA$)
  • Example (illustrative numbers)
    • Compute the SVD for $A=\begin{bmatrix}3&0\\4&5\end{bmatrix}$: here $A^TA=\begin{bmatrix}25&20\\20&25\end{bmatrix}$
      • non-zero eigenvalues: $\lambda_1=45,\ \lambda_2=5$, so $\sigma_1=\sqrt{45},\ \sigma_2=\sqrt{5}$. (A code sketch follows.)
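
A minimal NumPy SVD sketch, using the illustrative $A$ from the example above:

```python
# A = U Sigma V^T; singular values squared are the eigenvalues of A^T A.
import numpy as np

A = np.array([[3.0, 0.0],
              [4.0, 5.0]])

U, s, Vt = np.linalg.svd(A)
print(s)                                      # [sqrt(45), sqrt(5)]
print(s ** 2)                                 # [45., 5.]

print(np.allclose(A, U @ np.diag(s) @ Vt))    # True: reconstruction check
```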

8.2 Pseudoinverse

  • Definition: $A^{+}$ is $n\times m$ with $A^{+}A\boldsymbol{x}=\boldsymbol{x}$ for every $\boldsymbol{x}$ in the row space. Given the compact SVD $A=U_r\Sigma_rV_r^T$, we define the pseudoinverse $A^{+}=V_r\Sigma_r^{-1}U_r^T$
  • Example:
    • Application:
      • For invertible $A$: $A^{+}=A^{-1}$, and $\boldsymbol{x}=A^{+}\boldsymbol{b}$ is the solution to $A\boldsymbol{x}=\boldsymbol{b}$
      • For singular $A$ or unsolvable systems: $\hat{\boldsymbol{x}}=A^{+}\boldsymbol{b}$ is the least-squares solution of minimum norm. (See the sketch below.)
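
A minimal pseudoinverse sketch; $A$ and $\boldsymbol{b}$ are made-up values for an unsolvable (overdetermined) system:

```python
# x_hat = A^+ b, the minimum-norm least-squares solution.
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 1.0, 3.0])        # no exact solution

A_plus = np.linalg.pinv(A)           # built from the SVD: V Sigma^+ U^T
print(A_plus @ b)                    # [4/3, 4/3]

# Same answer as the normal equations here (A has independent columns):
print(np.linalg.solve(A.T @ A, A.T @ b))
```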

8.3 PCA: Principal Component Analysis

  • Given a data matrix $X$ ($n$ samples $\times$ $d$ features)
  • Step 1: Data Standardization. Subtract the mean of each column so that the data has mean 0.
  • Step 2: Calculate the Covariance Matrix after standardization: $C=\dfrac{1}{n-1}X^TX$
  • Step 3: Compute eigenvalues / eigenvectors of $C$ ($C=Q\Lambda Q^T$, since $C$ is symmetric)
  • Step 4: Select the eigenvector corresponding to the largest eigenvalue
    • It is the first principal component direction (the direction of maximum variance)
  • Step 5: Data Projection. Project the standardized data onto the principal component direction. (A compact sketch follows.)
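
A minimal PCA-from-scratch sketch following the five steps above; the data matrix is made up ($n=5$ samples, $d=2$ features):

```python
# PCA via the covariance matrix's eigen-decomposition.
import numpy as np

X = np.array([[2.5, 2.4],
              [0.5, 0.7],
              [2.2, 2.9],
              [1.9, 2.2],
              [3.1, 3.0]])

Xc = X - X.mean(axis=0)              # Step 1: center each column
C = (Xc.T @ Xc) / (len(X) - 1)       # Step 2: covariance matrix
lams, Q = np.linalg.eigh(C)          # Step 3: eigenvalues/vectors (C symmetric)
pc1 = Q[:, np.argmax(lams)]          # Step 4: top eigenvector
scores = Xc @ pc1                    # Step 5: project data onto pc1
print(pc1, scores)
```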

QWQ

