

Linear algebra

Linear algebra is an important branch of mathematics. The numpy.linalg package contains linear algebra functions. With this module, you can invert matrices, calculate eigenvalues, solve linear equations, and compute determinants, among other things.
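
For instance, determinants can be computed with the det function of the same package. Here is a minimal sketch, using a small matrix chosen only for illustration:

import numpy as np

# 2 x 2 example; its determinant is 1 * 4 - 2 * 3 = -2
A = np.mat("1 2;3 4")
print "Determinant of A", np.linalg.det(A)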

Time for action – inverting matrices

The inverse of a matrix A in linear algebra is the matrix A^-1, which, when multiplied by the original matrix, gives the identity matrix I. This can be written as A * A^-1 = I.

The inv function in the numpy.linalg package can do this for us. Let’s invert an example matrix. To invert matrices, perform the following steps:

  1. We will create the example matrix with the mat function.

    A = np.mat("0 1 2;1 0 3;4 -3 8") print "An", A

    The A matrix is printed as follows:

    A
    [[ 0  1  2]
     [ 1  0  3]
     [ 4 -3  8]]

  2. Now we can see the inv function in action; we will use it to invert the matrix.

    inverse = np.linalg.inv(A)
    print "inverse of A\n", inverse

    The inverse matrix is shown as follows:

    inverse of A
    [[-4.5  7.  -1.5]
     [-2.   4.  -1. ]
     [ 1.5 -2.   0.5]]

    If the matrix is singular or not square, a LinAlgError exception is raised (a short sketch of handling this exception follows these steps). If you want, you can check the result manually; this is left as an exercise for the reader.

  3. Let’s check what we get when we multiply the original matrix with the result of the inv function:

    print "Checkn", A * inverse

    The result is the identity matrix, as expected.

    Check
    [[ 1. 0. 0.]
    [ 0. 1. 0.]
    [ 0. 0. 1.]]
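
As mentioned in step 2, inv raises a numpy.linalg.LinAlgError when the matrix cannot be inverted. A minimal sketch of catching this exception, using a deliberately singular matrix as an illustration:

import numpy as np

# The second row is twice the first, so this matrix is singular
B = np.mat("1 2;2 4")

try:
    print np.linalg.inv(B)
except np.linalg.LinAlgError as e:
    print "Inversion failed:", e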

What just happened?

We calculated the inverse of a matrix with the inv function of the numpy.linalg package. We checked, with matrix multiplication, whether this is indeed the inverse matrix.

import numpy as np
A = np.mat("0 1 2;1 0 3;4 -3 8")
print "An", A
inverse = np.linalg.inv(A)
print "inverse of An", inverse
print "Checkn", A * inverse

Solving linear systems

A matrix transforms a vector into another vector in a linear way. This transformation mathematically corresponds to a system of linear equations. The solve function in numpy.linalg solves systems of linear equations of the form Ax = b; here, A is a matrix, b can be a one-dimensional or two-dimensional array, and x is the unknown. We will also see the dot function in action. This function returns the dot product of two floating-point arrays.
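
As a quick illustration of dot before it is used below, the following sketch computes the dot product of two small arrays (the values are arbitrary):

import numpy as np

# 1 * 3 + 2 * 4 = 11
print "Dot product", np.dot(np.array([1.0, 2.0]), np.array([3.0, 4.0]))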

Time for action – solving a linear system

Let’s solve an example of a linear system. To solve a linear system, perform the following steps:

  1. Let’s create the matrices A and b.

    iA = np.mat("1 -2 1;0 2 -8;-4 5 9")
    print "An", A
    b = np.array([0, 8, -9])
    print "bn", b

    The matrices A and b are shown as follows:

    A
    [[ 1 -2  1]
     [ 0  2 -8]
     [-4  5  9]]
    b
    [ 0  8 -9]

  2. Solve this linear system by calling the solve function.

    x = np.linalg.solve(A, b)
    print "Solution", x

    The following is the solution of the linear system:

    Solution [ 29. 16. 3.]

  3. Check whether the solution is correct with the dot function.

    print "Checkn", np.dot(A , x)

    The result is as expected:

    Check
    [[ 0. 8. -9.]]

What just happened?

We solved a linear system using the solve function from the NumPy linalg module and checked the solution with the dot function.

import numpy as np
A = np.mat("1 -2 1;0 2 -8;-4 5 9")
print "An", A
b = np.array([0, 8, -9])

print "bn", b
x = np.linalg.solve(A, b)
print "Solution", x
print "Checkn", np.dot(A , x)

Finding eigenvalues and eigenvectors

Eigenvalues are the scalars a for which the equation Ax = ax has a nonzero solution, where A is a two-dimensional matrix and x is a one-dimensional vector. Eigenvectors are the vectors corresponding to the eigenvalues. The eigvals function in the numpy.linalg package calculates eigenvalues. The eig function returns a tuple containing eigenvalues and eigenvectors.

Time for action – determining eigenvalues and eigenvectors

Let’s calculate the eigenvalues of a matrix. Perform the following steps to do so:

  1. Create a matrix as follows:

    A = np.mat("3 -2;1 0")
    print "An", A

    The matrix we created looks like the following:

    A
    [[ 3 -2]
    [ 1 0]]

  2. Calculate the eigenvalues by calling the eigvals function.

    print "Eigenvalues", np.linalg.eigvals(A)

    The eigenvalues of the matrix are as follows:

    Eigenvalues [ 2. 1.]

  3. Determine the eigenvalues and eigenvectors with the eig function. This function returns a tuple, where the first element contains the eigenvalues and the second element contains the corresponding eigenvectors, arranged column-wise and normalized to unit length.

    eigenvalues, eigenvectors = np.linalg.eig(A)
    print "First tuple of eig", eigenvalues
    print "Second tuple of eign", eigenvectors

    The eigenvalues and eigenvectors will be shown as follows:

    First tuple of eig [ 2. 1.]
    Second tuple of eig
    [[ 0.89442719 0.70710678]
    [ 0.4472136 0.70710678]]

  4. Check the result with the dot function by calculating the right- and left-hand sides of the eigenvalues equation Ax = ax.

    for i in range(len(eigenvalues)):
        print "Left", np.dot(A, eigenvectors[:, i])
        print "Right", eigenvalues[i] * eigenvectors[:, i]
        print

    The output is as follows:

    Left [[ 1.78885438]
    [ 0.89442719]]
    Right [[ 1.78885438]
    [ 0.89442719]]
    Left [[ 0.70710678]
    [ 0.70710678]]
    Right [[ 0.70710678]
    [ 0.70710678]]

What just happened?

We found the eigenvalues and eigenvectors of a matrix with the eigvals and eig functions of the numpy.linalg module. We checked the result using the dot function.

import numpy as np
A = np.mat("3 -2;1 0")
print "An", A
print "Eigenvalues", np.linalg.eigvals(A)
eigenvalues, eigenvectors = np.linalg.eig(A)
print "First tuple of eig", eigenvalues
print "Second tuple of eign", eigenvectors
for i in range(len(eigenvalues)):
print "Left", np.dot(A, eigenvectors[:,i])
print "Right", eigenvalues[i] * eigenvectors[:,i]
print
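
Instead of comparing the left- and right-hand sides by eye, the check can be automated with np.allclose. A minimal sketch, assuming the variables from the listing above:

for i in range(len(eigenvalues)):
    # True when A x equals a x within floating-point tolerance
    print "Match", np.allclose(np.dot(A, eigenvectors[:, i]),
                               eigenvalues[i] * eigenvectors[:, i])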

Singular value decomposition

Singular value decomposition is a type of factorization that decomposes a matrix into a product of three matrices. The singular value decomposition is a generalization of the previously discussed eigenvalue decomposition. The svd function in the numpy.linalg package can perform this decomposition. This function returns three matrices – U, Sigma, and V – such that U and V are orthogonal and Sigma contains the singular values of the input matrix:

M = U Sigma V*

The asterisk denotes the Hermitian conjugate or the conjugate transpose.

Time for action – decomposing a matrix

It’s time to decompose a matrix with the singular value decomposition. In order to decompose a matrix, perform the following steps:

  1. First, create a matrix as follows:

    A = np.mat("4 11 14;8 7 -2")
    print "An", A

    The matrix we created looks like the following:

    A
    [[ 4 11 14]
    [ 8 7 -2]]

  2. Decompose the matrix with the svd function.

    U, Sigma, V = np.linalg.svd(A, full_matrices=False)
    print "U"
    print U
    print "Sigma"
    print Sigma
    print "V"
    print V

    The result is a tuple containing the two orthogonal matrices U and V on the left- and right-hand sides and the singular values of the middle matrix.

    U
    [[-0.9486833  -0.31622777]
     [-0.31622777  0.9486833 ]]
    Sigma
    [ 18.97366596   9.48683298]
    V
    [[-0.33333333 -0.66666667 -0.66666667]
     [ 0.66666667  0.33333333 -0.66666667]]

  3. We do not actually have the middle matrix as such; the svd function returns only its diagonal values. The other values in the middle matrix are all 0. We can form the full middle matrix with the diag function and then multiply the three matrices, as follows:

    print "Productn", U * np.diag(Sigma) * V

    The product of the three matrices looks like the following:

    Product
    [[ 4. 11. 14.]
    [ 8. 7. -2.]]

What just happened?

We decomposed a matrix and checked the result by matrix multiplication. We used the svd function from the NumPy linalg module.

import numpy as np
A = np.mat("4 11 14;8 7 -2")
print "An", A
U, Sigma, V = np.linalg.svd(A, full_matrices=False)
print "U"
print U
print "Sigma"
print Sigma
print "V"
print V
print "Productn", U * np.diag(Sigma) * V

Pseudoinverse

The Moore-Penrose pseudoinverse of a matrix can be computed with the pinv function of the numpy.linalg module (visit http://en.wikipedia.org/wiki/Moore%E2%80%93Penrose_pseudoinverse). The pseudoinverse is calculated using the singular value decomposition. The inv function only accepts square matrices; the pinv function does not have this restriction.
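
A minimal sketch of pinv, reusing the non-square matrix from the previous example purely as an illustration; multiplying the original matrix with its pseudoinverse should give a result close to the identity matrix:

import numpy as np

A = np.mat("4 11 14;8 7 -2")
pseudoinv = np.linalg.pinv(A)
print "Pseudo inverse\n", pseudoinv
print "Check\n", A * pseudoinv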
