Gaussian Elimination

Let's say we have a system of equations,

$$
\begin{aligned}
2x + 3y + 4z &= 6 \\
x + 2y + 3z &= 4 \\
3x - 4y + 0z &= 10,
\end{aligned}
$$

and we want to solve for $x$, $y$, and $z$. Well, one way to do this is with Gaussian Elimination, which you may have encountered before in a math class or two.

The first step is to transform the system of equations into a matrix by using the coefficients in front of each variable, where each row corresponds to another equation and each column corresponds to an independent variable like $x$, $y$, or $z$. For the previous system of equations, this might look like this:

$$
\begin{array}{ccc|c}
x & y & z & \\
2 & 3 & 4 & 6 \\
1 & 2 & 3 & 4 \\
3 & -4 & 0 & 10
\end{array}
$$

Or more simply:

$$
\left[
\begin{array}{ccc|c}
2 & 3 & 4 & 6 \\
1 & 2 & 3 & 4 \\
3 & -4 & 0 & 10
\end{array}
\right]
$$

At first, translating the set of equations into a matrix like this doesn't seem to help with anything, so let's think of this in another way.

Row Echelon Form

Instead of the complicated mess of equations shown above, imagine if the system looked like this:

$$
\begin{aligned}
2x + 3y + 4z &= 6 \\
\tfrac{1}{2}y + z &= 1 \\
11z &= 18
\end{aligned}
$$

Then we could just solve for $z$ and plug that value into the top two equations to solve for $y$ and $x$ through a process known as back-substitution. In matrix form, this set of equations would look like this:

$$
\left[
\begin{array}{ccc|c}
2 & 3 & 4 & 6 \\
0 & \tfrac{1}{2} & 1 & 1 \\
0 & 0 & 11 & 18
\end{array}
\right]
$$

This matrix form has a particular name: Row Echelon Form. Basically, a matrix is in row echelon form if the leading coefficient or pivot (the first non-zero element in each row when reading from left to right) lies strictly to the right of the pivot of the row above it.

This creates a matrix that sometimes resembles an upper-triangular matrix; however, that doesn't mean that all row-echelon matrices are upper-triangular. For example, all of the following matrices are in row echelon form:

$$
\left[
\begin{array}{ccc|c}
1 & 2 & 3 & 4 \\
0 & 5 & 6 & 7 \\
0 & 0 & 8 & 9
\end{array}
\right]
\quad
\left[
\begin{array}{ccc|c}
1 & 2 & 3 & 4 \\
0 & 5 & 6 & 7 \\
0 & 0 & 0 & 1
\end{array}
\right]
\quad
\left[
\begin{array}{ccc|c}
1 & 2 & 3 & 4 \\
0 & 5 & 6 & 7
\end{array}
\right]
\quad
\left[
\begin{array}{ccc|c}
1 & 2 & 3 & 4 \\
0 & 5 & 6 & 7 \\
0 & 0 & 8 & 9 \\
0 & 0 & 0 & 0
\end{array}
\right]
$$

The first two of these have the right dimensions to find a solution to a system of equations; however, the last two matrices are respectively under- and over-constrained, meaning they do not provide an appropriate solution to a system of equations. That said, this doesn't mean that every matrix in the correct form can be solved either. For example, if you translate the second matrix into a system of equations again, the last row translates into $0x + 0y + 0z = 1$, which is a contradiction. This is due to the fact that the matrix is singular, and there are no solutions to this particular system. Nevertheless, all of these matrices are in row echelon form.

Reduced Row Echelon Form

Row echelon form is nice, but wouldn't it be even better if our system of equations looked simply like this:

$$
\begin{aligned}
x &= \tfrac{18}{11} \\
y &= -\tfrac{14}{11} \\
z &= \tfrac{18}{11}
\end{aligned}
$$

Then we would know exactly what $x$, $y$, and $z$ are without any fuss! In matrix form, it looks like this:

$$
\left[
\begin{array}{ccc|c}
1 & 0 & 0 & \tfrac{18}{11} \\
0 & 1 & 0 & -\tfrac{14}{11} \\
0 & 0 & 1 & \tfrac{18}{11}
\end{array}
\right]
$$

This introduces yet another matrix configuration: Reduced Row Echelon Form. A matrix is in reduced row echelon form if it satisfies the following conditions:

  1. It is in row echelon form.
  2. Every pivot is 1 and is the only nonzero entry in its column.

All the following examples are in reduced row echelon form:

$$
\left[
\begin{array}{ccc|c}
1 & 0 & 0 & 8 \\
0 & 1 & 0 & -3 \\
0 & 0 & 1 & 5
\end{array}
\right]
\quad
\left[
\begin{array}{ccc|c}
1 & 0 & 2 & 3 \\
0 & 1 & 1 & 4 \\
0 & 0 & 0 & 0
\end{array}
\right]
\quad
\left[
\begin{array}{cc|c}
1 & 2 & 3 \\
0 & 0 & 0
\end{array}
\right]
$$

Again, only the first of these (the one that looks like an identity matrix) is desirable in the context of solving a system of equations, but transforming any matrix into this form gives us an immediate and definitive answer to the question: can I solve my system of equations?

Beyond solving a system of equations, reshaping a matrix in this form makes it very easy to deduce other properties of the matrix, such as its rank — the maximum number of linearly independent columns. In reduced row echelon form, the rank is simply the number of pivots.
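
For example, a reduced matrix like the following has pivots in only two of its columns, so its rank is 2:

$$
\left[
\begin{array}{ccc|c}
1 & 0 & 2 & 3 \\
0 & 1 & 1 & 4 \\
0 & 0 & 0 & 0
\end{array}
\right]
$$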

For now, I hope the motivation is clear: we want to convert a matrix into row echelon and then reduced row echelon form to make large systems of equations trivial to solve, so we need some method to do that. In general, the term Gaussian Elimination refers to the process of transforming a matrix into row echelon form, and the process of transforming a row echelon matrix into reduced row echelon form is called Gauss-Jordan Elimination. That said, the notation here is sometimes inconsistent. Several authors use the term Gaussian Elimination to include Gauss-Jordan elimination as well. In addition, the process of Gauss-Jordan elimination is sometimes called Back-substitution, which is also confusing because the term can also be used to mean solving a system of equations from row echelon form, without simplifying to reduced row echelon form. For this reason, we will be using the following definitions in this chapter:

  • Gaussian Elimination: The process of transforming a matrix into row echelon form
  • Gauss-Jordan Elimination: The process of transforming a row echelon matrix into reduced row echelon form
  • Back-substitution: The process of directly solving a row echelon matrix, without transforming into reduced row echelon form

The Analytical Method

Gaussian elimination is inherently analytical and can be done by hand for small systems of equations; however, for large systems, this (of course) becomes tedious, and we will need to find an appropriate numerical solution. For this reason, I have split this section into two parts. One will cover the analytical framework, and the other will cover an algorithm you can write in your favorite programming language.

In the end, reducing large systems of equations boils down to a game you play on a seemingly random matrix with 3 possible moves. You can:

  1. Swap any two rows.
  2. Multiply any row by a non-zero scale value.
  3. Add a multiple of any row to any other row.

That's it. Before continuing, I suggest you try to recreate the row echelon matrix we made above. That is, do the following:

$$
\left[
\begin{array}{ccc|c}
2 & 3 & 4 & 6 \\
1 & 2 & 3 & 4 \\
3 & -4 & 0 & 10
\end{array}
\right]
\quad \rightarrow \quad
\left[
\begin{array}{ccc|c}
2 & 3 & 4 & 6 \\
0 & \tfrac{1}{2} & 1 & 1 \\
0 & 0 & 11 & 18
\end{array}
\right]
$$

There are plenty of different strategies you could use to do this, and no one strategy is better than the rest. One method is to subtract a multiple of the top row from each row below it such that all values beneath the pivot value become zero. This can be repeated for each pivot and is often easier if you swap some rows around first; one such elimination step is shown below.
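
For instance, using move 3 on the original matrix, subtracting $\tfrac{1}{2}$ of the first row from the second row zeroes out the value beneath the first pivot:

$$
\left[
\begin{array}{ccc|c}
2 & 3 & 4 & 6 \\
1 & 2 & 3 & 4 \\
3 & -4 & 0 & 10
\end{array}
\right]
\quad \rightarrow \quad
\left[
\begin{array}{ccc|c}
2 & 3 & 4 & 6 \\
0 & \tfrac{1}{2} & 1 & 1 \\
3 & -4 & 0 & 10
\end{array}
\right]
$$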

After you get a row echelon matrix, the next step is to find the reduced row echelon form. In other words, we do the following:

$$
\left[
\begin{array}{ccc|c}
2 & 3 & 4 & 6 \\
0 & \tfrac{1}{2} & 1 & 1 \\
0 & 0 & 11 & 18
\end{array}
\right]
\quad \rightarrow \quad
\left[
\begin{array}{ccc|c}
1 & 0 & 0 & \tfrac{18}{11} \\
0 & 1 & 0 & -\tfrac{14}{11} \\
0 & 0 & 1 & \tfrac{18}{11}
\end{array}
\right]
$$

Here, the idea is similar to above and the same rules apply. In this case, we might start from the right-most pivot and subtract upwards instead of downwards, as in the example below.
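
For example, dividing the bottom row by $11$ and then subtracting the appropriate multiples of it from the rows above clears the last coefficient column:

$$
\left[
\begin{array}{ccc|c}
2 & 3 & 4 & 6 \\
0 & \tfrac{1}{2} & 1 & 1 \\
0 & 0 & 11 & 18
\end{array}
\right]
\quad \rightarrow \quad
\left[
\begin{array}{ccc|c}
2 & 3 & 0 & -\tfrac{6}{11} \\
0 & \tfrac{1}{2} & 0 & -\tfrac{7}{11} \\
0 & 0 & 1 & \tfrac{18}{11}
\end{array}
\right]
$$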

The Computational Method

The analytical method for Gaussian Elimination may seem straightforward, but the computational method does not obviously follow from the "game" we were playing before. Ultimately, the computational method boils down to two separate steps and has a complexity of $\mathcal{O}(n^3)$.
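
As a rough operation count (a back-of-the-envelope estimate rather than a formal analysis): for each of the roughly $n$ pivots, we update on the order of $n$ rows, each with on the order of $n$ entries, giving

$$
\sum_{k=1}^{n} (n-k)(n-k+1) \approx \frac{n^3}{3}
$$

multiply-and-subtract operations, hence the cubic scaling.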

As a note, this process iterates through all the rows in the provided matrix. When we say "current row" (curr_row), we mean the specific row iteration number we are on at that time, and as before, the "pivot" corresponds to the first non-zero element in that row.

Step 1

Among the elements in the pivot column at or below the current row, find the one with the largest absolute value and swap its row with the current row. The pivot is then considered to be the first element of the newly swapped current row.

For example, in the following matrix, the largest value in the first column is the $3$ in the bottom row:

$$
\left[
\begin{array}{ccc|c}
2 & 3 & 4 & 6 \\
1 & 2 & 3 & 4 \\
3 & -4 & 0 & 10
\end{array}
\right]
$$

After finding this value, we simply swap the row with the $3$ into the current row:

$$
\left[
\begin{array}{ccc|c}
3 & -4 & 0 & 10 \\
1 & 2 & 3 & 4 \\
2 & 3 & 4 & 6
\end{array}
\right]
$$

In this case, the new pivot is $3$.

In code, this process might look like this:

# finding the maximum element for each column
max_index = argmax(abs.(A[row:end,col])) + row-1

# Check to make sure matrix is good!
if (A[max_index, col] == 0)
    println("matrix is singular!")
    continue
end

# swap row with highest value for that column to the top
temp_vector = A[max_index, :]
A[max_index, :] = A[row, :]
A[row, :] = temp_vector
// finding the maximum element
for (int i = row + 1; i < rows; i++) {
    if (Math.abs(a[i][col]) > Math.abs(a[pivot][col])) {
        pivot = i;
    }
}

if (a[pivot][col] == 0) {
    System.err.println("The matrix is singular");
    continue;
}

if (row != pivot) {
    // Swap the row with the highest valued element
    // with the current row
    swapRow(a, col, pivot);
}
void swap_rows(double *a, const size_t i, const size_t pivot,
               const size_t cols) {

    for (size_t j = 0; j < cols; ++j) {
        double tmp = a[i * cols + j];
        a[i * cols + j] = a[pivot * cols + j];
        a[pivot * cols + j] = tmp;
    }
}
size_t pivot = row;

for (size_t i = row + 1; i < rows; ++i) {
    if (fabs(a[i * cols + col]) > fabs(a[pivot * cols + col])) {
        pivot = i;
    }
}

if (a[pivot * cols + col] == 0) {
    printf("The matrix is singular.\n");
    continue;
}

if (row != pivot) {
    swap_rows(a, row, pivot, cols);
}
  std::size_t pivot = i;

for (std::size_t j = i + 1; j < rows; j++) {
  if (fabs(eqns[j][i]) > fabs(eqns[pivot][i])) pivot = j;
}

if (eqns[pivot][i] == 0.0)
  continue;  // But continuing to simplify the matrix as much as possible

if (i != pivot)  // Swapping the rows if new row with higher maxVals is found
  std::swap(eqns[pivot], eqns[i]);  // C++ swap function
swapRows :: Int -> Int -> Matrix a -> Matrix a
swapRows r1 r2 m
  | r1 == r2 = m
  | otherwise =
    m //
    concat [[((r2, c), m ! (r1, c)), ((r1, c), m ! (r2, c))] | c <- [c1 .. cn]]
  where
    ((_, c1), (_, cn)) = bounds m
(target, pivot) =
  maximumBy (compare `on` abs . snd) [(k, m ! (k, c)) | k <- [r .. rn]]
m' = swapRows r target m
let pivot = row;
for (let i = row + 1; i < rows; ++i) {
  if (Math.abs(a[i][col]) > Math.abs(a[pivot][col])) {
    pivot = i;
  }
}

if (a[pivot][col] === 0) {
  console.log("The matrix is singular.");
  continue;
}

if (row !== pivot) {
  const t = a[row];
  a[row] = a[pivot];
  a[pivot] = t;
}
// 1. find highest value in column below row to be pivot
p, highest := r, 0.
for i, row := range a[r:] {
    if abs := math.Abs(row[c]); abs > highest {
        p = r + i
        highest = abs
    }
}
highest = a[p][c] // correct sign

if highest == 0. {
    if !singular {
        singular = true
        fmt.Println("This matrix is singular.")
    }
    continue
}
# Find the index of the largest element in this column, at or below the pivot row
max_i = np.argmax(abs(A[pivot_row:, pivot_col])) + pivot_row

temp = A[pivot_row, :].copy()
A[pivot_row, :] = A[max_i, :]
A[max_i, :] = temp

# Skip on singular matrix,  not actually a pivot
if A[pivot_row, pivot_col] == 0:
    continue
    // find the maximum element for this column
    let mut max_row = k;
    let mut max_value = a[(k, k)].abs();
    for row in (k + 1)..a.rows {
        if max_value < a[(row, k)].abs() {
            max_value = a[(row, k)].abs();
            max_row = row;
        }
    }

    // Check to make sure the matrix is good
    if a[(max_row, k)] == 0.0 {
        println!("Matrix is singular, aborting");
        return;
    }

    // swap the row with the highest value for this column to the top
    a.swap_rows(k, max_row);

    // Loop over all remaining rows
    for i in k + 1..a.rows {
        // find the fraction
        let fraction = a[(i, k)] / a[(k, k)];

        // Loop through all columns for that row
        for j in (k + 1)..a.cols {
            // re-evaluate each element
            a[(i, j)] -= a[(k, j)] * fraction;
        }

        // set lower elements to 0
        a[(i, k)] = 0.0;
    }
}

As a note, if the highest value is $0$, the matrix is singular and the system has no single solution. This makes sense because if the largest absolute value in a column (at or below the current row) is 0, every remaining entry in that column must also be 0, thus there can be no unique solution when we read the matrix as a set of equations. That said, Gaussian elimination is more general and allows us to continue, even if the matrix is not necessarily solvable as a set of equations. Feel free to exit after finding a $0$ pivot if your end-goal is to solve a system of equations.

Step 2

For each row beneath the current pivot row, look at the value in the pivot column and find the fraction given by the ratio of that value to the pivot itself. After this, subtract the current pivot row, multiplied by that fraction, from each corresponding element of the row. This process essentially subtracts an optimal multiple of the current row from each row underneath (similar to move 3 from the above game). Ideally, this should always create a 0 under the current row's pivot value.

For example, in the matrix below, the value beneath the pivot in the next row is $1$ and the pivot value is $3$, so the fraction is $\frac{1}{3}$:

$$
\left[
\begin{array}{ccc|c}
3 & -4 & 0 & 10 \\
1 & 2 & 3 & 4 \\
2 & 3 & 4 & 6
\end{array}
\right]
$$

After finding the fraction, we simply subtract $\frac{1}{3} \times (\text{current pivot row})$ from the second row, like so:

$$
\left[
\begin{array}{ccc|c}
3 & -4 & 0 & 10 \\
0 & \tfrac{10}{3} & 3 & \tfrac{2}{3} \\
2 & 3 & 4 & 6
\end{array}
\right]
$$

After this, repeat the process for all other rows.
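
Continuing with the same example, the fraction for the last row is $\frac{2}{3}$, and subtracting $\frac{2}{3}$ of the pivot row clears the rest of the first column:

$$
\left[
\begin{array}{ccc|c}
3 & -4 & 0 & 10 \\
0 & \tfrac{10}{3} & 3 & \tfrac{2}{3} \\
0 & \tfrac{17}{3} & 4 & -\tfrac{2}{3}
\end{array}
\right]
$$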

Here is what it might look like in code:

# Loop for all remaining rows
for i = (row+1):rows

    # finding fraction
    fraction = A[i,col]/A[row,col]

    # loop through all columns for that row
    for j = (col+1):cols

         # re-evaluate each element
         A[i,j] -= A[row,j]*fraction

    end
for (int i = row + 1; i < rows; i++) {
    // finding the scale factor for this row
    double scale = a[i][col] / a[row][col];
    // loop through all columns in current row
    for (int j = col + 1; j < cols; j++) {

        // Subtract rows
        a[i][j] -= a[row][j] * scale;
    }
for (size_t i = row + 1; i < rows; ++i) {
    double scale = a[i * cols + col] / a[row * cols + col];

    for (size_t j = col + 1; j < cols; ++j) {
        a[i * cols + j] -= a[row * cols + j] * scale;
    }
for (std::size_t j = i + 1; j < rows; j++) {
  double scale = eqns[j][i] / eqns[i][i];

  for (std::size_t k = i + 1; k < cols; k++)   // k doesn't start at 0, since
    eqns[j][k] -= scale * eqns[i][k];  // values before from 0 to i
                                       // are already 0
  eqns[j][i] = 0.0;
}
subRows ::
     Fractional a
  => (Int, Int) -- pivot location
  -> (Int, Int) -- rows to cover
  -> (Int, Int) -- columns to cover
  -> Matrix a
  -> Matrix a
subRows (r, c) (r1, rn) (c1, cn) m =
  accum
    (-)
    m
    [ ((i, j), m ! (i, c) * m ! (r, j) / m ! (r, c))
    | i <- [r1 .. rn]
    , j <- [c1 .. cn]
    ]
| otherwise = go (r + 1, c + 1) $ subRows (r, c) (r + 1, rn) (c, cn) m'
for (let i = row + 1; i < rows; ++i) {
  const scale = a[i][col] / a[row][col];

  for (let j = col + 1; j < cols; ++j) {
    a[i][j] -= a[row][j] * scale;
  }
for _, row := range a[r+1:] {
    // 3. find fraction from pivot value
    frac := row[c] / highest

    // 4. subtract row to set rest of column to zero
    for j := range row {
        row[j] -= frac * a[r][j]
    }

    // 5. ensure col goes to zero (no float rounding)
    row[c] = 0.
}
# Zero out elements below pivot
for r in range(pivot_row + 1,  A.shape[0]):
    # Get fraction
    frac = -A[r, pivot_col] / A[pivot_row, pivot_col]
    # Add rows
    A[r, :] += frac * A[pivot_row, :]
// Loop over all remaining rows
for i in k + 1..a.rows {
    // find the fraction
    let fraction = a[(i, k)] / a[(k, k)];

    // Loop through all columns for that row
    for j in (k + 1)..a.cols {
        // re-evaluate each element
        a[(i, j)] -= a[(k, j)] * fraction;
    }

    // set lower elements to 0
    a[(i, k)] = 0.0;
}

All together

When we put everything together, it looks like this:

function gaussian_elimination!(A::Array{Float64,2})

    rows = size(A,1)
    cols = size(A,2)

    # Row index
    row = 1

    # Main loop going through all columns
    for col = 1:(cols-1)

        # finding the maximum element for each column
        max_index = argmax(abs.(A[row:end,col])) + row-1

        # Check to make sure matrix is good!
        if (A[max_index, col] == 0)
            println("matrix is singular!")
            continue
        end

        # swap row with highest value for that column to the top
        temp_vector = A[max_index, :]
        A[max_index, :] = A[row, :]
        A[row, :] = temp_vector

        # Loop for all remaining rows
        for i = (row+1):rows

            # finding fraction
            fraction = A[i,col]/A[row,col]

            # loop through all columns for that row
            for j = (col+1):cols

                 # re-evaluate each element
                 A[i,j] -= A[row,j]*fraction

            end

            # Set lower elements to 0
            A[i,col] = 0
        end
        row += 1
    end
end
void gaussian_elimination(double *a, const size_t rows, const size_t cols) {
    size_t row = 0;

    for (size_t col = 0; col < cols - 1; ++col) {
        size_t pivot = row;

        for (size_t i = row + 1; i < rows; ++i) {
            if (fabs(a[i * cols + col]) > fabs(a[pivot * cols + col])) {
                pivot = i;
            }
        }

        if (a[pivot * cols + col] == 0) {
            printf("The matrix is singular.\n");
            continue;
        }

        if (row != pivot) {
            swap_rows(a, row, pivot, cols);
        }

        for (size_t i = row + 1; i < rows; ++i) {
            double scale = a[i * cols + col] / a[row * cols + col];

            for (size_t j = col + 1; j < cols; ++j) {
                a[i * cols + j] -= a[row * cols + j] * scale;
            }

            a[i * cols + col] = 0;
        }

        row++;
    }
}
void gaussianElimination(std::vector<std::vector<double> > &eqns) {
  // 'eqns' is the matrix, 'rows' is no. of vars
  std::size_t rows = eqns.size(), cols = eqns[0].size();

  for (std::size_t i = 0; i < rows - 1; i++) {
      std::size_t pivot = i;

    for (std::size_t j = i + 1; j < rows; j++) {
      if (fabs(eqns[j][i]) > fabs(eqns[pivot][i])) pivot = j;
    }

    if (eqns[pivot][i] == 0.0)
      continue;  // But continuing to simplify the matrix as much as possible

    if (i != pivot)  // Swapping the rows if new row with higher maxVals is found
      std::swap(eqns[pivot], eqns[i]);  // C++ swap function

    for (std::size_t j = i + 1; j < rows; j++) {
      double scale = eqns[j][i] / eqns[i][i];

      for (std::size_t k = i + 1; k < cols; k++)   // k doesn't start at 0, since
        eqns[j][k] -= scale * eqns[i][k];  // values before from 0 to i
                                           // are already 0
      eqns[j][i] = 0.0;
    }
  }
}
swapRows :: Int -> Int -> Matrix a -> Matrix a
swapRows r1 r2 m
  | r1 == r2 = m
  | otherwise =
    m //
    concat [[((r2, c), m ! (r1, c)), ((r1, c), m ! (r2, c))] | c <- [c1 .. cn]]
  where
    ((_, c1), (_, cn)) = bounds m

subRows ::
     Fractional a
  => (Int, Int) -- pivot location
  -> (Int, Int) -- rows to cover
  -> (Int, Int) -- columns to cover
  -> Matrix a
  -> Matrix a
subRows (r, c) (r1, rn) (c1, cn) m =
  accum
    (-)
    m
    [ ((i, j), m ! (i, c) * m ! (r, j) / m ! (r, c))
    | i <- [r1 .. rn]
    , j <- [c1 .. cn]
    ]

gaussianElimination :: (Fractional a, Ord a) => Matrix a -> Matrix a
gaussianElimination mat = go (r1, c1) mat
  where
    ((r1, c1), (rn, cn)) = bounds mat
    go (r, c) m
      | c == cn = m
      | pivot == 0 = go (r, c + 1) m
      | otherwise = go (r + 1, c + 1) $ subRows (r, c) (r + 1, rn) (c, cn) m'
      where
        (target, pivot) =
          maximumBy (compare `on` abs . snd) [(k, m ! (k, c)) | k <- [r .. rn]]
        m' = swapRows r target m
def gaussian_elimination(A):

    pivot_row = 0

    # Go by column
    for pivot_col in range(min(A.shape[0], A.shape[1])):

        # Swap row with highest element in col
        max_i = np.argmax(abs(A[pivot_row:, pivot_col])) + pivot_row

        temp = A[pivot_row, :].copy()
        A[pivot_row, :] = A[max_i, :]
        A[max_i, :] = temp

        # Skip on singular matrix,  not actually a pivot
        if A[pivot_row, pivot_col] == 0:
            continue

        # Zero out elements below pivot
        for r in range(pivot_row + 1,  A.shape[0]):
            # Get fraction
            frac = -A[r, pivot_col] / A[pivot_row, pivot_col]
            # Add rows
            A[r, :] += frac * A[pivot_row, :]

        pivot_row += 1
static void gaussianElimination(double[][] a) {
    int row = 0;

    int rows = a.length;
    int cols = a[0].length;

    for (int col = 0; col < cols - 1; col++) {
        int pivot = row;

        // finding the maximum element
        for (int i = row + 1; i < rows; i++) {
            if (Math.abs(a[i][col]) > Math.abs(a[pivot][col])) {
                pivot = i;
            }
        }

        if (a[pivot][col] == 0) {
            System.err.println("The matrix is singular");
            continue;
        }

        if (row != pivot) {
            // Swap the row with the highest valued element
            // with the current row
            swapRow(a, col, pivot);
        }

        for (int i = row + 1; i < rows; i++) {
            // finding the scale factor for this row
            double scale = a[i][col] / a[row][col];
            // loop through all columns in current row
            for (int j = col + 1; j < cols; j++) {

                // Subtract rows
                a[i][j] -= a[row][j] * scale;
            }

            // Set lower elements to 0
            a[i][col] = 0;
        }
        row++;
    }
}
function gaussianElimination(a) {
  const rows = a.length
  const cols = a[0].length
  let row = 0;
  for (let col = 0; col < cols - 1; ++col) {

    let pivot = row;
    for (let i = row + 1; i < rows; ++i) {
      if (Math.abs(a[i][col]) > Math.abs(a[pivot][col])) {
        pivot = i;
      }
    }

    if (a[pivot][col] === 0) {
      console.log("The matrix is singular.");
      continue;
    }

    if (row !== pivot) {
      const t = a[row];
      a[row] = a[pivot];
      a[pivot] = t;
    }

    for (let i = row + 1; i < rows; ++i) {
      const scale = a[i][col] / a[row][col];

      for (let j = col + 1; j < cols; ++j) {
        a[i][j] -= a[row][j] * scale;
      }

      a[i][col] = 0;
    }

    ++row;
  }
  return a;
}
func gaussianElimination(a [][]float64) {
    singular := false
    rows := len(a)
    cols := len(a[0])

    for c, r := 0, 0; c < cols && r < rows; c++ {
        // 1. find highest value in column below row to be pivot
        p, highest := r, 0.
        for i, row := range a[r:] {
            if abs := math.Abs(row[c]); abs > highest {
                p = r + i
                highest = abs
            }
        }
        highest = a[p][c] // correct sign

        if highest == 0. {
            if !singular {
                singular = true
                fmt.Println("This matrix is singular.")
            }
            continue
        }

        // 2. swap pivot with current row
        if p != r {
            a[r], a[p] = a[p], a[r]
        }

        for _, row := range a[r+1:] {
            // 3. find fraction from pivot value
            frac := row[c] / highest

            // 4. subtract row to set rest of column to zero
            for j := range row {
                row[j] -= frac * a[r][j]
            }

            // 5. ensure col goes to zero (no float rounding)
            row[c] = 0.
        }

        r++
    }
}
fn gaussian_elimination(a: &mut Matrix) {
    for k in 0..min(a.cols, a.rows) {
        // find the maximum element for this column
        let mut max_row = k;
        let mut max_value = a[(k, k)].abs();
        for row in (k + 1)..a.rows {
            if max_value < a[(row, k)].abs() {
                max_value = a[(row, k)].abs();
                max_row = row;
            }
        }

        // Check to make sure the matrix is good
        if a[(max_row, k)] == 0.0 {
            println!("Matrix is singular, aborting");
            return;
        }

        // swap the row with the highest value for this column to the top
        a.swap_rows(k, max_row);

        // Loop over all remaining rows
        for i in k + 1..a.rows {
            // find the fraction
            let fraction = a[(i, k)] / a[(k, k)];

            // Loop through all columns for that row
            for j in (k + 1)..a.cols {
                // re-evaluate each element
                a[(i, j)] -= a[(k, j)] * fraction;
            }

            // set lower elements to 0
            a[(i, k)] = 0.0;
        }
    }
}

To be clear: if the matrix is found to be singular during this process, the system of equations is either over- or under-determined and no unique solution exists. For this reason, many implementations of this method will stop the moment the matrix is found to have no unique solution. In this implementation, we allowed for the more general case and opted to simply print a message when the matrix is singular instead. If you intend to solve a system of equations, then it makes sense to stop the method the moment you know there is no unique solution, so some small modification of this code might be necessary; one possible modification is sketched below.
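
As a minimal sketch of that modification (based on the NumPy version shown above, and not the chapter's official implementation), one could simply raise an error instead of continuing when no usable pivot is found:

import numpy as np

# A sketch of an "early exit" variant: stop as soon as no usable pivot exists
# instead of printing a message and continuing with the remaining columns.
def gaussian_elimination_strict(A):

    pivot_row = 0

    for pivot_col in range(min(A.shape[0], A.shape[1])):

        # Partial pivoting: bring the largest remaining entry into the pivot row
        max_i = np.argmax(abs(A[pivot_row:, pivot_col])) + pivot_row
        if A[max_i, pivot_col] == 0:
            raise ValueError("matrix is singular; no unique solution exists")
        A[[pivot_row, max_i]] = A[[max_i, pivot_row]]

        # Zero out everything below the pivot
        for r in range(pivot_row + 1, A.shape[0]):
            frac = A[r, pivot_col] / A[pivot_row, pivot_col]
            A[r, :] -= frac * A[pivot_row, :]

        pivot_row += 1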

So what do we do from here? Well, we continue reducing the matrix; however, there are two ways to do this:

  1. Reduce the matrix further into reduced row echelon form with Gauss-Jordan elimination
  2. Solve the system directly with back-substitution if the matrix allows for such solutions

Let's start with Gauss-Jordan elimination and then move on to back-substitution.

Gauss-Jordan Elimination

Gauss-Jordan Elimination is precisely what we said above; however, in this case, we often work from the bottom-up instead of the top-down. We basically need to find the pivot of every row and set that value to 1 by dividing the entire row by the pivot value. Afterwards, we subtract upwards until all values above the pivot are 0 before moving on to the next column from right to left (instead of left to right, like before). Here it is in code:

function gauss_jordan_elimination!(A::Array{Float64,2})

    rows = size(A,1)
    cols = size(A,2)


    # Row index of the current pivot
    row = 1
    for col = 1:cols-1
        if (A[row, col] != 0)

            # divide row by pivot and leaving pivot as 1
            for i = cols:-1:col
                A[row,i] /= A[row,col]
            end

            # subtract value from above row and set values above pivot to 0
            for i = 1:row-1
                for j = cols:-1:col
                    A[i,j] -= A[i,col]*A[row,j]
                end
            end
            row += 1
        end
    end
end
void gauss_jordan(double *a, const size_t cols) {
    size_t row = 0;

    for (size_t col = 0; col < cols - 1; ++col) {
        if (a[row * cols + col] != 0) {
            // the condition i + 1 > col (i.e. i >= col) avoids unsigned wrap-around when col == 0
            for (size_t i = cols - 1; i + 1 > col; --i) {
                a[row * cols + i] /= a[row * cols + col];
            }

            for (size_t i = 0; i < row; ++i) {
                for (size_t j = cols - 1; j + 1 > col; --j) {
                    a[i * cols + j] -= a[i * cols + col] * a[row * cols + j];
                }
            }

            row++;
        }
    }
}
void gaussJordan(std::vector<std::vector<double> > &eqns) {
  // 'eqns' is the (Row-echelon) matrix, 'rows' is no. of vars
  std::size_t rows = eqns.size();

  for (std::size_t i = rows - 1; i < rows; i--) {

    if (eqns[i][i] != 0) {

      eqns[i][rows] /= eqns[i][i];
      eqns[i][i] = 1;  // We know that the only entry in this row is 1

      // subtract this row from the rows above it
      for (std::size_t j = i - 1; j < i; j--) {
        eqns[j][rows] -= eqns[j][i] * eqns[i][rows];
        eqns[j][i] = 0;  // We also set all the other values in row to 0 directly
      }
    }
  }
}
gaussJordan :: (Fractional a, Eq a) => Matrix a -> Matrix a
gaussJordan mat = go (r1, c1) mat
  where
    ((r1, c1), (rn, cn)) = bounds mat
    go (r, c) m
      | c == cn = m
      | m ! (r, c) == 0 = go (r, c + 1) m
      | otherwise = go (r + 1, c + 1) $ subRows (r, c) (r1, r - 1) (c, cn) m'
      where
        m' = accum (/) m [((r, j), m ! (r, c)) | j <- [c .. cn]]
# Assumes A is already row echelon form
def gauss_jordan_elimination(A):

    col = 0

    # Scan for pivots
    for row in range(A.shape[0]):
        while col < A.shape[1] and A[row, col] == 0:
            col += 1

        if col >= A.shape[1]:
            continue

        # Set each pivot to one via row scaling
        A[row, :] /= A[row, col]

        # Zero out elements above pivot
        for r in range(row):
            A[r, :] -= A[r, col] * A[row, :]
static void gaussJordan(double[][] a) {
    int row = 0;

    int cols = a[0].length;

    for (int col = 0; col < cols - 1; col++) {
        if (a[row][col] != 0) {
            for (int i = cols - 1; i > col - 1; i--) {
                // divide row by pivot so the pivot is set to 1
                a[row][i] /= a[row][col];
            }

            // subtract the value form above row and set values above pivot to 0
            for (int i = 0; i < row; i++) {
                for (int j = cols - 1; j > col - 1; j--) {
                    a[i][j] -= a[i][col] * a[row][j];
                }
            }
            row++;
        }
    }
}
function gaussJordan(a) {
  const cols = a[0].length;
  let row = 0;

  for (let col = 0; col < cols - 1; ++col) {
    if (a[row][col] !== 0) {
      for (let i = cols - 1; i > col - 1; --i) {
        a[row][i] /= a[row][col];
      }

      for (let i = 0; i < row; ++i) {
        for (let j = cols - 1; j > col - 1; --j) {
          a[i][j] -= a[i][col] * a[row][j];
        }
      }

      ++row;
    }
  }
}
func gaussJordan(a [][]float64) {
    for r := len(a) - 1; r >= 0; r-- {
        // Find pivot col
        p := -1
        for c, cell := range a[r] {
            if cell != 0. {
                p = c
                break
            }
        }
        if p < 0 {
            continue
        }

        // Scale pivot r to 1.
        scale := a[r][p]
        for c := range a[r][p:] {
            a[r][p+c] /= scale
        }
        // Subtract pivot row from each row above
        for _, row := range a[:r] {
            scale = row[p]
            for c, cell := range a[r][p:] {
                row[p+c] -= cell * scale
            }
        }
    }
}
fn gauss_jordan(a: &mut Matrix) {
    let mut row = 0;
    for k in 0..(a.cols - 1) {
        if a[(row, k)] != 0.0 {
            for i in (k..a.cols).rev() {
                a[(row, i)] /= a[(row, k)];
            }

            for i in 0..row {
                for j in (k..a.cols).rev() {
                    a[(i, j)] -= a[(i, k)] * a[(row, j)];
                }
            }

            row += 1;
        }
    }
}

As a note: Gauss-Jordan elimination can also be used to find the inverse of a matrix by following the same procedure to generate a reduced row echelon matrix, but with an identity matrix on the other side instead of the right-hand side of each equation. This process is straightforward but will not be covered here, simply because there are much faster numerical methods to find an inverse matrix; however, if you would like to see this, let me know and I can add it in for completeness.
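
Schematically, the idea described above is to augment the matrix $A$ with the identity matrix $I$ and row reduce the combined block:

$$
\left[ A \mid I \right] \quad \xrightarrow{\text{Gauss-Jordan}} \quad \left[ I \mid A^{-1} \right]
$$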

Back-substitution

The idea of back-substitution is straightforward: we create a matrix of solutions and iteratively solve for each variable by plugging in all variables before it. For example, if our matrix looks like this:

$$
\left[
\begin{array}{ccc|c}
2 & 3 & 4 & 6 \\
0 & \tfrac{1}{2} & 1 & 1 \\
0 & 0 & 11 & 18
\end{array}
\right]
$$

We can quickly solve for $z$, and then use that to solve for $y$ by plugging in the value for $z$. After that, we simply need to solve for $x$ in a similar fashion. In code, this involves keeping a rolling sum of all the values we substitute, subtracting that sum from the corresponding entry in the solution column, and then dividing by the coefficient of the remaining variable. It looks like this:

function back_substitution(A::Array{Float64,2})

    rows = size(A,1)
    cols = size(A,2)

    # Creating the solution Vector
    soln = zeros(rows)

    for i = rows:-1:1
        sum = 0.0
        for j = rows:-1:i
            sum += soln[j]*A[i,j]
        end
        soln[i] = (A[i, cols] - sum) / A[i, i]
    end

    return soln
end
void back_substitution(const double *a, double *x, const int rows,
                       const int cols) {

    for (int i = rows - 1; i >= 0; --i) {
        double sum = 0.0;

        for (int j = cols - 2; j > i; --j) {
            sum += x[j] * a[i * cols + j];
        }

        x[i] = (a[i * cols + cols - 1] - sum) / a[i * cols + i];
    }
}
std::vector<double> backSubs(const std::vector<std::vector<double> > &eqns) {
  // 'eqns' is matrix, 'rows' is no. of variables
  std::size_t rows = eqns.size();

  std::vector<double> ans(rows);
  for (std::size_t i = rows - 1; i < rows; i--) {
    double sum = 0.0;

    for (std::size_t j = i + 1; j < rows; j++) sum += eqns[i][j] * ans[j];

    if (eqns[i][i] != 0)
      ans[i] = (eqns[i][rows] - sum) / eqns[i][i];
    else
      return std::vector<double>(0);
  }
  return ans;
}
fn back_substitution(a: &Matrix) -> Vec<f64> {
    let mut soln = vec![0.0; a.rows];

    soln[a.rows - 1] = a[(a.rows - 1, a.cols - 1)] / a[(a.rows - 1, a.cols - 2)];

    for i in (0..a.rows - 1).rev() {
        let mut sum = 0.0;
        for j in (i..a.rows).rev() {
            sum += soln[j] * a[(i, j)];
        }
        soln[i] = (a[(i, a.cols - 1)] - sum) / a[(i, i)];
    }

    soln
}
backSubstitution :: (Fractional a) => Matrix a -> Vector a
backSubstitution m = sol
  where
    ((r1, _), (rn, cn)) = bounds m
    sol =
      listArray (r1, rn) [(m ! (r, cn) - sum' r) / m ! (r, r) | r <- [r1 .. rn]]
    sum' r = sum [m ! (r, k) * sol ! k | k <- [r + 1 .. rn]]
# Assumes A has a unique solution and A in row echelon form
def back_substitution(A):

    sol = np.zeros(A.shape[0]).T

    # Go by pivots along diagonal
    for pivot_i in range(A.shape[0] - 1,  -1,  -1):
        s = 0
        for col in range(pivot_i + 1,  A.shape[1] - 1):
            s += A[pivot_i, col] * sol[col]
        sol[pivot_i] = (A[pivot_i, A.shape[1] - 1] - s) / A[pivot_i, pivot_i]

    return sol
static double[] backSubstitution(double[][] a) {
    int rows = a.length;
    int cols = a[0].length;

    double[] solution = new double[rows];

    for (int i = rows - 1; i >= 0; i--) {
        double sum = 0;

        for (int j = cols - 2; j > i; j--) {
            sum += solution[j] * a[i][j];
        }
        solution[i] = (a[i][cols - 1] - sum) / a[i][i];
    }
    return solution;
}
function backSubstitution(a) {
  const rows = a.length;
  const cols = a[0].length;
  const sol = [];

  for (let i = rows - 1; i >= 0; --i) {

    let sum = 0;
    for (let j = cols - 2; j > i; --j) {
      sum += sol[j] * a[i][j];
    }

    sol[i] = (a[i][cols - 1] - sum) / a[i][i];
  }
  return sol;
}
func backSubstitution(a [][]float64) []float64 {
    rows := len(a)
    cols := len(a[0])
    x := make([]float64, rows)
    for r := rows - 1; r >= 0; r-- {
        sum := 0.

        for c := cols - 2; c > r; c-- {
            sum += x[c] * a[r][c]
        }

        x[r] = (a[r][cols-1] - sum) / a[r][r]
    }
    return x
}
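
As a quick sanity check, running back-substitution by hand on the row echelon matrix from before gives

$$
z = \frac{18}{11}, \qquad y = 2\left(1 - z\right) = -\frac{14}{11}, \qquad x = \frac{6 - 3y - 4z}{2} = \frac{18}{11},
$$

which matches the reduced row echelon form shown earlier.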

Visual Representation

We have thus far used Gaussian elimination as a method to solve a system of equations; however, there is often a much easier way to find a similar solution simply by plotting each row in our matrix. For the case of 2 equations and 2 unknowns, we would plot the two lines corresponding to each equation and the location of their point of intersection would be the solution for $x$ and $y$. Similarly, for the case of 3 equations and 3 unknowns, we would plot 3 planes and the location of their point of intersection would be the solution for $x$, $y$, and $z$.

What, then, is the point of Gaussian elimination if we can simply plot our set of equations to find a solution? Well, this analogy breaks down quickly when we start moving beyond 3D, so it is obvious we need some method to deal with higher-dimensional systems. That said, it is particularly interesting to see what happens as we plot our matrix during Gaussian elimination for the 3D case.

As we can see in the above visualization, the planes wobble about in 3D until they reach row echelon form, where one plane is parallel to the $x$ and $y$ axes. At this point, it's trivial to find the $z$-coordinate for the solution because it's simply the $z$ intercept of the parallel plane. From there, the matrices become even easier to interpret as they move to the reduced row echelon form. In this form, the solution is simply the $x$, $y$, and $z$ intercepts of the appropriate planes.

This visualization might have been obvious for some readers, but I found it particularly enlightening at first. By performing Gaussian elimination, we are manipulating our planes such that they can be interpreted at a glance -- which is precisely the same thing we are doing with the matrix interpretation!

Conclusions

And with that, we have two possible ways to reduce our system of equations and find a solution. If we are sure our matrix is not singular and that a solution exists, it's fastest to use back-substitution to find our solution. If no solution exists or we are trying to find a reduced row echelon matrix, then Gauss-Jordan elimination is best. As we said at the start, the notation for Gaussian Elimination is rather ambiguous in the literature, so we are hoping that the definitions provided here are clear and consistent enough to cover all the bases.

As for what's next... Well, we are in for a treat! The above algorithm clearly has 3 for loops and has a complexity of $\mathcal{O}(n^3)$, which is abysmal! If we restrict ourselves to a specifically tridiagonal matrix, we can actually solve the system in $\mathcal{O}(n)$! How? Well, we can use an algorithm known as the Tri-Diagonal Matrix Algorithm (TDMA), also known as the Thomas Algorithm.

There are also plenty of other solvers that do similar things that we will get to in due time.

Video Explanation

Here's a video describing Gaussian elimination:

Example Code

function gaussian_elimination!(A::Array{Float64,2})

    rows = size(A,1)
    cols = size(A,2)

    # Row index
    row = 1

    # Main loop going through all columns
    for col = 1:(cols-1)

        # finding the maximum element for each column
        max_index = argmax(abs.(A[row:end,col])) + row-1

        # Check to make sure matrix is good!
        if (A[max_index, col] == 0)
            println("matrix is singular!")
            continue
        end

        # swap row with highest value for that column to the top
        temp_vector = A[max_index, :]
        A[max_index, :] = A[row, :]
        A[row, :] = temp_vector

        # Loop for all remaining rows
        for i = (row+1):rows

            # finding fraction
            fraction = A[i,col]/A[row,col]

            # loop through all columns for that row
            for j = (col+1):cols

                 # re-evaluate each element
                 A[i,j] -= A[row,j]*fraction

            end

            # Set lower elements to 0
            A[i,col] = 0
        end
        row += 1
    end
end

function back_substitution(A::Array{Float64,2})

    rows = size(A,1)
    cols = size(A,2)

    # Creating the solution Vector
    soln = zeros(rows)

    for i = rows:-1:1
        sum = 0.0
        for j = rows:-1:i
            sum += soln[j]*A[i,j]
        end
        soln[i] = (A[i, cols] - sum) / A[i, i]
    end

    return soln
end


function gauss_jordan_elimination!(A::Array{Float64,2})

    rows = size(A,1)
    cols = size(A,2)


    # Row index of the current pivot
    row = 1
    for col = 1:cols-1
        if (A[row, col] != 0)

            # divide row by pivot and leaving pivot as 1
            for i = cols:-1:col
                A[row,i] /= A[row,col]
            end

            # subtract value from above row and set values above pivot to 0
            for i = 1:row-1
                for j = cols:-1:col
                    A[i,j] -= A[i,col]*A[row,j]
                end
            end
            row += 1
        end
    end
end

function main()
    A = [2. 3 4 6;
         1 2 3 4;
         3 -4 0 10]

    gaussian_elimination!(A)
    println(A)

    gauss_jordan_elimination!(A)
    println(A)

    soln = back_substitution(A)
    println(soln)

end

main()
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

void swap_rows(double *a, const size_t i, const size_t pivot,
               const size_t cols) {

    for (size_t j = 0; j < cols; ++j) {
        double tmp = a[i * cols + j];
        a[i * cols + j] = a[pivot * cols + j];
        a[pivot * cols + j] = tmp;
    }
}

void gaussian_elimination(double *a, const size_t rows, const size_t cols) {
    size_t row = 0;

    for (size_t col = 0; col < cols - 1; ++col) {
        size_t pivot = row;

        for (size_t i = row + 1; i < rows; ++i) {
            if (fabs(a[i * cols + col]) > fabs(a[pivot * cols + col])) {
                pivot = i;
            }
        }

        if (a[pivot * cols + col] == 0) {
            printf("The matrix is singular.\n");
            continue;
        }

        if (row != pivot) {
            swap_rows(a, row, pivot, cols);
        }

        for (size_t i = row + 1; i < rows; ++i) {
            double scale = a[i * cols + col] / a[row * cols + col];

            for (size_t j = col + 1; j < cols; ++j) {
                a[i * cols + j] -= a[row * cols + j] * scale;
            }

            a[i * cols + col] = 0;
        }

        row++;
    }
}

void back_substitution(const double *a, double *x, const int rows,
                       const int cols) {

    for (int i = rows - 1; i >= 0; --i) {
        double sum = 0.0;

        for (int j = cols - 2; j > i; --j) {
            sum += x[j] * a[i * cols + j];
        }

        x[i] = (a[i * cols + cols - 1] - sum) / a[i * cols + i];
    }
}

void gauss_jordan(double *a, const size_t cols) {
    size_t row = 0;

    for (size_t col = 0; col < cols - 1; ++col) {
        if (a[row * cols + col] != 0) {
            // the condition i + 1 > col (i.e. i >= col) avoids unsigned wrap-around when col == 0
            for (size_t i = cols - 1; i + 1 > col; --i) {
                a[row * cols + i] /= a[row * cols + col];
            }

            for (size_t i = 0; i < row; ++i) {
                for (size_t j = cols - 1; j + 1 > col; --j) {
                    a[i * cols + j] -= a[i * cols + col] * a[row * cols + j];
                }
            }

            row++;
        }
    }
}

int main() {
    double a[3][4] = {{3.0, 2.0, -4.0, 3.0},
                      {2.0, 3.0, 3.0, 15.0},
                      {5.0, -3.0, 1.0, 14.0}};

    gaussian_elimination((double *)a, 3, 4);

    printf("Gaussian elimination:\n");
    for (size_t i = 0; i < 3; ++i) {
        printf("[");
        for (size_t j = 0; j < 4; ++j) {
            printf("%f ", a[i][j]);
        }
        printf("]\n");
    }

    printf("\nGauss-Jordan:\n");

    gauss_jordan((double *)a, 4);

    for (size_t i = 0; i < 3; ++i) {
        printf("[");
        for (size_t j = 0; j < 4; ++j) {
            printf("%f ", a[i][j]);
        }
        printf("]\n");
    }

    printf("\nSolutions are:\n");

    double x[3] = {0, 0, 0};
    back_substitution((double *)a, x, 3, 4);

    printf("(%f,%f,%f)\n", x[0], x[1], x[2]);

    return 0;
}
#include <algorithm>
#include <cmath>
#include <iomanip>
#include <iostream>
#include <vector>


void gaussianElimination(std::vector<std::vector<double> > &eqns) {
  // 'eqns' is the matrix, 'rows' is no. of vars
  std::size_t rows = eqns.size(), cols = eqns[0].size();

  for (std::size_t i = 0; i < rows - 1; i++) {
      std::size_t pivot = i;

    for (std::size_t j = i + 1; j < rows; j++) {
      if (fabs(eqns[j][i]) > fabs(eqns[pivot][i])) pivot = j;
    }

    if (eqns[pivot][i] == 0.0)
      continue;  // But continuing to simplify the matrix as much as possible

    if (i != pivot)  // Swapping the rows if new row with higher maxVals is found
      std::swap(eqns[pivot], eqns[i]);  // C++ swap function

    for (std::size_t j = i + 1; j < rows; j++) {
      double scale = eqns[j][i] / eqns[i][i];

      for (std::size_t k = i + 1; k < cols; k++)   // k doesn't start at 0, since
        eqns[j][k] -= scale * eqns[i][k];  // values before from 0 to i
                                           // are already 0
      eqns[j][i] = 0.0;
    }
  }
}

void gaussJordan(std::vector<std::vector<double> > &eqns) {
  // 'eqns' is the (Row-echelon) matrix, 'rows' is no. of vars
  std::size_t rows = eqns.size();

  for (std::size_t i = rows - 1; i < rows; i--) {

    if (eqns[i][i] != 0) {

      eqns[i][rows] /= eqns[i][i];
      eqns[i][i] = 1;  // We know that the only entry in this row is 1

      // subtract this row from the rows above it
      for (std::size_t j = i - 1; j < i; j--) {
        eqns[j][rows] -= eqns[j][i] * eqns[i][rows];
        eqns[j][i] = 0;  // We also set all the other values in row to 0 directly
      }
    }
  }
}

std::vector<double> backSubs(const std::vector<std::vector<double> > &eqns) {
  // 'eqns' is matrix, 'rows' is no. of variables
  std::size_t rows = eqns.size();

  std::vector<double> ans(rows);
  for (std::size_t i = rows - 1; i < rows; i--) {
    double sum = 0.0;

    for (std::size_t j = i + 1; j < rows; j++) sum += eqns[i][j] * ans[j];

    if (eqns[i][i] != 0)
      ans[i] = (eqns[i][rows] - sum) / eqns[i][i];
    else
      return std::vector<double>(0);
  }
  return ans;
}


void printMatrix(const std::vector<std::vector<double> > &matrix) {
  for (std::size_t row = 0; row < matrix.size(); row++) {
    std::cout << "[";

    for (std::size_t col = 0; col < matrix[row].size() - 1; col++)
      std::cout << std::setw(8) << std::fixed << std::setprecision(3)
                << matrix[row][col];

    std::cout << " |" << std::setw(8) << std::fixed << std::setprecision(3)
              << matrix[row].back() << " ]" << std::endl;
  }
}


int main() {
  std::vector<std::vector<double> > equations{
      {2, 3, 4, 6},
      {1, 2, 3, 4},
      {3, -4, 0, 10}};

  std::cout << "Initial matrix:" << std::endl;
  printMatrix(equations);
  std::cout << std::endl;

  gaussianElimination(equations);
  std::cout << "Matrix after gaussian elimination:" << std::endl;
  printMatrix(equations);
  std::cout << std::endl;

  std::vector<double> ans = backSubs(equations);
  std::cout << "Solution from backsubstitution" << std::endl;
  std::cout << "x = " << ans[0] << ", y = " << ans[1] << ", z = " << ans[2]
            << std::endl
            << std::endl;

  gaussJordan(equations);
  std::cout << "Matrix after Gauss Jordan:" << std::endl;
  printMatrix(equations);
  std::cout << std::endl;
}
// submitted by jess 3jane

use std::cmp::min;
use std::ops::{Index, IndexMut};

pub struct Matrix {
    rows: usize,
    cols: usize,
    data: Vec<f64>,
}

impl Matrix {
    fn new(rows: usize, cols: usize, data: &[f64]) -> Matrix {
        Matrix {
            rows,
            cols,
            data: data.to_vec(),
        }
    }

    fn swap_rows(&mut self, a: usize, b: usize) {
        for col in 0..self.cols {
            self.data.swap(a * self.cols + col, b * self.cols + col);
        }
    }
}

impl Index<(usize, usize)> for Matrix {
    type Output = f64;
    fn index(&self, (row, col): (usize, usize)) -> &f64 {
        &self.data[row * self.cols + col]
    }
}

impl IndexMut<(usize, usize)> for Matrix {
    fn index_mut(&mut self, (row, col): (usize, usize)) -> &mut f64 {
        &mut self.data[row * self.cols + col]
    }
}

fn gaussian_elimination(a: &mut Matrix) {
    for k in 0..min(a.cols, a.rows) {
        // find the maximum element for this column
        let mut max_row = k;
        let mut max_value = a[(k, k)].abs();
        for row in (k + 1)..a.rows {
            if max_value < a[(row, k)].abs() {
                max_value = a[(row, k)].abs();
                max_row = row;
            }
        }

        // Check to make sure the matrix is good
        if a[(max_row, k)] == 0.0 {
            println!("Matrix is singular, aborting");
            return;
        }

        // swap the row with the highest value for this column to the top
        a.swap_rows(k, max_row);

        // Loop over all remaining rows
        for i in k + 1..a.rows {
            // find the fraction
            let fraction = a[(i, k)] / a[(k, k)];

            // Loop through all columns for that row
            for j in (k + 1)..a.cols {
                // re-evaluate each element
                a[(i, j)] -= a[(k, j)] * fraction;
            }

            // set lower elements to 0
            a[(i, k)] = 0.0;
        }
    }
}

fn gauss_jordan(a: &mut Matrix) {
    let mut row = 0;
    for k in 0..(a.cols - 1) {
        if a[(row, k)] != 0.0 {
            for i in (k..a.cols).rev() {
                a[(row, i)] /= a[(row, k)];
            }

            for i in 0..row {
                for j in (k..a.cols).rev() {
                    a[(i, j)] -= a[(i, k)] * a[(row, j)];
                }
            }

            row += 1;
        }
    }
}

fn back_substitution(a: &Matrix) -> Vec<f64> {
    let mut soln = vec![0.0; a.rows];

    soln[a.rows - 1] = a[(a.rows - 1, a.cols - 1)] / a[(a.rows - 1, a.cols - 2)];

    for i in (0..a.rows - 1).rev() {
        let mut sum = 0.0;
        for j in (i..a.rows).rev() {
            sum += soln[j] * a[(i, j)];
        }
        soln[i] = (a[(i, a.cols - 1)] - sum) / a[(i, i)];
    }

    soln
}

fn main() {
    // The example matrix from the text
    let mut a = Matrix::new(
        3,
        4,
        &vec![2.0, 3.0, 4.0, 6.0, 1.0, 2.0, 3.0, 4.0, 3.0, -4.0, 0.0, 10.0],
    );

    gaussian_elimination(&mut a);
    gauss_jordan(&mut a);
    let soln = back_substitution(&a);
    println!("Solution: {:?}", soln);
}
import Data.Array
import Data.Function (on)
import Data.List (intercalate, maximumBy)
import Data.Ratio

type Matrix a = Array (Int, Int) a

type Vector a = Array Int a

swapRows :: Int -> Int -> Matrix a -> Matrix a
swapRows r1 r2 m
  | r1 == r2 = m
  | otherwise =
    m //
    concat [[((r2, c), m ! (r1, c)), ((r1, c), m ! (r2, c))] | c <- [c1 .. cn]]
  where
    ((_, c1), (_, cn)) = bounds m

subRows ::
     Fractional a
  => (Int, Int) -- pivot location
  -> (Int, Int) -- rows to cover
  -> (Int, Int) -- columns to cover
  -> Matrix a
  -> Matrix a
subRows (r, c) (r1, rn) (c1, cn) m =
  accum
    (-)
    m
    [ ((i, j), m ! (i, c) * m ! (r, j) / m ! (r, c))
    | i <- [r1 .. rn]
    , j <- [c1 .. cn]
    ]

gaussianElimination :: (Fractional a, Ord a) => Matrix a -> Matrix a
gaussianElimination mat = go (r1, c1) mat
  where
    ((r1, c1), (rn, cn)) = bounds mat
    go (r, c) m
      | c == cn = m
      | pivot == 0 = go (r, c + 1) m
      | otherwise = go (r + 1, c + 1) $ subRows (r, c) (r + 1, rn) (c, cn) m'
      where
        (target, pivot) =
          maximumBy (compare `on` abs . snd) [(k, m ! (k, c)) | k <- [r .. rn]]
        m' = swapRows r target m

gaussJordan :: (Fractional a, Eq a) => Matrix a -> Matrix a
gaussJordan mat = go (r1, c1) mat
  where
    ((r1, c1), (rn, cn)) = bounds mat
    go (r, c) m
      | c == cn = m
      | m ! (r, c) == 0 = go (r, c + 1) m
      | otherwise = go (r + 1, c + 1) $ subRows (r, c) (r1, r - 1) (c, cn) m'
      where
        m' = accum (/) m [((r, j), m ! (r, c)) | j <- [c .. cn]]

backSubstitution :: (Fractional a) => Matrix a -> Vector a
backSubstitution m = sol
  where
    ((r1, _), (rn, cn)) = bounds m
    sol =
      listArray (r1, rn) [(m ! (r, cn) - sum' r) / m ! (r, r) | r <- [r1 .. rn]]
    sum' r = sum [m ! (r, k) * sol ! k | k <- [r + 1 .. rn]]

printM :: (Show a) => Matrix a -> String
printM m =
  let ((r1, c1), (rn, cn)) = bounds m
   in unlines
        [ intercalate "\t" [show $ m ! (r, c) | c <- [c1 .. cn]]
        | r <- [r1 .. rn]
        ]

printV :: (Show a) => Vector a -> String
printV = unlines . map show . elems

main :: IO ()
main = do
  let mat = [2, 3, 4, 6, 1, 2, 3, 4, 3, -4, 0, 10] :: [Ratio Int]
      m = listArray ((1, 1), (3, 4)) mat
  putStrLn "Original Matrix:"
  putStrLn $ printM m
  putStrLn "Echelon form"
  putStrLn $ printM $ gaussianElimination m
  putStrLn "Reduced echelon form"
  putStrLn $ printM $ gaussJordan $ gaussianElimination m
  putStrLn "Solution from back substitution"
  putStrLn $ printV $ backSubstitution $ gaussianElimination m
import numpy as np

def gaussian_elimination(A):

    pivot_row = 0

    # Go by column
    for pivot_col in range(min(A.shape[0], A.shape[1])):

        # Swap row with highest element in col
        max_i = np.argmax(abs(A[pivot_row:, pivot_col])) + pivot_row

        temp = A[pivot_row, :].copy()
        A[pivot_row, :] = A[max_i, :]
        A[max_i, :] = temp

        # Skip on singular matrix,  not actually a pivot
        if A[pivot_row, pivot_col] == 0:
            continue

        # Zero out elements below pivot
        for r in range(pivot_row + 1,  A.shape[0]):
            # Get fraction
            frac = -A[r, pivot_col] / A[pivot_row, pivot_col]
            # Add rows
            A[r, :] += frac * A[pivot_row, :]

        pivot_row += 1


# Assumes A is already row echelon form
def gauss_jordan_elimination(A):

    col = 0

    # Scan for pivots
    for row in range(A.shape[0]):
        while col < A.shape[1] and A[row, col] == 0:
            col += 1

        if col >= A.shape[1]:
            continue

        # Set each pivot to one via row scaling
        A[row, :] /= A[row, col]

        # Zero out elements above pivot
        for r in range(row):
            A[r, :] -= A[r, col] * A[row, :]


# Assumes A has a unique solution and A in row echelon form
def back_substitution(A):

    sol = np.zeros(A.shape[0]).T

    # Go by pivots along diagonal
    for pivot_i in range(A.shape[0] - 1,  -1,  -1):
        s = 0
        for col in range(pivot_i + 1,  A.shape[1] - 1):
            s += A[pivot_i, col] * sol[col]
        sol[pivot_i] = (A[pivot_i, A.shape[1] - 1] - s) / A[pivot_i, pivot_i]

    return sol


def main():
    A = np.array([[2, 3, 4, 6],
                  [1, 2, 3, 4,],
                  [3, -4, 0, 10]], dtype=float)

    print("Original")
    print(A, "\n")

    gaussian_elimination(A)
    print("Gaussian elimination")
    print(A, "\n")

    print("Back subsitution")
    print(back_substitution(A), "\n")

    gauss_jordan_elimination(A)
    print("Gauss-Jordan")
    print(A, "\n")


if __name__ == "__main__":
    main()
import java.util.Arrays;

public class GaussianElimination {

    static void gaussianElimination(double[][] a) {
        int row = 0;

        int rows = a.length;
        int cols = a[0].length;

        for (int col = 0; col < cols - 1; col++) {
            int pivot = row;

            // finding the maximum element
            for (int i = row + 1; i < rows; i++) {
                if (Math.abs(a[i][col]) > Math.abs(a[pivot][col])) {
                    pivot = i;
                }
            }

            if (a[pivot][col] == 0) {
                System.err.println("The matrix is singular");
                continue;
            }

            if (row != pivot) {
                // Swap the row with the highest valued element
                // with the current row
                swapRow(a, col, pivot);
            }

            for (int i = row + 1; i < rows; i++) {
                // finding the scale factor for this row
                double scale = a[i][col] / a[row][col];
                // loop through all columns in current row
                for (int j = col + 1; j < cols; j++) {

                    // Subtract rows
                    a[i][j] -= a[row][j] * scale;
                }

                // Set lower elements to 0
                a[i][col] = 0;
            }
            row++;
        }
    }

    static void gaussJordan(double[][] a) {
        int row = 0;

        int cols = a[0].length;

        for (int col = 0; col < cols - 1; col++) {
            if (a[row][col] != 0) {
                for (int i = cols - 1; i > col - 1; i--) {
                    // divide row by pivot so the pivot is set to 1
                    a[row][i] /= a[row][col];
                }

                // subtract the value form above row and set values above pivot to 0
                for (int i = 0; i < row; i++) {
                    for (int j = cols - 1; j > col - 1; j--) {
                        a[i][j] -= a[i][col] * a[row][j];
                    }
                }
                row++;
            }
        }
    }

    static double[] backSubstitution(double[][] a) {
        int rows = a.length;
        int cols = a[0].length;

        double[] solution = new double[rows];

        for (int i = rows - 1; i >= 0; i--) {
            double sum = 0;

            for (int j = cols - 2; j > i; j--) {
                sum += solution[j] * a[i][j];
            }
            solution[i] = (a[i][cols - 1] - sum) / a[i][i];
        }
        return solution;
    }

    static void swapRow(double[][] a, int rowA, int rowB) {
        double[] temp = a[rowA];
        a[rowA] = a[rowB];
        a[rowB] = temp;
    }

    public static void main(String[] args) {
        double[][] a = {
            { 3, 2, -4, 3 },
            { 2, 3, 3, 15 },
            { 5, -3, 1, 14 }
        };

        gaussianElimination(a);
        System.out.println("Gaussian elimination:");
        Arrays.stream(a).forEach(x -> System.out.println(Arrays.toString(x)));

        gaussJordan(a);
        System.out.println("\nGauss-Jordan:");
        Arrays.stream(a).forEach(x -> System.out.println(Arrays.toString(x)));

        System.out.println("\nSolutions:");
        System.out.println(Arrays.toString(backSubstitution(a)));
    }
}
function gaussianElimination(a) {
  const rows = a.length
  const cols = a[0].length
  let row = 0;
  for (let col = 0; col < cols - 1; ++col) {

    let pivot = row;
    for (let i = row + 1; i < rows; ++i) {
      if (Math.abs(a[i][col]) > Math.abs(a[pivot][col])) {
        pivot = i;
      }
    }

    if (a[pivot][col] === 0) {
      console.log("The matrix is singular.");
      continue;
    }

    if (row !== pivot) {
      const t = a[row];
      a[row] = a[pivot];
      a[pivot] = t;
    }

    for (let i = row + 1; i < rows; ++i) {
      const scale = a[i][col] / a[row][col];

      for (let j = col + 1; j < cols; ++j) {
        a[i][j] -= a[row][j] * scale;
      }

      a[i][col] = 0;
    }

    ++row;
  }
  return a;
}

function backSubstitution(a) {
  const rows = a.length;
  const cols = a[0].length;
  const sol = [];

  for (let i = rows - 1; i >= 0; --i) {

    let sum = 0;
    for (let j = cols - 2; j > i; --j) {
      sum += sol[j] * a[i][j];
    }

    sol[i] = (a[i][cols - 1] - sum) / a[i][i];
  }
  return sol;
}

function gaussJordan(a) {
  const cols = a[0].length;
  let row = 0;

  for (let col = 0; col < cols - 1; ++col) {
    if (a[row][col] !== 0) {
      for (let i = cols - 1; i > col - 1; --i) {
        a[row][i] /= a[row][col];
      }

      for (let i = 0; i < row; ++i) {
        for (let j = cols - 1; j > col - 1; --j) {
          a[i][j] -= a[i][col] * a[row][j];
        }
      }

      ++row;
    }
  }
}

function printMatrixRow(row) {
  const text = row
    .map(v => (v < 0 ? " " : "  ") + v.toPrecision(8))
    .join("");

  console.log(text);
}

function printMatrix(a) {
  for (const row of a) {
    printMatrixRow(row);
  }
}

const a = [
  [3,  2, -4,  3],
  [2,  3,  3, 15],
  [5, -3,  1, 14]
];

gaussianElimination(a);
console.log("Gaussian elimination:");
printMatrix(a);

gaussJordan(a);
console.log("\nGauss-Jordan:");
printMatrix(a);

const sol = backSubstitution(a);
console.log("\nSolutions are:");
printMatrixRow(sol);
// Package demonstrates Gaussian Elimination
package main

import (
    "fmt"
    "math"
)

func gaussianElimination(a [][]float64) {
    singular := false
    rows := len(a)
    cols := len(a[0])

    for c, r := 0, 0; c < cols && r < rows; c++ {
        // 1. find highest value in column below row to be pivot
        p, highest := r, 0.
        for i, row := range a[r:] {
            if abs := math.Abs(row[c]); abs > highest {
                p = r + i
                highest = abs
            }
        }
        highest = a[p][c] // correct sign

        if highest == 0. {
            if !singular {
                singular = true
                fmt.Println("This matrix is singular.")
            }
            continue
        }

        // 2. swap pivot with current row
        if p != r {
            a[r], a[p] = a[p], a[r]
        }

        for _, row := range a[r+1:] {
            // 3. find fraction from pivot value
            frac := row[c] / highest

            // 4. subtract row to set rest of column to zero
            for j := range row {
                row[j] -= frac * a[r][j]
            }

            // 5. ensure col goes to zero (no float rounding)
            row[c] = 0.
        }

        r++
    }
}

func gaussJordan(a [][]float64) {
    for r := len(a) - 1; r >= 0; r-- {
        // Find pivot col
        p := -1
        for c, cell := range a[r] {
            if cell != 0. {
                p = c
                break
            }
        }
        if p < 0 {
            continue
        }

        // Scale pivot r to 1.
        scale := a[r][p]
        for c := range a[r][p:] {
            a[r][p+c] /= scale
        }
        // Subtract pivot row from each row above
        for _, row := range a[:r] {
            scale = row[p]
            for c, cell := range a[r][p:] {
                row[p+c] -= cell * scale
            }
        }
    }
}

func backSubstitution(a [][]float64) []float64 {
    rows := len(a)
    cols := len(a[0])
    x := make([]float64, rows)
    for r := rows - 1; r >= 0; r-- {
        sum := 0.

        for c := cols - 2; c > r; c-- {
            sum += x[c] * a[r][c]
        }

        x[r] = (a[r][cols-1] - sum) / a[r][r]
    }
    return x
}

func printMatrixRow(row []float64) {
    fmt.Print("[")
    for _, cell := range row {
        fmt.Printf("%9.4f ", cell)
    }
    fmt.Println("]")
}

func printMatrix(a [][]float64) {
    for _, row := range a {
        printMatrixRow(row)
    }
    fmt.Println()
}

func main() {
    a := [][]float64{
        {2, 3, 4, 6},
        {1, 2, 3, 4},
        {3, -4, 0, 10},
    }
    fmt.Println("Original Matrix:")
    printMatrix(a)

    fmt.Println("Gaussian elimination:")
    gaussianElimination(a)
    printMatrix(a)

    gaussJordan(a)
    fmt.Println("Gauss-Jordan:")
    printMatrix(a)

    fmt.Println("Solutions are:")
    x := backSubstitution(a)
    printMatrixRow(x)
}

License

Code Examples

The code examples are licensed under the MIT license (found in LICENSE.md).

Text

The text of this chapter was written by James Schloss and is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License.

Images/Graphics
Pull Requests

After initial licensing (#560), the following pull requests have modified the text or graphics of this chapter:

  • none
