# Gaussian Elimination

So, how exactly do we go about solving a system of linear equations? Well, one way is *Gaussian Elimination*, which you may have encountered before in a math class or two. The basic idea is that we take a system of equations,

$$
\begin{align}
2x + 3y + 4z &= 6 \\
x + 2y + 3z &= 4 \\
3x - 4y &= 10
\end{align}
$$

and turn it into a matrix by using the coefficients in front of each variable,

$$
\left[
\begin{array}{ccc}
2 & 3 & 4 \\
1 & 2 & 3 \\
3 & -4 & 0
\end{array}
\right]
\left[
\begin{array}{c}
x \\ y \\ z
\end{array}
\right]
=
\left[
\begin{array}{c}
6 \\ 4 \\ 10
\end{array}
\right]
$$

Or more simply, as an augmented matrix:

$$
\left[
\begin{array}{ccc|c}
2 & 3 & 4 & 6 \\
1 & 2 & 3 & 4 \\
3 & -4 & 0 & 10
\end{array}
\right]
$$

Now, at first, this doesn't seem to help anything, so let's think of this in another way. Wouldn't it be great if the system of equations looked like this:

$$
\begin{align}
2x + 3y + 4z &= 6 \\
y + 2z &= 2 \\
11z &= 18
\end{align}
$$

Then we could just solve for $z$ and plug that value into the top two equations to solve for $x$ and $y$! In matrix form, it would look like this:

$$
\left[
\begin{array}{ccc|c}
2 & 3 & 4 & 6 \\
0 & 1 & 2 & 2 \\
0 & 0 & 11 & 18
\end{array}
\right]
$$
This matrix form has a particular name: *Row Echelon Form*.
Basically, any matrix can be considered in row echelon form if

- All non-zero rows are above rows of all zeros
- The leading coefficient or *pivot* (the first non-zero element in every row when reading from left to right) is to the right of the pivot of the row above it

All the following examples are in row echelon form:

The first two are probably the ones we are interested in; at the very least, they have the right dimensions to solve a system of equations. The last two systems are either under- or over-constrained; however, if you translate the last row of the second matrix into a system, you get $0 = 1$, which is a contradiction. This is because the matrix is singular, and there are no solutions to this particular system. Nevertheless, all of these matrices are in row echelon form.

Now, it seems obvious to point out that if we ignore the last column, row echelon form is an upper triangular matrix. This might not be important now, but it will play an important role in future discussions, so keep it buzzing in the back of your brain.

Now, row echelon form is nice, but wouldn't it be even better if our system of equations looked simply like this:

$$
\begin{align}
x &= \frac{18}{11} \\
y &= -\frac{14}{11} \\
z &= \frac{18}{11}
\end{align}
$$

Then we would know exactly what $x$, $y$, and $z$ are without any fuss! In matrix form, it looks like

$$
\left[
\begin{array}{ccc|c}
1 & 0 & 0 & \frac{18}{11} \\
0 & 1 & 0 & -\frac{14}{11} \\
0 & 0 & 1 & \frac{18}{11}
\end{array}
\right]
$$

And that's where we really want to get to for obvious reasons.
This introduces yet another matrix configuration: *Reduced Row Echelon Form*.
A matrix is in reduced row echelon form if it satisfies the following conditions:

- It is in row echelon form.
- Every pivot is 1 and is the only nonzero entry in its column.

All the following examples are in reduced row echelon form:

Again, only the first one (the identity-matrix-looking guy) is desirable in the context of solving a system of equations, but transforming any matrix into this form gives us an immediate and definitive answer to the question: can I solve my system?

Beyond solving a system, reshaping a matrix into this form makes it very easy to deduce other properties of the matrix, such as the rank. The rank of a matrix is the maximal number of linearly independent columns; in reduced row echelon form, it is simply the number of pivots.

For now, I hope the motivation is clear: we want to convert a matrix into row echelon and then reduced row echelon form to make large systems of equations trivial to solve, so we need some method to do that.
In general, the term *Gaussian Elimination* refers to the process of transforming a matrix into row echelon form, and the process of transforming a row echelon matrix into reduced row echelon is called *Gauss-Jordan Elimination*.
That said, the notation here is sometimes inconsistent.
Several authors use the term *Gaussian Elimination* to include Gauss-Jordan elimination as well.
In addition, the process of Gauss-Jordan elimination is sometimes called *Back-substitution*, which is also confusing because the term can also be used to mean solving a system of equations from row echelon form, without simplifying to reduced row echelon form.
For this reason, we will be using the following definitions in this chapter:

- **Gaussian Elimination:** The process of transforming a matrix into row echelon form
- **Gauss-Jordan Elimination:** The process of transforming a row echelon matrix into *reduced* row echelon form
- **Back-substitution:** The process of directly solving a row echelon matrix, *without* transforming it into reduced row echelon form

## The Method

Here I should point out that Gaussian elimination makes sense from a purely analytical point of view. For small systems of equations, it's relatively straightforward to do this method by hand; however, for large systems, this (of course) becomes tedious and we will need to find an appropriate numerical solution. For this reason, I have split this section into two parts. One will cover the analytical framework, and the other will cover an algorithm you can write in your favorite programming language.

In the end, reducing large systems of equations boils down to a game you play on a seemingly random matrix where you have the following moves available:

- You can swap any two rows
- You can multiply any row by a non-zero scale value
- You can add a multiple of any row to any other row

That's it. Before continuing, I suggest you try to recreate the row echelon matrix we made above. That is, do the following:

$$
\left[
\begin{array}{ccc|c}
2 & 3 & 4 & 6 \\
1 & 2 & 3 & 4 \\
3 & -4 & 0 & 10
\end{array}
\right]
\quad \rightarrow \quad
\left[
\begin{array}{ccc|c}
2 & 3 & 4 & 6 \\
0 & 1 & 2 & 2 \\
0 & 0 & 11 & 18
\end{array}
\right]
$$
There are plenty of different strategies you could use to do this, and no one strategy is better than the rest. Personally, I usually try to multiply each row in the matrix by different values and add rows together until the first column is all the same value, and then I subtract the first row from all subsequent rows. I then do the same thing for the following columns.

After you get an upper triangular matrix, the next step is diagonalizing to create the reduced row echelon form. In other words, we do the following:

$$
\left[
\begin{array}{ccc|c}
2 & 3 & 4 & 6 \\
0 & 1 & 2 & 2 \\
0 & 0 & 11 & 18
\end{array}
\right]
\quad \rightarrow \quad
\left[
\begin{array}{ccc|c}
1 & 0 & 0 & \frac{18}{11} \\
0 & 1 & 0 & -\frac{14}{11} \\
0 & 0 & 1 & \frac{18}{11}
\end{array}
\right]
$$

Here, the idea is similar to above. The strategy is the same as before, but it starts from the right-most column and subtracts upwards instead of downwards.

## The Algorithm

Now, the analytical method may seem straightforward, but the algorithm does not obviously follow from the game we were playing before, so we'll go through it step-by-step.

In general, do the following process for each column `col`:

1. Find the element with the largest absolute value in `col`, at or below the current row. If that value is $0$, the matrix is singular and the system has no solutions. Feel free to exit here, but if we want to be as general as possible, the algorithm can continue even in that case.
2. Swap the row containing that element with the current row.
3. For all remaining rows below, find the fraction that corresponds to the ratio of that row's value in `col` to the central pivot (the one you just swapped to the top).
4. Set all values in those rows to the value they were before, minus the top row multiplied by the fraction. This is essentially performing move 3 from above, except with an optimal multiplicative factor.
5. Set the value of each lower row's pivot column to 0.

In code, this looks like:

```julia
function gaussian_elimination(A::Array{Float64,2})
    rows = size(A, 1)
    cols = size(A, 2)

    # Row index
    row = 1

    # Main loop going through all columns
    for col = 1:(cols-1)

        # Step 1: finding the maximum element for each column
        max_index = argmax(abs.(A[row:end, col])) + row - 1

        # Check to make sure matrix is good!
        if (A[max_index, col] == 0)
            println("matrix is singular!")
            continue
        end

        # Step 2: swap row with highest value for that column to the top
        temp_vector = A[max_index, :]
        A[max_index, :] = A[row, :]
        A[row, :] = temp_vector

        # Loop for all remaining rows
        for i = (row+1):rows

            # Step 3: finding fraction
            fraction = A[i, col] / A[row, col]

            # loop through all columns for that row
            for j = (col+1):cols

                # Step 4: re-evaluate each element
                A[i, j] -= A[row, j] * fraction
            end

            # Step 5: set lower elements to 0
            A[i, col] = 0
        end
        row += 1
    end
end
```

```c
void gaussian_elimination(double *a, const size_t rows, const size_t cols) {
    size_t row = 0;

    for (size_t col = 0; col < cols - 1; ++col) {
        size_t pivot = row;

        // Step 1: find the row with the largest element in this column
        for (size_t i = row + 1; i < rows; ++i) {
            if (fabs(a[i * cols + col]) > fabs(a[pivot * cols + col])) {
                pivot = i;
            }
        }

        // Check to make sure the matrix is good!
        if (a[pivot * cols + col] == 0) {
            printf("The matrix is singular.\n");
            continue;
        }

        // Step 2: swap the pivot row with the current row
        if (row != pivot) {
            swap_rows(a, row, pivot, cols);
        }

        for (size_t i = row + 1; i < rows; ++i) {
            // Step 3: find the fraction
            double scale = a[i * cols + col] / a[row * cols + col];

            // Step 4: re-evaluate each element in the row
            for (size_t j = col + 1; j < cols; ++j) {
                a[i * cols + j] -= a[row * cols + j] * scale;
            }

            // Step 5: set lower elements to 0
            a[i * cols + col] = 0;
        }
        row++;
    }
}
```

```rust
fn gaussian_elimination(a: &mut Matrix) {
    for k in 0..min(a.cols, a.rows) {
        // Step 1: find the maximum element for this column
        let mut max_row = k;
        let mut max_value = a[(k, k)].abs();
        for row in (k + 1)..a.rows {
            if max_value < a[(row, k)].abs() {
                max_value = a[(row, k)].abs();
                max_row = row;
            }
        }

        // Check to make sure the matrix is good
        if a[(max_row, k)] == 0.0 {
            println!("Matrix is singular, aborting");
            return;
        }

        // Step 2: swap the row with the highest value for this column to the top
        a.swap_rows(k, max_row);

        // Loop over all remaining rows
        for i in k + 1..a.rows {
            // Step 3: find the fraction
            let fraction = a[(i, k)] / a[(k, k)];

            // Loop through all columns for that row
            for j in (k + 1)..a.cols {
                // Step 4: re-evaluate each element
                a[(i, j)] -= a[(k, j)] * fraction;
            }

            // Step 5: set lower elements to 0
            a[(i, k)] = 0.0;
        }
    }
}
```

```haskell
swapRows :: Int -> Int -> Matrix a -> Matrix a
swapRows r1 r2 m
  | r1 == r2 = m
  | otherwise =
    m //
    concat [[((r2, c), m ! (r1, c)), ((r1, c), m ! (r2, c))] | c <- [c1 .. cn]]
  where
    ((_, c1), (_, cn)) = bounds m

subRows ::
     Fractional a
  => (Int, Int) -- pivot location
  -> (Int, Int) -- rows to cover
  -> (Int, Int) -- columns to cover
  -> Matrix a
  -> Matrix a
subRows (r, c) (r1, rn) (c1, cn) m =
  accum
    (-)
    m
    [ ((i, j), m ! (i, c) * m ! (r, j) / m ! (r, c))
    | i <- [r1 .. rn]
    , j <- [c1 .. cn]
    ]

gaussianElimination :: (Fractional a, Ord a) => Matrix a -> Matrix a
gaussianElimination mat = go (r1, c1) mat
  where
    ((r1, c1), (rn, cn)) = bounds mat
    go (r, c) m
      | c == cn = m
      | pivot == 0 = go (r, c + 1) m
      | otherwise = go (r + 1, c + 1) $ subRows (r, c) (r + 1, rn) (c, cn) m'
      where
        (target, pivot) =
          maximumBy (compare `on` abs . snd) [(k, m ! (k, c)) | k <- [r .. rn]]
        m' = swapRows r target m
```

Now, to be clear: this algorithm creates an upper-triangular matrix.
In other words, it only creates a matrix in *row echelon form*, not *reduced row echelon form*.
If the matrix is found to be singular during this process, the system of equations is either over- or under-determined and no general solution exists.
For this reason, many implementations of this method will stop the moment the matrix is found to be singular.
In this implementation, we allowed for the more general case and opted to simply print a warning when the matrix is singular instead.
If you intend to solve a system of equations, then it makes sense to stop the method the moment you know there is no general solution, so some small modification might be necessary!

So what do we do from here? Well, we continue further reducing the matrix; however, there are two ways to do this:

- Reduce the matrix further into *reduced* row echelon form with Gauss-Jordan elimination
- Solve the system directly with *back-substitution*, if the matrix allows for such solutions

Let's start with Gauss-Jordan elimination and then move on to back-substitution.

## Gauss-Jordan Elimination

Gauss-Jordan Elimination is precisely what we said above. We basically need to find the pivot of every row and set that value to 1. Afterwards, we subtract upwards until all values above the pivot are 0 before moving on to the next column. Here it is in code:

```julia
function gauss_jordan_elimination(A::Array{Float64,2})
    rows = size(A, 1)
    cols = size(A, 2)

    # Row index of the current pivot
    row = 1

    for col = 1:cols-1
        if (A[row, col] != 0)

            # divide the row by its pivot, leaving the pivot as 1
            for i = cols:-1:col
                A[row, i] /= A[row, col]
            end

            # subtract the pivot row from the rows above,
            # setting all values above the pivot to 0
            for i = 1:row-1
                for j = cols:-1:col
                    A[i, j] -= A[i, col] * A[row, j]
                end
            end
            row += 1
        end
    end
end
```

```c
void gauss_jordan(double *a, const size_t rows, const size_t cols) {
    int row = 0;

    for (int col = 0; col < cols - 1; ++col) {
        if (a[row * cols + col] != 0) {

            // divide the row by its pivot, leaving the pivot as 1
            for (int i = cols - 1; i > col - 1; --i) {
                a[row * cols + i] /= a[row * cols + col];
            }

            // subtract the pivot row from the rows above,
            // setting all values above the pivot to 0
            for (int i = 0; i < row; ++i) {
                for (int j = cols - 1; j > col - 1; --j) {
                    a[i * cols + j] -= a[i * cols + col] * a[row * cols + j];
                }
            }
            row++;
        }
    }
}
```

This code does not yet exist in Rust, so here's the Julia code instead (sorry for the inconvenience):

```julia
function gauss_jordan_elimination(A::Array{Float64,2})
    rows = size(A, 1)
    cols = size(A, 2)

    # Row index of the current pivot
    row = 1

    for col = 1:cols-1
        if (A[row, col] != 0)

            # divide the row by its pivot, leaving the pivot as 1
            for i = cols:-1:col
                A[row, i] /= A[row, col]
            end

            # subtract the pivot row from the rows above,
            # setting all values above the pivot to 0
            for i = 1:row-1
                for j = cols:-1:col
                    A[i, j] -= A[i, col] * A[row, j]
                end
            end
            row += 1
        end
    end
end
```

```haskell
gaussJordan :: (Fractional a, Eq a) => Matrix a -> Matrix a
gaussJordan mat = go (r1, c1) mat
  where
    ((r1, c1), (rn, cn)) = bounds mat
    go (r, c) m
      | c == cn = m
      | m ! (r, c) == 0 = go (r, c + 1) m
      | otherwise = go (r + 1, c + 1) $ subRows (r, c) (r1, r - 1) (c, cn) m'
      where
        m' = accum (/) m [((r, j), m ! (r, c)) | j <- [c .. cn]]
```

## Back-substitution

The idea of back-substitution is straightforward: we create a vector of solutions and iteratively solve for each variable by plugging in the variables we have already solved for. For example, if our matrix looks like this:

$$
\left[
\begin{array}{ccc|c}
2 & 3 & 4 & 6 \\
0 & 1 & 2 & 2 \\
0 & 0 & 11 & 18
\end{array}
\right]
$$

We can quickly solve for $z$, and then use that to solve for $y$ by plugging in for $z$. After that, we simply need to solve for $x$ in a similar fashion. In code, this involves keeping a rolling sum of all the values we substitute in, like so:

```julia
function back_substitution(A::Array{Float64,2})
    rows = size(A, 1)
    cols = size(A, 2)

    # Creating the solution vector
    soln = Vector{Float64}(undef, rows)

    for i = rows:-1:1
        sum = 0.0
        for j = rows:-1:(i+1)
            sum += soln[j] * A[i, j]
        end
        soln[i] = (A[i, cols] - sum) / A[i, i]
    end

    return soln
end
```

```c
void back_substitution(const double *a, double *x, const size_t rows,
                       const size_t cols) {
    for (int i = rows - 1; i >= 0; --i) {
        double sum = 0.0;
        for (size_t j = cols - 2; j > i; --j) {
            sum += x[j] * a[i * cols + j];
        }
        x[i] = (a[i * cols + cols - 1] - sum) / a[i * cols + i];
    }
}
```

```rust
fn back_substitution(a: &Matrix) -> Vec<f64> {
    let mut soln = vec![0.0; a.rows];
    soln[a.rows - 1] = a[(a.rows - 1, a.cols - 1)] / a[(a.rows - 1, a.cols - 2)];

    for i in (0..a.rows - 1).rev() {
        let mut sum = 0.0;
        for j in (i..a.rows).rev() {
            sum += soln[j] * a[(i, j)];
        }
        soln[i] = (a[(i, a.cols - 1)] - sum) / a[(i, i)];
    }
    soln
}
```

```haskell
backSubstitution :: (Fractional a) => Matrix a -> Vector a
backSubstitution m = sol
  where
    ((r1, _), (rn, cn)) = bounds m
    sol =
      listArray (r1, rn) [(m ! (r, cn) - sum' r) / m ! (r, r) | r <- [r1 .. rn]]
    sum' r = sum [m ! (r, k) * sol ! k | k <- [r + 1 .. rn]]
```

## Conclusions

And with that, we have two possible ways to reduce our system of equations. If we are sure our matrix is not singular and that a solution exists, it's fastest to use back-substitution to find our solution. If no solution exists or we are trying to find a reduced row echelon matrix, then Gauss-Jordan elimination is best. As we said at the start, the notation for Gaussian Elimination is rather ambiguous in the literature, so we are hoping that the definitions provided here are clear and consistent enough to cover all the bases.

As for what's next... Well, we are in for a treat! The above algorithm clearly has 3 nested `for` loops, and will thus have a complexity of $\mathcal{O}(n^3)$, which is abysmal! If we can reduce the matrix to a specifically **tridiagonal** matrix, we can actually solve the system in $\mathcal{O}(n)$! How? Well, we can use an algorithm known as the *Tri-Diagonal Matrix Algorithm* (TDMA), also known as the *Thomas Algorithm*.

## Example Code

```julia
function gaussian_elimination(A::Array{Float64,2})
    rows = size(A, 1)
    cols = size(A, 2)

    # Row index
    row = 1

    # Main loop going through all columns
    for col = 1:(cols-1)

        # Step 1: finding the maximum element for each column
        max_index = argmax(abs.(A[row:end, col])) + row - 1

        # Check to make sure matrix is good!
        if (A[max_index, col] == 0)
            println("matrix is singular!")
            continue
        end

        # Step 2: swap row with highest value for that column to the top
        temp_vector = A[max_index, :]
        A[max_index, :] = A[row, :]
        A[row, :] = temp_vector

        # Loop for all remaining rows
        for i = (row+1):rows

            # Step 3: finding fraction
            fraction = A[i, col] / A[row, col]

            # loop through all columns for that row
            for j = (col+1):cols

                # Step 4: re-evaluate each element
                A[i, j] -= A[row, j] * fraction
            end

            # Step 5: set lower elements to 0
            A[i, col] = 0
        end
        row += 1
    end
end

function back_substitution(A::Array{Float64,2})
    rows = size(A, 1)
    cols = size(A, 2)

    # Creating the solution vector
    soln = Vector{Float64}(undef, rows)

    for i = rows:-1:1
        sum = 0.0
        for j = rows:-1:(i+1)
            sum += soln[j] * A[i, j]
        end
        soln[i] = (A[i, cols] - sum) / A[i, i]
    end

    return soln
end

function gauss_jordan_elimination(A::Array{Float64,2})
    rows = size(A, 1)
    cols = size(A, 2)

    # Row index of the current pivot
    row = 1

    for col = 1:cols-1
        if (A[row, col] != 0)

            # divide the row by its pivot, leaving the pivot as 1
            for i = cols:-1:col
                A[row, i] /= A[row, col]
            end

            # subtract the pivot row from the rows above,
            # setting all values above the pivot to 0
            for i = 1:row-1
                for j = cols:-1:col
                    A[i, j] -= A[i, col] * A[row, j]
                end
            end
            row += 1
        end
    end
end

function main()
    A = [2. 3 4 6;
         1 2 3 4;
         3 -4 0 10]

    gaussian_elimination(A)
    println(A)

    gauss_jordan_elimination(A)
    println(A)

    soln = back_substitution(A)
    println(soln)
end

main()
```

```c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

void swap_rows(double *a, const size_t i, const size_t pivot,
               const size_t cols) {
    for (size_t j = 0; j < cols; ++j) {
        double tmp = a[i * cols + j];
        a[i * cols + j] = a[pivot * cols + j];
        a[pivot * cols + j] = tmp;
    }
}

void gaussian_elimination(double *a, const size_t rows, const size_t cols) {
    size_t row = 0;

    for (size_t col = 0; col < cols - 1; ++col) {
        size_t pivot = row;

        for (size_t i = row + 1; i < rows; ++i) {
            if (fabs(a[i * cols + col]) > fabs(a[pivot * cols + col])) {
                pivot = i;
            }
        }

        if (a[pivot * cols + col] == 0) {
            printf("The matrix is singular.\n");
            continue;
        }

        if (row != pivot) {
            swap_rows(a, row, pivot, cols);
        }

        for (size_t i = row + 1; i < rows; ++i) {
            double scale = a[i * cols + col] / a[row * cols + col];

            for (size_t j = col + 1; j < cols; ++j) {
                a[i * cols + j] -= a[row * cols + j] * scale;
            }

            a[i * cols + col] = 0;
        }
        row++;
    }
}

void back_substitution(const double *a, double *x, const size_t rows,
                       const size_t cols) {
    for (int i = rows - 1; i >= 0; --i) {
        double sum = 0.0;
        for (size_t j = cols - 2; j > i; --j) {
            sum += x[j] * a[i * cols + j];
        }
        x[i] = (a[i * cols + cols - 1] - sum) / a[i * cols + i];
    }
}

void gauss_jordan(double *a, const size_t rows, const size_t cols) {
    int row = 0;

    for (int col = 0; col < cols - 1; ++col) {
        if (a[row * cols + col] != 0) {

            for (int i = cols - 1; i > col - 1; --i) {
                a[row * cols + i] /= a[row * cols + col];
            }

            for (int i = 0; i < row; ++i) {
                for (int j = cols - 1; j > col - 1; --j) {
                    a[i * cols + j] -= a[i * cols + col] * a[row * cols + j];
                }
            }
            row++;
        }
    }
}

int main() {
    double a[3][4] = {{3.0, 2.0, -4.0, 3.0},
                      {2.0, 3.0, 3.0, 15.0},
                      {5.0, -3.0, 1.0, 14.0}};

    gaussian_elimination((double *)a, 3, 4);

    printf("Gaussian elimination:\n");
    for (size_t i = 0; i < 3; ++i) {
        printf("[");
        for (size_t j = 0; j < 4; ++j) {
            printf("%f ", a[i][j]);
        }
        printf("]\n");
    }

    printf("\nGauss-Jordan:\n");
    gauss_jordan((double *)a, 3, 4);
    for (size_t i = 0; i < 3; ++i) {
        printf("[");
        for (size_t j = 0; j < 4; ++j) {
            printf("%f ", a[i][j]);
        }
        printf("]\n");
    }

    printf("\nSolutions are:\n");
    double x[3] = {0, 0, 0};
    back_substitution((double *)a, x, 3, 4);
    printf("(%f,%f,%f)\n", x[0], x[1], x[2]);

    return 0;
}
```

```rust
// submitted by jess 3jane

use std::cmp::min;
use std::ops::{Index, IndexMut};

pub struct Matrix {
    rows: usize,
    cols: usize,
    data: Vec<f64>,
}

impl Matrix {
    fn new(rows: usize, cols: usize, data: &[f64]) -> Matrix {
        Matrix {
            rows,
            cols,
            data: data.to_vec(),
        }
    }

    fn swap_rows(&mut self, a: usize, b: usize) {
        for col in 0..self.cols {
            self.data.swap(a * self.cols + col, b * self.cols + col);
        }
    }
}

impl Index<(usize, usize)> for Matrix {
    type Output = f64;
    fn index(&self, (row, col): (usize, usize)) -> &f64 {
        &self.data[row * self.cols + col]
    }
}

impl IndexMut<(usize, usize)> for Matrix {
    fn index_mut(&mut self, (row, col): (usize, usize)) -> &mut f64 {
        &mut self.data[row * self.cols + col]
    }
}

fn gaussian_elimination(a: &mut Matrix) {
    for k in 0..min(a.cols, a.rows) {
        // Step 1: find the maximum element for this column
        let mut max_row = k;
        let mut max_value = a[(k, k)].abs();
        for row in (k + 1)..a.rows {
            if max_value < a[(row, k)].abs() {
                max_value = a[(row, k)].abs();
                max_row = row;
            }
        }

        // Check to make sure the matrix is good
        if a[(max_row, k)] == 0.0 {
            println!("Matrix is singular, aborting");
            return;
        }

        // Step 2: swap the row with the highest value for this column to the top
        a.swap_rows(k, max_row);

        // Loop over all remaining rows
        for i in k + 1..a.rows {
            // Step 3: find the fraction
            let fraction = a[(i, k)] / a[(k, k)];

            // Loop through all columns for that row
            for j in (k + 1)..a.cols {
                // Step 4: re-evaluate each element
                a[(i, j)] -= a[(k, j)] * fraction;
            }

            // Step 5: set lower elements to 0
            a[(i, k)] = 0.0;
        }
    }
}

fn back_substitution(a: &Matrix) -> Vec<f64> {
    let mut soln = vec![0.0; a.rows];
    soln[a.rows - 1] = a[(a.rows - 1, a.cols - 1)] / a[(a.rows - 1, a.cols - 2)];

    for i in (0..a.rows - 1).rev() {
        let mut sum = 0.0;
        for j in (i..a.rows).rev() {
            sum += soln[j] * a[(i, j)];
        }
        soln[i] = (a[(i, a.cols - 1)] - sum) / a[(i, i)];
    }
    soln
}

fn main() {
    // The example matrix from the text
    let mut a = Matrix::new(
        3,
        4,
        &vec![2.0, 3.0, 4.0, 6.0, 1.0, 2.0, 3.0, 4.0, 3.0, -4.0, 0.0, 10.0],
    );
    gaussian_elimination(&mut a);
    let soln = back_substitution(&a);
    println!("Solution: {:?}", soln);
}
```

```haskell
import Data.Array
import Data.Function (on)
import Data.List (intercalate, maximumBy)
import Data.Ratio

type Matrix a = Array (Int, Int) a

type Vector a = Array Int a

swapRows :: Int -> Int -> Matrix a -> Matrix a
swapRows r1 r2 m
  | r1 == r2 = m
  | otherwise =
    m //
    concat [[((r2, c), m ! (r1, c)), ((r1, c), m ! (r2, c))] | c <- [c1 .. cn]]
  where
    ((_, c1), (_, cn)) = bounds m

subRows ::
     Fractional a
  => (Int, Int) -- pivot location
  -> (Int, Int) -- rows to cover
  -> (Int, Int) -- columns to cover
  -> Matrix a
  -> Matrix a
subRows (r, c) (r1, rn) (c1, cn) m =
  accum
    (-)
    m
    [ ((i, j), m ! (i, c) * m ! (r, j) / m ! (r, c))
    | i <- [r1 .. rn]
    , j <- [c1 .. cn]
    ]

gaussianElimination :: (Fractional a, Ord a) => Matrix a -> Matrix a
gaussianElimination mat = go (r1, c1) mat
  where
    ((r1, c1), (rn, cn)) = bounds mat
    go (r, c) m
      | c == cn = m
      | pivot == 0 = go (r, c + 1) m
      | otherwise = go (r + 1, c + 1) $ subRows (r, c) (r + 1, rn) (c, cn) m'
      where
        (target, pivot) =
          maximumBy (compare `on` abs . snd) [(k, m ! (k, c)) | k <- [r .. rn]]
        m' = swapRows r target m

gaussJordan :: (Fractional a, Eq a) => Matrix a -> Matrix a
gaussJordan mat = go (r1, c1) mat
  where
    ((r1, c1), (rn, cn)) = bounds mat
    go (r, c) m
      | c == cn = m
      | m ! (r, c) == 0 = go (r, c + 1) m
      | otherwise = go (r + 1, c + 1) $ subRows (r, c) (r1, r - 1) (c, cn) m'
      where
        m' = accum (/) m [((r, j), m ! (r, c)) | j <- [c .. cn]]

backSubstitution :: (Fractional a) => Matrix a -> Vector a
backSubstitution m = sol
  where
    ((r1, _), (rn, cn)) = bounds m
    sol =
      listArray (r1, rn) [(m ! (r, cn) - sum' r) / m ! (r, r) | r <- [r1 .. rn]]
    sum' r = sum [m ! (r, k) * sol ! k | k <- [r + 1 .. rn]]

printM :: (Show a) => Matrix a -> String
printM m =
  let ((r1, c1), (rn, cn)) = bounds m
   in unlines
        [ intercalate "\t" [show $ m ! (r, c) | c <- [c1 .. cn]]
        | r <- [r1 .. rn]
        ]

printV :: (Show a) => Vector a -> String
printV = unlines . map show . elems

main :: IO ()
main = do
  let mat = [2, 3, 4, 6, 1, 2, 3, 4, 3, -4, 0, 10] :: [Ratio Int]
      m = listArray ((1, 1), (3, 4)) mat
  putStrLn "Original Matrix:"
  putStrLn $ printM m
  putStrLn "Echelon form"
  putStrLn $ printM $ gaussianElimination m
  putStrLn "Reduced echelon form"
  putStrLn $ printM $ gaussJordan $ gaussianElimination m
  putStrLn "Solution from back substitution"
  putStrLn $ printV $ backSubstitution $ gaussianElimination m
```