Alright, I am going to come right out and say it: convolutions can be confusing. Not only are they hard to really describe, but if you do not see them in practice, it's hard to understand why you would ever want to use them. I'm going to do what I can to describe them in an intuitive way; however, I may need to come back to this in the future. Let me know if there is anything here that is unclear, and I'll do what I can to clear it up.

If you take two functions $f(x)$ and $g(x)$, there are a number of ways you can combine them. All basic operations can do this (addition, subtraction, multiplication, and division), but there are also special operations that only work with functions and do not work on standard variables or numbers. For example, $f(g(x))$ is a composition of the two functions, where you plug $g(x)$ into $f(x)$. A convolution is another function-related operation, and is often notated with a star ($*$) operator, where

$$f*g = c$$

provides a third function, $c$, that blends $f$ and $g$.

As a rather important side note: there is an incredibly similar operator known as a correlation, which will be discussed in the near future. For now, let's focus on convolutions, which are defined as:

$$(f*g)(t) = \int_{-\infty}^{\infty} f(\tau)g(t-\tau)\,d\tau$$
Note that in this case, $t$ is not necessarily a spatial element. Often times, it is time or something else entirely! The easiest way to think about this is that the function $g(\tau)$ is being shifted across all of space by the variable $t$. At every point $t$, we multiply $f(\tau)$ and $g(t-\tau)$ and integrate the product to find the convolution for that spatial step, $(f*g)(t)$. Note that in code, this is often discretized to look like:

$$(f*g)[n] = \sum_{m=0}^{n} f[m]g[n-m]$$

Where f[n] and g[n] are arrays of some form. This means we basically just need to keep one array steady, flip the second array around, and move it through the first array one step at a time, performing a simple element-wise multiplication and sum at each step.
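To see the flip-and-slide procedure in action, here is a small worked example in Python; the arrays are made up purely for illustration, and numpy's convolve is used only as a cross-check:

```python
import numpy as np

# two small illustrative arrays
f = [1, 2, 3]
g = [4, 5, 6]

# flip g and slide it across f, multiplying and summing at each step
out = []
for n in range(len(f) + len(g) - 1):
    s = 0
    for m in range(len(f)):
        if 0 <= n - m < len(g):
            s += f[m] * g[n - m]
    out.append(s)

print(out)                             # [4, 13, 28, 27, 18]
assert out == list(np.convolve(f, g))  # matches numpy's convolution
```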

In code, this looks something like:

# Julia
function conv(signal1::Vector{ComplexF64}, signal2::Vector{ComplexF64})
    # the full output is one element shorter than both signals combined
    n = length(signal1) + length(signal2) - 1
    out = zeros(ComplexF64, n)

    # slide signal2 through signal1 (Julia arrays are 1-indexed)
    for i = 1:n
        for j = 1:i
            if j <= length(signal1) && i - j + 1 <= length(signal2)
                out[i] += signal1[j] * signal2[i - j + 1]
            end
        end
    end

    return out
end
-- Haskell
import Data.List (tails)

convolution :: (Num a) => [a] -> [a] -> [a]
convolution x = map (sum . zipWith (*) (reverse x)) . spread
  where spread = init . tails . (replicate (length x - 1) 0 ++)
// C
#include <complex.h>
#include <stddef.h>

void conv(double complex *signal1, double complex *signal2, double complex *out,
          size_t n1, size_t n2) {
    double complex sum = 0;

    // the full output is one element shorter than both signals combined
    for (size_t i = 0; i < n1 + n2 - 1; ++i) {
        for (size_t j = 0; j <= i; ++j) {
            if (j < n1 && (i - j) < n2) {
                sum += signal1[j] * signal2[i - j];
            }
        }
        out[i] = sum;
        sum = 0;
    }
}
// C++
#include <complex>
#include <cstddef>

using c64 = std::complex<double>;

template <typename S1, typename S2, typename Out>
void conv(
    S1 const s1,
    S1 const s1_last,
    S2 const s2,
    S2 const s2_last,
    Out const out) {
  auto const size1 = s1_last - s1;
  auto const size2 = s2_last - s2;
  auto const size = size1 + size2 - 1;

  for (std::ptrdiff_t i = 0; i < size; ++i) {
    c64 sum = 0;
    for (std::ptrdiff_t j = 0; j <= i; ++j) {
      if (j < size1 && (i - j) < size2) {
        sum += s1[j] * s2[i - j];
      }
    }
    out[i] = sum;
  }
}
# Python
def conv(signal1, signal2):
    """Discrete convolution by definition"""

    n = len(signal1) + len(signal2) - 1
    out = []

    for i in range(n):
        s = 0

        for j in range(i + 1):
            if j < len(signal1) and i - j < len(signal2):
                s += signal1[j] * signal2[i - j]

        out.append(s)

    return out

Note that in this case, the output array will be the size of f[n] and g[n] put together (one element shorter than the sum of their lengths, to be exact). Sometimes, though, we have a large f[n] and a small g[n]. In this case, g[n] is often called a filter, and when we are using a filter on an array (that might represent an image or some other form of data), we often want the output array to be the same size as the input. In this case, rather than outputting a larger array, we do something special at the borders of the array, such as treating all values beyond the edges as zero. There are different methods of dealing with the edges in this case, so it's best to use whichever seems right when the situation arises.
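As one possible sketch of this same-size output with zero-padded borders (the function name conv_same and the centering convention are my own assumptions, not a standard):

```python
import numpy as np

def conv_same(signal, filt):
    """Hypothetical helper: convolve, but keep the output the same size
    as the input by treating values beyond the borders as zero."""
    n, k = len(signal), len(filt)
    half = k // 2                  # centering offset for odd-length filters
    out = np.zeros(n)
    for i in range(n):
        for j in range(k):
            idx = i - j + half     # center the flipped filter on sample i
            if 0 <= idx < n:
                out[i] += signal[idx] * filt[j]
    return out

# a difference filter on a ramp: the output stays 5 elements long
print(conv_same([1., 2., 3., 4., 5.], [1., 0., -1.]))
```

For odd-length filters, this matches what libraries usually call "same" mode convolution.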

Convolutional Theorem

Now, let me tell you about a bit of black computational magic:

Convolutions can be performed with Fourier Transforms!

That is crazy! It's also incredibly hard to explain, so let me do my best. As described in the chapter on Fourier Transforms, these transforms allow programmers to move from real space to frequency space. When we transform a simple wave into frequency space, we see a single peak related to the frequency of that wave. No matter what function we send into a Fourier Transform, the resulting frequency-space image can be interpreted as a sum of different waves, each with a specified frequency.

So here's the idea: if we take two functions $f(x)$ and $g(x)$ and move them to frequency space to be $\hat{f}(\xi)$ and $\hat{g}(\xi)$, we can then multiply those two functions and transform the product back into real space to blend the two signals together. In this way, we will have a third function that relates the frequency-space images of the two input functions. This is precisely a convolution!

Don't believe me? Well, this is because of something known as the convolution theorem, which looks something like this:

$$\mathcal{F}(f*g) = \mathcal{F}(f) \cdot \mathcal{F}(g)$$

Where $\mathcal{F}$ denotes the Fourier Transform. Now, by using a Fast Fourier Transform (FFT) in code, this takes a standard convolution on two arrays of length $n$, which is an $\mathcal{O}(n^2)$ process, down to $\mathcal{O}(n \log n)$. This means that the convolution theorem is fundamental to creating fast convolutional methods for large inputs, assuming both input signals are of similar size. That said, it is debatable whether the convolution theorem will actually be faster when the filter size is small. Also: depending on the language used, we might need to pull in a separate library for FFTs.
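Before diving into implementations, the theorem is easy to verify numerically. The following Python snippet (with arbitrary random signals) computes the convolution directly from the sum above, with indices wrapping around the ends of the arrays, and compares it against the product taken through frequency space:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(8)
g = rng.standard_normal(8)

# direct convolution of two equal-length signals; the index n - m
# wraps around the array ends, which is exactly what the FFT assumes
direct = np.array([sum(f[m] * g[(n - m) % 8] for m in range(8))
                   for n in range(8)])

# the same operation through frequency space
via_fft = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))

print(np.allclose(direct, via_fft))  # True
```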

That said, Julia makes an fft routine readily available (through the FFTW package in recent versions), so the code for this method could not be simpler:

# Julia
using FFTW

function conv_fft(signal1::Vector{ComplexF64}, signal2::Vector{ComplexF64})
    return ifft(fft(signal1).*fft(signal2))
end

Where the .* operator is an element-wise multiplication.

-- Haskell
convolutionFFT :: [Complex Double] -> [Complex Double] -> [Complex Double]
convolutionFFT x y = elems $ idft $ liftArray2 (*) (fft x) (fft y)
  where fft a = dft $ listArray (1, length a) a

Where liftArray2 (*) performs the element-wise multiplication of the two transformed arrays, and dft and idft come from an external Fourier Transform library.

// C
void conv_fft(double complex *signal1, double complex *signal2,
              double complex *out, size_t n) {
    // fft and ifft are in-place transforms assumed to be defined
    // elsewhere (see the chapter on Fourier Transforms)
    fft(signal1, n);
    fft(signal2, n);

    for (size_t i = 0; i < n; ++i) {
        out[i] = signal1[i] * signal2[i];
    }

    ifft(out, n);
}
// C++
template <typename S1, typename S2, typename Out>
void conv_fft(S1 const s1, S1 const s1_last, S2 const s2, Out const out) {
  auto const size = s1_last - s1;

  // fft and inverse_fft are assumed to be defined elsewhere
  // (see the chapter on Fourier Transforms)
  fft(s1, s1_last);
  fft(s2, s2 + size);

  auto s1_it = s1;
  auto s2_it = s2;
  auto out_it = out;
  for (; s1_it != s1_last; ++s1_it, ++s2_it, ++out_it) {
    *out_it = *s1_it * *s2_it;
  }

  inverse_fft(out, out_it);
}
# Python
from numpy.fft import fft, ifft

def conv_fft(signal1, signal2):
    """Convolution using fft and the convolution theorem"""

    signal1 = list(signal1)
    signal2 = list(signal2)

    # pad signals with zeros to the same length
    max_len = max(len(signal1), len(signal2))

    for i in range(max_len - len(signal1)):
        signal1.append(0)
    for i in range(max_len - len(signal2)):
        signal2.append(0)

    fft_s1 = fft(signal1)
    fft_s2 = fft(signal2)
    out = []

    for i in range(max_len):
        out.append(fft_s1[i] * fft_s2[i])

    return list(ifft(out))

This method also has the added advantage that it will always output an array of the same size as your signal; however, if your signals are not of equal size, we need to pad the smaller signal with zeros. Also note that the Fourier Transform is a periodic or cyclic operation, so there are no real edges in this method; instead, the arrays "wrap around" to the other side. For this reason, this convolution is often called a cyclic convolution, as opposed to the linear convolution above. Note that cyclic convolutions can definitely still be done without Fourier Transforms, and we can do linear convolutions with Fourier Transforms, but doing so makes the code slightly more complicated than described above.
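As a sketch of that slightly more complicated linear-convolution-by-FFT code (the function name conv_fft_linear is my own), one common trick is to zero-pad both signals to the full linear output length, so the cyclic wrap-around only ever touches zeros:

```python
import numpy as np

def conv_fft_linear(signal1, signal2):
    """Linear (non-cyclic) convolution via the convolution theorem,
    by zero-padding both signals to the full output length."""
    n = len(signal1) + len(signal2) - 1
    s1 = np.concatenate((signal1, np.zeros(n - len(signal1))))
    s2 = np.concatenate((signal2, np.zeros(n - len(signal2))))
    return np.real(np.fft.ifft(np.fft.fft(s1) * np.fft.fft(s2)))

# matches the direct linear convolution from earlier in the chapter
print(np.allclose(conv_fft_linear([1, 2, 3], [4, 5, 6]),
                  np.convolve([1, 2, 3], [4, 5, 6])))  # True
```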
