Autoencoder: Denoise image using UpSampling2D and Conv2DTranspose Layers (Part: 3)

Photo by Bekky Bekks on Unsplash

For better understanding, this post is divided into three parts:

Part 1: GAN, Autoencoders: UpSampling2D and Conv2DTranspose

This is the introductory part, where I discuss some basic terms and processes used in this tutorial. It will help us grasp the concepts and better understand the other parts.

Part 2: Denoising image with Upsampling Layer

This part demonstrates how we can use the upsampling method to denoise images. It is implemented using the notMNIST dataset.

Part 3: Denoising image with Transposed Convolution Layer

This part is similar to the previous one, but I will use transposed convolution for denoising. It is implemented using the famous MNIST dataset.

Let’s start …

Part 3: Denoising image with Transposed Convolution Layer

In this part, we will use the MNIST dataset of handwritten digits. It is a well-known dataset and needs no introduction, so we will import the necessary libraries and load it into our project.

As usual, we will use Keras with TensorFlow as a backend.

Keras (with TensorFlow as its backend) bundles a collection of ready-to-use datasets. We will load the MNIST dataset using the load_data() method.
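A minimal sketch of the loading step, assuming the Keras-bundled copy of MNIST (the labels are discarded since a denoising autoencoder does not need them):

```python
# Load MNIST from Keras; labels are unused for denoising, so we drop them.
from tensorflow.keras.datasets import mnist

(x_train, _), (x_test, _) = mnist.load_data()
print(x_train.shape, x_test.shape)  # (60000, 28, 28) (10000, 28, 28)
```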

Output:

As we can see, our MNIST dataset consists of 60000 training images and 10000 test images. Each image is 28x28 pixels with a single gray channel. We will scale the pixel values to the [0.0, 1.0] range so the model handles them better.
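The scaling step can be sketched as follows; the snippet reloads the data so it runs on its own, and it also adds the explicit gray-channel axis the convolutional layers expect:

```python
import numpy as np
from tensorflow.keras.datasets import mnist

(x_train, _), (x_test, _) = mnist.load_data()

# Scale pixel values from [0, 255] to [0.0, 1.0].
x_train = x_train.astype("float32") / 255.0
x_test = x_test.astype("float32") / 255.0

# Add the single gray-channel axis: (N, 28, 28) -> (N, 28, 28, 1).
x_train = np.expand_dims(x_train, -1)
x_test = np.expand_dims(x_test, -1)
```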

Now we will define some variables for our model.
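The original post does not show which variables it defines, so the names below are illustrative placeholders only; the 28x28x1 shape and the noise factor of 0.5 come from the article itself:

```python
# Hypothetical hyperparameters (names are placeholders, not from the original post).
IMG_SHAPE = (28, 28, 1)   # MNIST image dimensions with one gray channel
BATCH_SIZE = 128          # assumed mini-batch size
EPOCHS = 50               # the article trains for 50 epochs
NOISE_FACTOR = 0.5        # noise level used later in this part
```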

Now we will visualize some random dataset samples.
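One way to draw a few random samples with matplotlib (the grid size of five is an assumption; the original figure may differ):

```python
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.datasets import mnist

(x_train, _), _ = mnist.load_data()

# Pick a handful of random indices and show the digits in one row.
idx = np.random.choice(len(x_train), size=5, replace=False)
fig, axes = plt.subplots(1, 5, figsize=(10, 2))
for ax, i in zip(axes, idx):
    ax.imshow(x_train[i], cmap="gray")
    ax.axis("off")
plt.show()
```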

Output:

random samples from dataset

As in the previous part, we will generate noisy images using noise_factor = 0.5.
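A common recipe for this step, and presumably what the post uses, is additive Gaussian noise scaled by noise_factor, clipped back into the valid pixel range:

```python
import numpy as np
from tensorflow.keras.datasets import mnist

(x_train, _), (x_test, _) = mnist.load_data()
x_train = np.expand_dims(x_train.astype("float32") / 255.0, -1)
x_test = np.expand_dims(x_test.astype("float32") / 255.0, -1)

noise_factor = 0.5

# Add zero-mean, unit-variance Gaussian noise scaled by noise_factor.
x_train_noisy = x_train + noise_factor * np.random.normal(0.0, 1.0, x_train.shape)
x_test_noisy = x_test + noise_factor * np.random.normal(0.0, 1.0, x_test.shape)

# Clip back into the valid [0, 1] pixel range.
x_train_noisy = np.clip(x_train_noisy, 0.0, 1.0)
x_test_noisy = np.clip(x_test_noisy, 0.0, 1.0)
```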

We can check the noisy images against the original images:
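A sketch of the side-by-side check, with originals on the top row and their noisy counterparts below (the five-column layout is an assumption):

```python
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.datasets import mnist

(x_train, _), _ = mnist.load_data()
x_train = x_train.astype("float32") / 255.0

noise_factor = 0.5
x_train_noisy = np.clip(
    x_train + noise_factor * np.random.normal(size=x_train.shape), 0.0, 1.0
)

# Top row: original digits; bottom row: the same digits with added noise.
fig, axes = plt.subplots(2, 5, figsize=(10, 4))
for i in range(5):
    axes[0, i].imshow(x_train[i], cmap="gray")
    axes[1, i].imshow(x_train_noisy[i], cmap="gray")
    axes[0, i].axis("off")
    axes[1, i].axis("off")
plt.show()
```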

Output:

Original image vs Noisy image

Our model consists of several Conv2D layers and two Conv2DTranspose layers, with one final Conv2D layer as the output. The layer parameters are self-explanatory and easy to follow.
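A sketch of such an architecture; the exact filter counts and kernel sizes are assumptions, but the shape of the model follows the description above: strided Conv2D layers encode, Conv2DTranspose layers upsample back, and a final Conv2D maps to one gray channel.

```python
from tensorflow.keras import layers, models

autoencoder = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    # Encoder: strided Conv2D layers halve the spatial size twice (28 -> 14 -> 7).
    layers.Conv2D(32, (3, 3), strides=2, activation="relu", padding="same"),
    layers.Conv2D(64, (3, 3), strides=2, activation="relu", padding="same"),
    # Decoder: Conv2DTranspose layers upsample back (7 -> 14 -> 28).
    layers.Conv2DTranspose(64, (3, 3), strides=2, activation="relu", padding="same"),
    layers.Conv2DTranspose(32, (3, 3), strides=2, activation="relu", padding="same"),
    # Output: one Conv2D layer maps back to a single gray channel in [0, 1].
    layers.Conv2D(1, (3, 3), activation="sigmoid", padding="same"),
])
autoencoder.summary()
```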

Output:

We will use adam as the optimizer and binary_crossentropy as the loss function, and train for 50 epochs.
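The compile-and-fit step looks like the sketch below. The optimizer, loss, and 50-epoch count come from the article; for a self-contained snippet, a small stand-in model is trained for one epoch on random data, whereas the article trains the full autoencoder on (x_train_noisy, x_train):

```python
import numpy as np
from tensorflow.keras import layers, models

# Stand-in model and data so this snippet runs on its own; the article trains
# the full autoencoder on (x_train_noisy, x_train) for 50 epochs instead.
autoencoder = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, (3, 3), strides=2, activation="relu", padding="same"),
    layers.Conv2DTranspose(32, (3, 3), strides=2, activation="relu", padding="same"),
    layers.Conv2D(1, (3, 3), activation="sigmoid", padding="same"),
])

x_clean = np.random.rand(64, 28, 28, 1).astype("float32")
x_noisy = np.clip(x_clean + 0.5 * np.random.normal(size=x_clean.shape), 0, 1).astype("float32")

# Noisy images are the input; clean images are the reconstruction target.
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
history = autoencoder.fit(x_noisy, x_clean, epochs=1, batch_size=32, verbose=0)
```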

Now we can predict some test samples and visualize them.

Let’s compare our predicted denoised images with original test images for better understanding.
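The prediction-and-comparison pattern can be sketched as below. To keep the snippet standalone it uses an untrained stand-in model on random inputs; in the article, the trained autoencoder predicts on x_test_noisy and is compared against x_test:

```python
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras import layers, models

# Untrained stand-in; the article uses the trained autoencoder instead.
autoencoder = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, (3, 3), strides=2, activation="relu", padding="same"),
    layers.Conv2DTranspose(32, (3, 3), strides=2, activation="relu", padding="same"),
    layers.Conv2D(1, (3, 3), activation="sigmoid", padding="same"),
])

x_test_noisy = np.random.rand(5, 28, 28, 1).astype("float32")
decoded = autoencoder.predict(x_test_noisy, verbose=0)

# Top row: noisy inputs; bottom row: the model's denoised reconstructions.
fig, axes = plt.subplots(2, 5, figsize=(10, 4))
for i in range(5):
    axes[0, i].imshow(x_test_noisy[i].squeeze(), cmap="gray")
    axes[1, i].imshow(decoded[i].squeeze(), cmap="gray")
    axes[0, i].axis("off")
    axes[1, i].axis("off")
plt.show()
```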

Output:

prediction comparison

As you can see, even a simple implementation can produce impressive results. That is the idea of an autoencoder and how we can use it to denoise images. In this part, I demonstrated how Conv2DTranspose can be used for this purpose. I hope this is helpful for your future learning.

All code samples for this part can be found here: Colab Link

Happy coding!

