Autoencoder: Denoise image using UpSampling2D and Conv2DTranspose Layers (Part: 2)

Photo by Bekky Bekks on Unsplash

For better understanding, this post is divided into three parts:

Part 1: GAN, Autoencoders: UpSampling2D and Conv2DTranspose

This is the introductory part, where I discuss some basic terms and processes used in this tutorial. It will help us grasp the concepts and better understand the other parts.

Part 2: Denoising image with Upsampling Layer

This part will demonstrate how we can use the upsampling method to denoise noisy input images. It will be implemented using the notMNIST dataset.

Part 3: Denoising image with Transposed Convolution Layer

This part is similar to the previous one, but I will use transposed convolution for denoising. It will be covered using the famous MNIST dataset.

Let’s start …

Part 2: Denoising image with Upsampling Layer

Dataset and related libraries

# importing libraries
import tensorflow as tf
import tensorflow.keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Conv2DTranspose, MaxPooling2D, UpSampling2D
from tensorflow.keras.constraints import max_norm
import matplotlib.pyplot as plt
import numpy as np
import gzip
%matplotlib inline

We can download the notMNIST dataset directly from its GitHub repository.

# importing dataset from github
# link: (the repository URL was not preserved in the original post;
# replace the <...> placeholders with the raw file URLs)
! wget -O train-images.gz <train-images-url>
! wget -O test-images.gz <test-images-url>
! wget -O train-labels.gz <train-labels-url>
! wget -O test-labels.gz <test-labels-url>

As you can see, all of the downloaded data are in compressed format, in this case .gz. We could extract these images manually and use them in our project, but instead we will define two functions to extract and load images from the compressed dataset: the first extracts images and the second loads labels. We don't need labels for training our model, but we extract them for visualization purposes.

# function for extracting images
def image_data(filename, num_images):
    with gzip.open(filename) as f:
        f.read(16)  # skip the 16-byte header
        buf = f.read(28 * 28 * num_images)
        data = np.frombuffer(buf, dtype=np.uint8).astype(np.float32)
        data = data.reshape(num_images, 28, 28)
    return data

# function for extracting labels
def image_labels(filename, num_images):
    with gzip.open(filename) as f:
        f.read(8)  # skip the 8-byte header
        buf = f.read(num_images)
        labels = np.frombuffer(buf, dtype=np.uint8).astype(np.int64)
    return labels

Now let’s import all train and test images.

# import train and test data
train_data = image_data('train-images.gz', 60000)
test_data = image_data('test-images.gz', 10000)
# import train and test labels
train_labels = image_labels('train-labels.gz', 60000)
test_labels = image_labels('test-labels.gz', 10000)

After loading, we can check the shape of train and test images:

train_data.shape, test_data.shape


((60000, 28, 28), (10000, 28, 28))

So we have 60000 images in our train dataset and 10000 images in our test set. Thanks to our loader functions, they are already converted to NumPy arrays.

img_width, img_height = 28, 28
input_shape = (img_width, img_height, 1)
batch_size = 120
no_epochs = 50
validation_splits = 0.2
max_norm_value = 2.0
noise_factor = 0.5

Here we defined some variables for model-building purposes. They give us flexibility and will help us fine-tune our model and data later. Now we can proceed to the next step.

Exploring Dataset

label_dict = {i: a for i, a in zip(range(10), 'ABCDEFGHIJ')}

Now we can visualize some random characters with their labels:

# visualize ten random characters with their labels
digits = [[train_data[idx], train_labels[idx]] for idx in
          np.random.randint(len(train_data), size=10)]
plt.figure(figsize=(len(digits), 1))
for i, data in enumerate(digits):
    plt.subplot(1, len(digits), i + 1)
    plt.imshow(data[0], cmap='gray')
    plt.title(label_dict[data[1]])
    plt.axis('off')

Random samples from the dataset

This dataset is a collection of single-channel greyscale images. It is better to scale each image to the [0, 1] range, because normalized values are easier to work with. This process is known as "normalization" and is part of the feature engineering process.

# scaling train and test images
train_data = train_data.reshape(-1, 28, 28, 1)
test_data = test_data.reshape(-1, 28, 28, 1)
train_data = train_data / np.max(train_data)
test_data = test_data / np.max(test_data)

Generating Noisy Images

# create noisy images from the dataset
noise_train = train_data + noise_factor * np.random.normal(0, 1, train_data.shape)
noise_test = test_data + noise_factor * np.random.normal(0, 1, test_data.shape)
# keep pixel values inside the [0, 1] range
noise_train = np.clip(noise_train, 0.0, 1.0)
noise_test = np.clip(noise_test, 0.0, 1.0)

Let’s check noisy images against their original images:

# some random images for visualization
fig, ax = plt.subplots(1, 15)
fig.set_size_inches(20, 4)
for i in range(15):
    curr_img = np.reshape(train_data[i], (28, 28))
    curr_lbl = train_labels[i]
    ax[i].imshow(curr_img, cmap='gray')
    ax[i].set_title(f'Label: {label_dict[curr_lbl]}')

fig, ax = plt.subplots(1, 15)
fig.set_size_inches(20, 4)
for i in range(15):
    curr_img = np.reshape(noise_train[i], (28, 28))
    curr_lbl = train_labels[i]
    ax[i].imshow(curr_img, cmap='gray')
    ax[i].set_title(f'Label: {label_dict[curr_lbl]}')


Original image vs Noisy image

They seem noisy enough to train our model on. Now we will define our model.
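To make the noise step concrete, here is a minimal standalone sketch (the flat grey "image" and the seed are made up purely for illustration): additive Gaussian noise scaled by noise_factor, followed by clipping, keeps every pixel inside [0, 1].

```python
import numpy as np

rng = np.random.default_rng(42)
clean = np.full((28, 28), 0.5, dtype=np.float32)  # a flat grey "image"
noise_factor = 0.5

# add zero-mean Gaussian noise scaled by noise_factor
noisy = clean + noise_factor * rng.normal(0, 1, clean.shape)
# clip back into the valid pixel range
noisy = np.clip(noisy, 0.0, 1.0)

print(noisy.min() >= 0.0, noisy.max() <= 1.0)  # True True
```

Without the clipping step, roughly a third of the pixels would fall outside the normalized range the model was trained on.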

Defining Model

# model layers for autoencoder
# (per-layer arguments reconstructed to match the summary below:
#  'same' padding throughout, relu in hidden layers, sigmoid at the output)
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', padding='same', kernel_constraint=max_norm(max_norm_value), input_shape=input_shape))
model.add(MaxPooling2D((2, 2), padding='same'))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same', kernel_constraint=max_norm(max_norm_value)))
model.add(MaxPooling2D((2, 2), padding='same'))
model.add(Conv2D(128, (3, 3), activation='relu', padding='same', kernel_constraint=max_norm(max_norm_value)))
model.add(Conv2D(128, (3, 3), activation='relu', padding='same', kernel_constraint=max_norm(max_norm_value)))
model.add(UpSampling2D((2, 2), interpolation='bilinear'))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same', kernel_constraint=max_norm(max_norm_value)))
model.add(UpSampling2D((2, 2), interpolation='bilinear'))
model.add(Conv2D(1, (3, 3), activation='sigmoid', padding='same'))
model.summary()


Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d_28 (Conv2D)           (None, 28, 28, 32)        320
_________________________________________________________________
max_pooling2d_6 (MaxPooling2 (None, 14, 14, 32)        0
_________________________________________________________________
conv2d_29 (Conv2D)           (None, 14, 14, 64)        18496
_________________________________________________________________
max_pooling2d_7 (MaxPooling2 (None, 7, 7, 64)          0
_________________________________________________________________
conv2d_30 (Conv2D)           (None, 7, 7, 128)         73856
_________________________________________________________________
conv2d_31 (Conv2D)           (None, 7, 7, 128)         147584
_________________________________________________________________
up_sampling2d_9 (UpSampling2 (None, 14, 14, 128)       0
_________________________________________________________________
conv2d_32 (Conv2D)           (None, 14, 14, 64)        73792
_________________________________________________________________
up_sampling2d_10 (UpSampling (None, 28, 28, 64)        0
_________________________________________________________________
conv2d_33 (Conv2D)           (None, 28, 28, 1)         577
=================================================================
Total params: 314,625
Trainable params: 314,625
Non-trainable params: 0
_________________________________________________________________
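Notice that the UpSampling2D layers contribute zero parameters: they double the spatial resolution without any learned weights. Here is a minimal sketch of what such a layer does, using 'nearest' interpolation so the result is easy to predict by eye (the model itself uses 'bilinear', which interpolates between neighbours instead of repeating them):

```python
import numpy as np
import tensorflow as tf

# a single 2x2 single-channel "image", batch dimension included
x = np.array([[1., 2.],
              [3., 4.]], dtype=np.float32).reshape(1, 2, 2, 1)

# nearest-neighbour upsampling repeats each pixel into a 2x2 block
up = tf.keras.layers.UpSampling2D((2, 2), interpolation='nearest')
y = up(x).numpy().reshape(4, 4)
print(y)
# [[1. 1. 2. 2.]
#  [1. 1. 2. 2.]
#  [3. 3. 4. 4.]
#  [3. 3. 4. 4.]]
```

Because nothing is learned here, the convolutions before and after each upsampling layer do all the work of filling in plausible detail.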

The dot plot of this model shows the structure for our model.

Model to DOT plot

We will use adam as the optimizer and binary_crossentropy as the loss function, and train for 50 epochs.

# compiling and fitting model
model.compile(optimizer='adam', loss='binary_crossentropy')
model.fit(noise_train, train_data,
          epochs=no_epochs,
          batch_size=batch_size,
          validation_split=validation_splits)

Prediction and Visualization

# model prediction
fig_samples = noise_test[:10]
fig_original = test_data[:10]
fig_denoise = model.predict(fig_samples)

Let’s compare our predicted denoised images with original test images for better understanding.

# output visualization
for i in range(0, 5):
    noisy_img = noise_test[i]
    original_img = test_data[i]
    denoise_img = fig_denoise[i]

    fig, axes = plt.subplots(1, 3)
    fig.set_size_inches(6, 2.8)
    axes[0].imshow(noisy_img.reshape(28, 28))
    axes[1].imshow(original_img.reshape(28, 28))
    axes[2].imshow(denoise_img.reshape(28, 28))


prediction comparison

The output is pretty impressive in this case: the denoised images are quite similar to the originals. We can fine-tune the model and try it on other datasets as well. This is the basic concept behind the model, and it's up to you how you want to experiment with it.

Hope you got the idea of autoencoders and denoising images. In the next part of the tutorial, we will develop another model using the Conv2DTranspose layer on a different dataset.

All code samples for this part can be found here: Colab Link

🅽🅴🆇🆃 ⫸ Part 3: Denoising image using Transposed Convolution Layer

Happy coding!
