
GURU GOBIND SINGH INDRAPRASTHA UNIVERSITY

MINOR PROJECT
CREATING A FULLY CONVOLUTIONAL NETWORK (FCN) FOR IMAGE SEGMENTATION, FOR APPLICATION IN FIELDS SUCH AS AUTONOMOUS/SELF-DRIVING CARS, MEDICAL IMAGING, ETC.
MIDTERM REPORT
Submitted to:
Prof. Anjana Gosain
Professor
(USICT, GGSIPU)
Submitted by:
Sanchit Rustagi
60116401515
B. Tech. (IT) 7th Semester
Contents:-
Title
Contents
Abbreviations
Aim
Purpose/Problem Statement
Introduction
Artificial Neural Networks
Convolutional Neural Networks
Image Classification
Image Segmentation
Fully Convolutional Networks
Hardware and Software Requirements:-
Hardware Requirements
Software Requirements
Work Done/Analysis
Future Work
References
Abbreviations:-
ANN: Artificial Neural Networks
CNN: Convolutional Neural Networks
FCN: Fully Convolutional Networks
ConvoNet: Convolutional Network
Conv: Convolutional
FC: Fully Connected
Deconv: Deconvolutional

Aim:-
To create a Fully Convolutional Network (FCN), for Image Segmentation, so that it can be applied in fields such as Autonomous/Self driving cars, Medical Imaging, etc.


Purpose/Problem Statement:-
To create a Fully Convolutional Network (FCN) by transforming a Convolutional Neural Network (CNN) pre-trained for image classification, so that it can instead perform image segmentation. Such a network can then be applied in fields such as autonomous vehicles (through techniques such as pedestrian detection and drivable-road detection) and medical imaging.

In a rapidly advancing technological world, there is a need for automated, machine-guided processes so that tasks can be performed more precisely, accurately, and economically, with as little damage as possible to property and lives.

Introduction:-
Artificial Neural Networks:-
Artificial neural networks (ANN) or connectionist systems are computing systems vaguely inspired by the biological neural networks that constitute animal brains. Such systems “learn” to perform tasks by considering examples, generally without being programmed with any task-specific rules.

An ANN is based on a collection of connected units or nodes called artificial neurons which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit a signal from one artificial neuron to another. An artificial neuron that receives a signal can process it and then signal additional artificial neurons connected to it.

In common ANN implementations, the signal at a connection between artificial neurons is a real number, and the output of each artificial neuron is computed by some non-linear function of the sum of its inputs. The connections between artificial neurons are called ‘edges’. Artificial neurons and edges typically have a weight that adjusts as learning proceeds. The weight increases or decreases the strength of the signal at a connection. Artificial neurons may have a threshold such that the signal is only sent if the aggregate signal crosses that threshold. Typically, artificial neurons are aggregated into layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first layer (the input layer), to the last layer (the output layer), possibly after traversing the layers multiple times.
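To make the weighted-sum idea concrete, the following is a minimal Python sketch of a single artificial neuron; the input values, weights, and the choice of sigmoid activation are purely illustrative and not part of the project code.

# A single artificial neuron: a weighted sum of inputs plus a bias,
# passed through a non-linear activation function (here, the sigmoid).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neuron_output(inputs, weights, bias):
    z = np.dot(weights, inputs) + bias  # aggregate the incoming signals
    return sigmoid(z)                   # non-linear output signal

x = np.array([0.5, -1.2, 3.0])  # example input signals
w = np.array([0.4, 0.7, -0.2])  # connection weights (adjusted during learning)
print(neuron_output(x, w, bias=0.1))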

Convolutional Neural Networks:-
Convolutional Neural Networks are very similar to ordinary Neural Networks. The difference is that ConvNet architectures make the explicit assumption that the inputs are images, which allows certain properties to be encoded into the architecture. These make the forward function more efficient to implement and vastly reduce the number of parameters in the network. They are mainly used for image classification, explained next.

Three main types of layers are used to build CNN architectures: Convolutional Layer, Pooling Layer, and Fully-Connected Layer (exactly as seen in regular Neural Networks). These layers are stacked to form a full CNN architecture.

Example architecture overview:
INPUT 32x32x3 will hold the raw pixel values of the image, in this case an image of width 32, height 32, and with three color channels R,G,B.

CONV layer will compute the output of neurons that are connected to local regions in the input, each computing a dot product between its weights and the small region it is connected to in the input volume. This may result in a volume such as 32x32x12 if we decide to use 12 filters.

RELU layer will apply an elementwise activation function, such as the max(0,x) thresholding at zero. This leaves the size of the volume unchanged (32x32x12).

POOL layer will perform a downsampling operation along the spatial dimensions (width, height), resulting in a volume such as 16x16x12.

FC (i.e. fully-connected) layer will compute the class scores, resulting in a volume of size 1x1x10, where each of the 10 numbers corresponds to a class score. As with ordinary Neural Networks and as the name implies, each neuron in this layer will be connected to all the numbers in the previous volume.
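Putting the layers above together, here is a hedged Keras sketch of this example architecture; the layer sizes follow the text, while details such as the 3×3 kernel and 'same' padding are illustrative assumptions.

# Example architecture: INPUT -> CONV -> RELU -> POOL -> FC
from keras.models import Sequential
from keras.layers import Convolution2D, MaxPooling2D, Flatten, Dense

model = Sequential()
# CONV + RELU: 12 filters over the 32x32x3 input -> 32x32x12 volume
model.add(Convolution2D(12, (3, 3), padding = 'same', activation = 'relu',
                        input_shape = (32, 32, 3)))
# POOL: downsample the spatial dimensions -> 16x16x12
model.add(MaxPooling2D(pool_size = (2, 2)))
# FC: flatten and compute the 10 class scores
model.add(Flatten())
model.add(Dense(units = 10, activation = 'softmax'))
model.summary()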

Image Classification:-
The intent of the classification process is to categorize all pixels in a digital image into one of several land cover classes, or “themes”. This categorized data may then be used to produce thematic maps of the land cover present in an image. Normally, multispectral data are used to perform the classification and, indeed, the spectral pattern present within the data for each pixel is used as the numerical basis for categorization. The objective of image classification is to identify and portray, as a unique gray level (or color), the features occurring in an image in terms of the object or type of land cover these features actually represent on the ground.

There are two types of image classification: Supervised and Unsupervised. Supervised classification uses the spectral signatures obtained from training samples to classify an image. Unsupervised classification finds spectral classes (or clusters) in a multiband image without the analyst’s intervention.
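As an illustration of the unsupervised case, the following sketch clusters the pixels of a multiband image into spectral classes with k-means; scikit-learn is assumed to be available, and the image is randomly generated for demonstration only.

# Unsupervised classification: group pixels into spectral clusters,
# with no training samples involved.
import numpy as np
from sklearn.cluster import KMeans

image = np.random.rand(64, 64, 3)         # a 64x64 image with 3 bands
pixels = image.reshape(-1, 3)             # one row per pixel
kmeans = KMeans(n_clusters = 5, n_init = 10).fit(pixels)
classes = kmeans.labels_.reshape(64, 64)  # a spectral class for every pixel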

Image Segmentation:-
Image segmentation is the process of partitioning a digital image into multiple segments (sets of pixels, also known as super-pixels). The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyse. Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images. More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain characteristics. Segmentation partitions an image into distinct regions, each containing pixels with similar attributes. To be meaningful and useful for image analysis and interpretation, the regions should strongly relate to depicted objects or features of interest.

The result of image segmentation is a set of segments that collectively cover the entire image, or a set of contours extracted from the image. Each of the pixels in a region is similar with respect to some characteristic or computed property, such as colour, intensity, or texture. Adjacent regions differ significantly with respect to the same characteristic(s).

The segmentation task differs from the classification task in that it requires predicting a class for each pixel of the input image, instead of a single class for the whole input. Classification needs to understand what is in the input (namely, the context). Segmentation, however, needs to recover not only what is in the input, but also where it is.
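The per-pixel nature of segmentation can be shown with a deliberately simple sketch: every pixel of a grayscale image receives a label from a fixed intensity threshold. This hand-picked rule only stands in for the learned mapping a segmentation network would provide.

# Per-pixel labelling: 0 = background, 1 = object, decided by a threshold.
import numpy as np

image = np.random.randint(0, 256, size = (64, 64))  # illustrative image
labels = (image > 128).astype(np.uint8)             # one label per pixel
print(labels.shape)  # same spatial size as the input: (64, 64)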

Fully Convolutional Networks:-
A Fully Convolutional Network (FCN) is a normal CNN where the last fully connected layer is substituted by another convolution layer with a large “receptive field”. The idea is to capture the global context of the scene (telling us what we have in the image and also giving a very rough idea of the locations of things). It is important to remember that when we convert our last fully connected (FC) layer to a convolutional layer, we gain some form of localization if we look at where we have more activations. The idea is that if we choose our new last conv layer to be big enough, we will have this localization effect scaled up to our input image size.

FCNs owe their name to their architecture, which is built only from locally connected layers, such as convolution, pooling, and upsampling. Note that no dense layer is used in this kind of architecture. This reduces the number of parameters and the computation time. Also, the network can work regardless of the original image size, without requiring any fixed number of units at any stage, given that all connections are local. To obtain a segmentation map (output), segmentation networks usually have two parts:
Downsampling path: captures semantic/contextual information
Upsampling path: recovers spatial information
The downsampling path is used to extract and interpret the context (what), while the upsampling path is used to enable precise localization (where). Furthermore, to fully recover the fine-grained spatial information lost in the pooling or downsampling layers, we often use skip connections.

A skip connection is a connection that bypasses at least one layer. Here, it is often used to transfer local information by concatenating or summing feature maps from the downsampling path with feature maps from the upsampling path. Merging features from various resolution levels helps combine context information with spatial information.
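A hedged sketch of such a skip connection, using Keras's functional API with purely illustrative layer sizes, is given below; feature maps from the downsampling path are upsampled and then concatenated with same-resolution feature maps.

# Skip connection: merge upsampled features with same-resolution features
# from the downsampling path by concatenation.
from keras.models import Model
from keras.layers import Input, Conv2D, MaxPooling2D, Conv2DTranspose, concatenate

inp = Input(shape = (64, 64, 3))
down = Conv2D(16, (3, 3), padding = 'same', activation = 'relu')(inp)  # 64x64x16
pooled = MaxPooling2D((2, 2))(down)                                    # 32x32x16
up = Conv2DTranspose(16, (2, 2), strides = (2, 2))(pooled)             # 64x64x16 again
merged = concatenate([up, down])                          # skip connection -> 64x64x32
out = Conv2D(1, (1, 1), activation = 'sigmoid')(merged)   # per-pixel score
model = Model(inp, out)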

Here is how we convert a normal CNN used for classification, to a FCN used for segmentation:
We start with a normal CNN for classification
The second step is to convert all the FC layers to 1×1 convolution layers; we don’t even need to change the weights at this point. (This is already a fully convolutional neural network.) The nice property of FCN networks is that we can now use any image size.

The last step is to use a “deconv or transposed convolution” layer to recover the activation positions to something meaningful related to the image size. Imagine that we’re just scaling up the activation size to the same image size. This last “upsampling” layer also has some learnable parameters.
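Putting the three steps together, here is a minimal Keras sketch of the conversion; the filter counts and window sizes are illustrative assumptions, not the project's final architecture.

# Step 1: a normal convolutional "downsampling" path; note that no fixed
# image size is required once the network is fully convolutional.
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Conv2DTranspose

fcn = Sequential()
fcn.add(Conv2D(32, (3, 3), padding = 'same', activation = 'relu',
               input_shape = (None, None, 3)))
fcn.add(MaxPooling2D(pool_size = (2, 2)))
# Step 2: the FC layers become 1x1 convolutions, preserving spatial layout
fcn.add(Conv2D(128, (1, 1), activation = 'relu'))  # was Dense(128)
# Step 3: a transposed convolution ("deconv") upsamples the activations
# back towards the input resolution; its weights are learnable
fcn.add(Conv2DTranspose(64, (2, 2), strides = (2, 2)))
fcn.add(Conv2D(1, (1, 1), activation = 'sigmoid'))  # per-pixel class score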

Hardware and Software Requirements:-
Hardware Requirements:-
There are no hardware constraints on this project. Any generic hardware can run the program; only the speed, and hence the estimated running time of the code, will be affected.

Software Requirements:-
Software components used are as follows:
Operating System: Windows 10
Programming Language: Python 3
Python Distribution: Anaconda
Integrated Development Environment (IDE): Spyder
Libraries/Dependencies:
Keras
Tensorflow (Backend)
Theano (Backend, optional)
SciPy
NumPy
Data Sets: Asirra Cats and Dogs dataset (Kaggle) (10,000 images; 8,000 in the training set, 2,000 in the test/validation set).

Work Done/Analysis:-
A CNN is created using the Keras library, with Tensorflow as the backend.

The CNN is initialised using the Sequential model imported from keras.models.

The convolutional layers and the pooling layers are added next. The convolutional layers apply the convolution operation to the input image, while the pooling layers apply max pooling, a technique that reduces the dimensions of an image by taking the maximum pixel value of each grid cell; it also helps to reduce overfitting and makes the model more generic. The number of layers depends on the iteration:
The first iteration had just a single convolutional 2D layer, with 32 filters, a 3×3 convolutional window, and relu (Rectified Linear Unit) activation function. Only a single pooling layer is present with a 2×2 pooling window size.

The second iteration consists of 2 convolutional 2D layers, each having 32 filters, a 3×3 convolutional window, and relu activation function. 2 pooling layers are present, one after each convolutional layer with a 2×2 pooling window size.

The third iteration consists of 3 convolutional 2D layers, with the first two having 32 filters, a 3×3 convolutional window, and relu activation function, and the third having 64 filters, 3×3 convolutional window, and relu activation function. There are 3 pooling layers, one after each convolutional layer, with 2×2 window size.

A flattening layer is then added to flatten the output of the final pooling layer into a large single vector. This doesn’t affect the batch size.

The fully connected part is then made up of a hidden neuron layer, which takes the output of the flattening layer as its input and passes it on to the output node. The hidden layer comprises 128 nodes and uses relu as its activation function. There is only one final output node, which uses the sigmoid function, as it has to produce the probability of the outcome and give a Boolean output classifying the image as one of the two classes.
The model is then compiled with the optimizer set to ‘adam’, the loss to ‘binary_crossentropy’, and accuracy as the metric.
As there are only 10,000 images in the data set, we take the help of ImageDataGenerator, imported from keras.preprocessing.image, to overcome the potential problem of overfitting. The ImageDataGenerator applies transformations to the data set so as to produce a larger number of images than we originally have.

For the training set, we apply the following transformations:
Rescale by a factor of 1./255
Shear range set to 0.2
Zoom range set to 0.2
Horizontal flip set to true
For the validation set, we just apply a rescaling by a factor of 1./255.

The ImageDataGenerator is now called and fitted on the training set with a target size of 64×64 (the size to which all input images will be resized). The batch size is set to 32 and the class mode to binary, as a binary output is expected.

It is then called and fitted on the test set with the same specifications as the training set.

Finally, the classifier is fit on the generator output, with the number of epochs set to 25, and the training set, its steps per epoch, the validation set, and its validation steps specified respectively.

The first iteration of the CNN provided an accuracy of 75.10% on the validation data set.

The second iteration of the CNN had an improvement in the accuracy provided on the validation set by 8.41%, reaching an accuracy of 83.51%.

The third iteration of the CNN had an improvement in the accuracy provided on the validation set by 2.44%, finally reaching an accuracy of 85.95%.

The code and output for each iteration are given below:
First Iteration:-
Code:
# Convolutional Neural Network

# 1. Building the CNN

# Importing the Keras libraries and packages
from keras.models import Sequential
from keras.layers import Convolution2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense

# Initialising the CNN
classifier = Sequential()

# Convolution
classifier.add(Convolution2D(32, (3, 3), activation = 'relu', input_shape = (64, 64, 3)))

# Pooling
classifier.add(MaxPooling2D(pool_size = (2, 2)))

# Flattening
classifier.add(Flatten())

# Full connection
classifier.add(Dense(activation = 'relu', units = 128))
classifier.add(Dense(activation = 'sigmoid', units = 1))

# Compiling the CNN
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])

# 2. Fitting the CNN to the images
from keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(rescale = 1./255,
                                   shear_range = 0.2,
                                   zoom_range = 0.2,
                                   horizontal_flip = True)
test_datagen = ImageDataGenerator(rescale = 1./255)

training_set = train_datagen.flow_from_directory('C:/Users/Rusty/Desktop/minor project/Convolutional_Neural_Networks/dataset/training_set',
                                                 target_size = (64, 64),
                                                 batch_size = 32,
                                                 class_mode = 'binary')
test_set = test_datagen.flow_from_directory('C:/Users/Rusty/Desktop/minor project/Convolutional_Neural_Networks/dataset/test_set',
                                            target_size = (64, 64),
                                            batch_size = 32,
                                            class_mode = 'binary')

# 8,000 training images and 2,000 validation images, with a batch size of 32
classifier.fit_generator(training_set,
                         steps_per_epoch = 8000 // 32,
                         epochs = 25,
                         validation_data = test_set,
                         validation_steps = 2000 // 32)
Second Iteration:-
Code:
# Convolutional Neural Network

# 1. Building the CNN

# Importing the Keras libraries and packages
from keras.models import Sequential
from keras.layers import Convolution2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense

# Initialising the CNN
classifier = Sequential()

# Convolution
classifier.add(Convolution2D(32, (3, 3), activation = 'relu', input_shape = (64, 64, 3)))

# Pooling
classifier.add(MaxPooling2D(pool_size = (2, 2)))

# Adding a second convolutional layer
classifier.add(Convolution2D(32, (3, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))

# Flattening
classifier.add(Flatten())

# Full connection
classifier.add(Dense(activation = 'relu', units = 128))
classifier.add(Dense(activation = 'sigmoid', units = 1))

# Compiling the CNN
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])

# 2. Fitting the CNN to the images
from keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(rescale = 1./255,
                                   shear_range = 0.2,
                                   zoom_range = 0.2,
                                   horizontal_flip = True)
test_datagen = ImageDataGenerator(rescale = 1./255)

training_set = train_datagen.flow_from_directory('C:/Users/Rusty/Desktop/minor project/Convolutional_Neural_Networks/dataset/training_set',
                                                 target_size = (64, 64),
                                                 batch_size = 32,
                                                 class_mode = 'binary')
test_set = test_datagen.flow_from_directory('C:/Users/Rusty/Desktop/minor project/Convolutional_Neural_Networks/dataset/test_set',
                                            target_size = (64, 64),
                                            batch_size = 32,
                                            class_mode = 'binary')

# 8,000 training images and 2,000 validation images, with a batch size of 32
classifier.fit_generator(training_set,
                         steps_per_epoch = 8000 // 32,
                         epochs = 25,
                         validation_data = test_set,
                         validation_steps = 2000 // 32)
Output:
Epoch 1/25
250/250 [==============================] - 535s 2s/step - loss: 0.6838 - acc: 0.5437 - val_loss: 0.6604 - val_acc: 0.6151
Epoch 2/25
250/250 [==============================] - 429s 2s/step - loss: 0.6125 - acc: 0.6655 - val_loss: 0.5573 - val_acc: 0.7209
Epoch 3/25
250/250 [==============================] - 432s 2s/step - loss: 0.5536 - acc: 0.7160 - val_loss: 0.5361 - val_acc: 0.7360
Epoch 4/25
250/250 [==============================] - 430s 2s/step - loss: 0.5307 - acc: 0.7364 - val_loss: 0.5463 - val_acc: 0.7214
Epoch 5/25
250/250 [==============================] - 419s 2s/step - loss: 0.5026 - acc: 0.7512 - val_loss: 0.5253 - val_acc: 0.7315
Epoch 6/25
250/250 [==============================] - 415s 2s/step - loss: 0.4781 - acc: 0.7692 - val_loss: 0.4641 - val_acc: 0.7834
Epoch 7/25
250/250 [==============================] - 414s 2s/step - loss: 0.4670 - acc: 0.7755 - val_loss: 0.4608 - val_acc: 0.7836
Epoch 8/25
250/250 [==============================] - 413s 2s/step - loss: 0.4505 - acc: 0.7869 - val_loss: 0.4688 - val_acc: 0.7801
Epoch 9/25
250/250 [==============================] - 411s 2s/step - loss: 0.4409 - acc: 0.7955 - val_loss: 0.4624 - val_acc: 0.7815
Epoch 10/25
250/250 [==============================] - 412s 2s/step - loss: 0.4366 - acc: 0.7931 - val_loss: 0.4516 - val_acc: 0.7960
Epoch 11/25
250/250 [==============================] - 411s 2s/step - loss: 0.4212 - acc: 0.8026 - val_loss: 0.4247 - val_acc: 0.8094
Epoch 12/25
250/250 [==============================] - 411s 2s/step - loss: 0.4091 - acc: 0.8123 - val_loss: 0.4341 - val_acc: 0.8048
Epoch 13/25
250/250 [==============================] - 410s 2s/step - loss: 0.4018 - acc: 0.8164 - val_loss: 0.4336 - val_acc: 0.7995
Epoch 14/25
250/250 [==============================] - 410s 2s/step - loss: 0.3930 - acc: 0.8189 - val_loss: 0.4273 - val_acc: 0.8095
Epoch 15/25
250/250 [==============================] - 428s 2s/step - loss: 0.3846 - acc: 0.8270 - val_loss: 0.4265 - val_acc: 0.8145
Epoch 16/25
250/250 [==============================] - 415s 2s/step - loss: 0.3736 - acc: 0.8267 - val_loss: 0.4175 - val_acc: 0.8185
Epoch 17/25
250/250 [==============================] - 413s 2s/step - loss: 0.3720 - acc: 0.8290 - val_loss: 0.4111 - val_acc: 0.8259
Epoch 18/25
250/250 [==============================] - 411s 2s/step - loss: 0.3540 - acc: 0.8389 - val_loss: 0.4202 - val_acc: 0.8167
Epoch 19/25
250/250 [==============================] - 456s 2s/step - loss: 0.3500 - acc: 0.8430 - val_loss: 0.4109 - val_acc: 0.8285
Epoch 20/25
250/250 [==============================] - 447s 2s/step - loss: 0.3504 - acc: 0.8389 - val_loss: 0.4028 - val_acc: 0.8291
Epoch 21/25
250/250 [==============================] - 414s 2s/step - loss: 0.3352 - acc: 0.8544 - val_loss: 0.4052 - val_acc: 0.8351
Epoch 22/25
250/250 [==============================] - 417s 2s/step - loss: 0.3325 - acc: 0.8505 - val_loss: 0.4052 - val_acc: 0.8264
Epoch 23/25
250/250 [==============================] - 432s 2s/step - loss: 0.3281 - acc: 0.8543 - val_loss: 0.4149 - val_acc: 0.8156
Epoch 24/25
250/250 [==============================] - 432s 2s/step - loss: 0.3140 - acc: 0.8596 - val_loss: 0.4531 - val_acc: 0.8135
Epoch 25/25
250/250 [==============================] - 424s 2s/step - loss: 0.3004 - acc: 0.8669 - val_loss: 0.4727 - val_acc: 0.8037
Third Iteration:-
Code:
# Convolutional Neural Network

# 1. Building the CNN

# Importing the Keras libraries and packages
from keras.models import Sequential
from keras.layers import Convolution2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense

# Initialising the CNN
classifier = Sequential()

# Convolution
classifier.add(Convolution2D(32, (3, 3), activation = 'relu', input_shape = (64, 64, 3)))

# Pooling
classifier.add(MaxPooling2D(pool_size = (2, 2)))

# Adding a second convolutional layer
classifier.add(Convolution2D(32, (3, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))

# Adding a third convolutional layer
classifier.add(Convolution2D(64, (3, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))

# Flattening
classifier.add(Flatten())

# Full connection
classifier.add(Dense(activation = 'relu', units = 128))
classifier.add(Dense(activation = 'sigmoid', units = 1))

# Compiling the CNN
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])

# 2. Fitting the CNN to the images
from keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(rescale = 1./255,
                                   shear_range = 0.2,
                                   zoom_range = 0.2,
                                   horizontal_flip = True)
test_datagen = ImageDataGenerator(rescale = 1./255)

training_set = train_datagen.flow_from_directory('C:/Users/Rusty/Desktop/minor project/Convolutional_Neural_Networks/dataset/training_set',
                                                 target_size = (64, 64),
                                                 batch_size = 32,
                                                 class_mode = 'binary')
test_set = test_datagen.flow_from_directory('C:/Users/Rusty/Desktop/minor project/Convolutional_Neural_Networks/dataset/test_set',
                                            target_size = (64, 64),
                                            batch_size = 32,
                                            class_mode = 'binary')

# 8,000 training images and 2,000 validation images, with a batch size of 32
classifier.fit_generator(training_set,
                         steps_per_epoch = 8000 // 32,
                         epochs = 25,
                         validation_data = test_set,
                         validation_steps = 2000 // 32)
Output:
Epoch 1/25
250/250 [==============================] - 448s 2s/step - loss: 0.6787 - acc: 0.5591 - val_loss: 0.6122 - val_acc: 0.6715
Epoch 2/25
250/250 [==============================] - 436s 2s/step - loss: 0.5961 - acc: 0.6789 - val_loss: 0.5616 - val_acc: 0.7141
Epoch 3/25
250/250 [==============================] - 442s 2s/step - loss: 0.5339 - acc: 0.7329 - val_loss: 0.5277 - val_acc: 0.7353
Epoch 4/25
250/250 [==============================] - 435s 2s/step - loss: 0.4951 - acc: 0.7609 - val_loss: 0.4530 - val_acc: 0.7909
Epoch 5/25
250/250 [==============================] - 444s 2s/step - loss: 0.4773 - acc: 0.7710 - val_loss: 0.4535 - val_acc: 0.7910
Epoch 6/25
250/250 [==============================] - 434s 2s/step - loss: 0.4469 - acc: 0.7894 - val_loss: 0.4142 - val_acc: 0.8128
Epoch 7/25
250/250 [==============================] - 427s 2s/step - loss: 0.4264 - acc: 0.7994 - val_loss: 0.4108 - val_acc: 0.8208
Epoch 8/25
250/250 [==============================] - 423s 2s/step - loss: 0.4178 - acc: 0.8081 - val_loss: 0.3787 - val_acc: 0.8293
Epoch 9/25
250/250 [==============================] - 426s 2s/step - loss: 0.4005 - acc: 0.8115 - val_loss: 0.4063 - val_acc: 0.8156
Epoch 10/25
250/250 [==============================] - 429s 2s/step - loss: 0.3857 - acc: 0.8251 - val_loss: 0.3885 - val_acc: 0.8330
Epoch 11/25
250/250 [==============================] - 441s 2s/step - loss: 0.3691 - acc: 0.8364 - val_loss: 0.3798 - val_acc: 0.8300
Epoch 12/25
250/250 [==============================] - 456s 2s/step - loss: 0.3533 - acc: 0.8440 - val_loss: 0.3567 - val_acc: 0.8473
Epoch 13/25
250/250 [==============================] - 471s 2s/step - loss: 0.3511 - acc: 0.8423 - val_loss: 0.3773 - val_acc: 0.8300
Epoch 14/25
250/250 [==============================] - 444s 2s/step - loss: 0.3465 - acc: 0.8458 - val_loss: 0.3653 - val_acc: 0.8395
Epoch 15/25
250/250 [==============================] - 434s 2s/step - loss: 0.3309 - acc: 0.8603 - val_loss: 0.3483 - val_acc: 0.8460
Epoch 16/25
250/250 [==============================] - 434s 2s/step - loss: 0.3146 - acc: 0.8615 - val_loss: 0.3833 - val_acc: 0.8416
Epoch 17/25
250/250 [==============================] - 434s 2s/step - loss: 0.3072 - acc: 0.8624 - val_loss: 0.3533 - val_acc: 0.8524
Epoch 18/25
250/250 [==============================] - 438s 2s/step - loss: 0.3002 - acc: 0.8666 - val_loss: 0.3479 - val_acc: 0.8500
Epoch 19/25
250/250 [==============================] - 429s 2s/step - loss: 0.2959 - acc: 0.8685 - val_loss: 0.3521 - val_acc: 0.8584
Epoch 20/25
250/250 [==============================] - 439s 2s/step - loss: 0.2840 - acc: 0.8736 - val_loss: 0.3788 - val_acc: 0.8390
Epoch 21/25
250/250 [==============================] - 437s 2s/step - loss: 0.2819 - acc: 0.8764 - val_loss: 0.3551 - val_acc: 0.8530
Epoch 22/25
250/250 [==============================] - 428s 2s/step - loss: 0.2633 - acc: 0.8851 - val_loss: 0.3913 - val_acc: 0.8389
Epoch 23/25
250/250 [==============================] - 429s 2s/step - loss: 0.2607 - acc: 0.8884 - val_loss: 0.3395 - val_acc: 0.8586
Epoch 24/25
250/250 [==============================] - 440s 2s/step - loss: 0.2541 - acc: 0.8940 - val_loss: 0.3824 - val_acc: 0.8398
Epoch 25/25
250/250 [==============================] - 417s 2s/step - loss: 0.2475 - acc: 0.8928 - val_loss: 0.3313 - val_acc: 0.8595
Future Work:-
The CNN thus created will first be refined by trying several combinations of layers and input parameters, to improve its accuracy.

It will then be ready for conversion into an FCN by changing the Fully Connected/Dense layers into convolutional layers with a larger receptive field.

The data set will then be used as the input for the process of image segmentation.

Finally, the FCN will be ready to be used for applications such as autonomous vehicles, medical imaging, etc.

References:-
Image Segmentation. https://leonardoaraujosantos.gitbooks.io/artificial-inteligence/content/image_segmentation.html
Image segmentation. https://en.wikipedia.org/wiki/Image_segmentation#Applications
Artificial neural network. https://en.wikipedia.org/wiki/Artificial_neural_network
Image Classification. http://www.sc.chula.ac.th/courseware/2309507/Lecture/remote18.htm
What is image classification? http://desktop.arcgis.com/en/arcmap/latest/extensions/spatial-analyst/image-classification/what-is-image-classification-.htm
Abdellatif Abdelfattah. (2017, July 28). Image Classification using Deep Neural Networks - A beginner friendly approach using TensorFlow. https://medium.com/@tifa2up/image-classification-using-deep-neural-networks-a-beginner-friendly-approach-using-tensorflow-94b0a090ccd4
CS231n Convolutional Neural Networks for Visual Recognition. http://cs231n.github.io/convolutional-networks/
Image Segmentation. https://www.cs.auckland.ac.nz/courses/compsci773s1c/lectures/ImageProcessing-html/topic3.htm
Fully Convolutional Networks (FCN) for 2D segmentation. http://deeplearning.net/tutorial/fcn_2D_segm.html
Keras Documentation. https://keras.io/
