
SUPERVISOR: Dr. Giuseppe Brandi
CANDIDATE: Camilla Sarang Rettura

ACADEMIC YEAR 2016-2017

Table of Contents

Table of Contents
Acknowledgements
List of Figures
List of Tables
(Abstract)
Introduction
Part I – Theoretical Framework
  2.1.
Part II – Experimental Framework
  3.1. Exploratory Data Analysis
  3.2. Data Preprocessing
  3.3. Splitting
  3.4. Building the CNN
Results
Conclusion
References

Introduction

The growing adoption of machine learning methods and tools is already evident in significant real-world applications, continuously revolutionizing processes that many of us rely on in our everyday lives (Watkins, 2016). This arguably quasi-mainstream use of machine learning has also been spurred by the accelerated growth of 'big data' and the need to process, analyze, and extract insight from it in a timely and efficient manner (Rayner, 2016: abstract).

One of the leading trends of the last decade in machine learning has been the rise of deep learning methods, which tackle tasks through the implementation of artificial neural networks. Deep learning algorithms are inspired by the human neocortex, in which learning occurs as sensory information passes through an intricate hierarchy of layers and is gradually assimilated through the recognition of patterns and recurrences (Arel, Rose, Karnowski, 2010: p.13).
The term is best explained by Najafabadi et al.: "deep learning algorithms", unlike traditional machine learning algorithms, "extract high-level, complex abstractions as data representations through a hierarchical learning process" whereby these "complex abstractions are learnt at a given level based on relatively simpler abstractions formulated in the preceding level in the hierarchy" (Najafabadi, Villanustre, Khoshgoftaar, Seliya, Wald, Muharemagic, 2015: p.1). The application of deep learning is pervasive in almost all modern industries, from consumer electronics, such as the rise of "smart" assistants ranging from Amazon's Alexa to Google Assistant, to breakthroughs in medical research, including disease identification and personalized medicine (NVIDIA, 2016) (Lemley, Bazrafkan, Corcoran, 2017). One of the many reasons for the pervasiveness of deep learning is its ability to process large amounts of unstructured data such as images, audio, and text. Image classification, also known as image recognition, with deep neural networks has been an important driver of the significant advances in the field of computer vision, permitting researchers to make progress on projects such as self-driving cars and facial recognition software (QUOTE). Whereas humans have little to no trouble distinguishing the different objects in their surrounding environment, this task is far more difficult for machines.
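One way to appreciate why the task is harder for a machine is to remember that, to a computer, an image is nothing more than a grid of numbers. The minimal sketch below (an illustration assuming NumPy; the image is generated synthetically rather than loaded from a dataset) shows that a small RGB image is simply a three-dimensional array of pixel intensities, from which the machine must somehow infer a high-level concept such as "dog":

```python
import numpy as np

# A tiny 4x4 RGB "image": to the machine, just a 4x4x3 grid of intensities.
image = np.zeros((4, 4, 3), dtype=np.uint8)
image[:, :2] = [255, 0, 0]   # left half of the image is red
image[:, 2:] = [0, 0, 255]   # right half of the image is blue

print(image.shape)   # (4, 4, 3): height, width, colour channels
print(image[0, 0])   # a single pixel is just three numbers: 255, 0, 0
```

A human sees "red next to blue" at a glance; the machine starts from 48 raw numbers and must learn any higher-level structure.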
The purpose of this thesis is to provide the reader with a comprehensive albeit simplified explanation of the processes behind image classification, highlighted with a practical experiment outlining a potential method for the classification of dog breeds from the Stanford Dogs dataset (Khosla, Jayadevaprakash, Yao and Fei-Fei, 2011). The author will initially … TBC

Part I – Theoretical Framework

MCP neuron and Perceptron Learning Rule

In 1950, Alan Turing proposed several criteria to assess whether a machine could be deemed intelligent. The formal conception of deep artificial neural networks, or rather, the modelling of algorithms inspired by human neurons, can be attributed to the work of McCulloch and Pitts in 1943. Neurons receive stimuli. No discourse on deep learning can commence without mentioning the work of McCulloch and Pitts (1943), whereby the authors …. In 19–, Frank Rosenblatt's perceptron learning rule was published. The revival of neural networks after the so-called "AI winter" of the 1970s was spurred by the work of —-. However, the limited computing power available at the time constrained the results.

CNN and RNN
From single-layer to multi-layer neural networks
Convolutional neural networks
What is image classification? Human and computer parallelism
Input and output
What is a convolutional neural network? (Hubel and Wiesel 1962)
Structure of a CNN
Layers and math
Training a CNN (backpropagation)
Hinton's argument against convolutional neural networks (frankensteins)

Part II – Experimental Framework

2.1. Exploratory Data Analysis

The dataset utilized is the Stanford Dogs dataset, a subset of the ImageNet dataset (Khosla, Jayadevaprakash, Yao and Fei-Fei, 2011). The training and test sets contain 10,222 and 10,357 images, respectively. The figure below tabulates the … Upon initial inspection, it was also noted that the images come in different shapes and sizes.
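As an illustration of how such an inspection and standardization might be carried out, the sketch below (a hypothetical example assuming Pillow and NumPy are available; small synthetic in-memory images stand in for the actual dataset files) tallies the distinct image sizes and then resizes every image to one common shape:

```python
from collections import Counter

import numpy as np
from PIL import Image

# Synthetic stand-ins for dataset images, which come in assorted sizes.
images = [Image.new("RGB", size) for size in [(500, 375), (333, 500), (500, 375)]]

# Tally the distinct (width, height) combinations, as one might during EDA.
size_counts = Counter(img.size for img in images)
print(size_counts)  # e.g. Counter({(500, 375): 2, (333, 500): 1})

# Standardize: resize every image to one common shape before further processing.
TARGET = (224, 224)  # a common input size for convolutional networks
standardized = np.stack([np.asarray(img.resize(TARGET)) for img in images])
print(standardized.shape)  # (3, 224, 224, 3): images, height, width, channels
```

Once all images share one shape, they can be stacked into a single array and fed to the network in batches.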
Because the author employs convolutional neural networks, this problem is mitigated by their translation invariance. Nevertheless, for the sake of simplicity, the images can be standardized for ease of processing.

2.2. Data Preprocessing

Results

Conclusion

References

Watkins, C., Machine Learning Is Everywhere: Netflix, Personalized Medicine, and Fraud Prevention, Udacity, 2016
Rayner, A., The rise of machine learning for big data analytics, Science in Information Technology (ICSITech), 2016
Arel, I., Rose, D., Karnowski, T., Deep Machine Learning – A New Frontier in Artificial Intelligence Research, IEEE Computational Intelligence Magazine, 2010
NVIDIA, Deep Learning in Medicine, 2016 Sustainability Report, 2016
Lemley, J., Bazrafkan, S., Corcoran, P., Deep Learning for Consumer Devices and Services: Pushing the limits for machine learning, artificial intelligence, and computer vision, IEEE Consumer Electronics Magazine, 2017
Khosla, A., Jayadevaprakash, N., Yao, B., Fei-Fei, L., Novel Dataset for Fine-Grained Image Categorization, First Workshop on Fine-Grained Visual Categorization (FGVC), IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011