Neural Networks
How Neural Networks Work
Neural networks are used in many fields — for example classification, prediction, dimensionality reduction, and data representation in machine learning applications. In this post, I would like to discuss the basic concepts of neural nets as an alternative to hand-crafted algorithms for complex problems. We will see an overview of some common applications, and the pros and cons of using these systems. So let’s jump right into it!
Neural Nets Basics
In deep learning, we have neural networks, which are loosely inspired by animal brains. Our brain is made up of neurons; each neuron receives signals from other neurons and, depending on those signals, either fires a signal of its own or stays silent. Neural networks learn their internal state (the weights and biases) through trial and error. They do so through backpropagation (this process is mentioned later). That is, the network makes predictions on the training inputs, compares its outputs with the correct answers, and adjusts its weights a little in the direction that reduces the error. The updated weights are then used as the starting point for further learning. [Image: a feed-forward network with an input layer, a hidden layer, and an output layer]
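The trial-and-error loop described above can be sketched with a single sigmoid neuron learning the AND function. This is a minimal illustration, not a full network: the learning rate, epoch count, and random seed below are arbitrary choices.

```python
import math, random

# A single sigmoid neuron learning AND by gradient descent:
# predict, measure the error, nudge the weights downhill, repeat.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

random.seed(0)
w1, w2, b = random.random(), random.random(), random.random()
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

lr = 1.0  # learning rate: how large each weight nudge is
for epoch in range(5000):
    for (x1, x2), target in data:
        y = sigmoid(w1 * x1 + w2 * x2 + b)
        # Gradient of the squared error w.r.t. the pre-activation:
        grad = (y - target) * y * (1 - y)
        w1 -= lr * grad * x1
        w2 -= lr * grad * x2
        b  -= lr * grad

predictions = [round(sigmoid(w1 * x1 + w2 * x2 + b)) for (x1, x2), _ in data]
print(predictions)  # expected: [0, 0, 0, 1]
```

In a real network, backpropagation applies this same idea layer by layer using the chain rule; the single-neuron case just makes the update rule visible.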
The size of the hidden layer largely determines how many parameters the network must learn. In a fully connected network, every neuron in one layer connects to every neuron in the next, so the number of weights between two layers is the product of their sizes. For example, with a 784-dimensional input (as in MNIST), a hidden layer of 1,000 neurons requires 784 × 1,000 = 784,000 weights; with 500 hidden neurons it is 392,000; with 400 it is 313,600; and with 100 it is 78,400 (plus one bias per neuron in each case). And so on — the numbers grow quickly as layers get wider.
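A quick way to check counts like these is to compute them directly. The layer sizes below (a 784-pixel input and a 10-class output, as in MNIST) are just an example:

```python
# Counting learnable parameters in a fully connected network.
def dense_params(n_in, n_out):
    """Weights (n_in * n_out) plus one bias per output neuron."""
    return n_in * n_out + n_out

layers = [784, 1000, 10]   # input -> hidden -> output
total = sum(dense_params(a, b) for a, b in zip(layers, layers[1:]))
print(total)  # 795010
```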
Artificial Neural Network Applications — Examples
In deep learning applications, a lot of problems have been solved using neural networks. Deep learning algorithms are popular in many areas, e.g. computer vision, speech recognition, language translation, robotics, and medicine. Deep learning models may involve different types of components — hidden units, hidden layers, output units, residual units, etc.
In computer vision, deep learning can be applied to solve difficult tasks like object detection, semantic segmentation, and depth estimation.
Speech recognition applications often work better when fed raw audio. However, to obtain useful features, the system must first convert input words and sounds into numerical representations. One classic algorithm for predicting word sequences is the Hidden Markov Model (HMM). It is a Markov model, meaning that the probability of the next hidden state depends only on the current state, not on the full history of preceding observations. HMMs are particularly effective in speech recognition because they can tolerate words being dropped or altered while speaking.
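A toy HMM makes the idea concrete. The states, transition probabilities, and emission probabilities below are invented for illustration (a crude "silence vs. speech" detector over low/high energy observations); Viterbi decoding then recovers the most likely hidden state sequence:

```python
# A minimal Viterbi decoder for a toy two-state HMM.
states = ["silence", "speech"]
start_p = {"silence": 0.6, "speech": 0.4}
trans_p = {"silence": {"silence": 0.7, "speech": 0.3},
           "speech":  {"silence": 0.4, "speech": 0.6}}
emit_p = {"silence": {"low": 0.9, "high": 0.1},
          "speech":  {"low": 0.2, "high": 0.8}}

def viterbi(obs):
    # V[t][s] = probability of the best path ending in state s at time t.
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            # Markov property: only the previous state matters.
            prob, prev = max((V[-2][p] * trans_p[p][s] * emit_p[s][o], p)
                             for p in states)
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

print(viterbi(["low", "high", "high", "low"]))
# -> ['silence', 'speech', 'speech', 'silence']
```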
In the field of data representation, neural networks are widely used for dimensionality reduction in data analysis problems. As the dimensionality of the available data grows, the processing demands of these problems grow with it — which is exactly what dimensionality reduction techniques help contain.
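As a concrete sketch, principal component analysis — a classic linear dimensionality-reduction technique, shown here in place of a neural autoencoder — projects high-dimensional points onto the few directions of maximal variance. The 5-D data below is synthetic, with deliberately redundant dimensions:

```python
import numpy as np

# PCA via SVD: reduce 5-D points to their top 2 principal components.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X[:, 3] = X[:, 0] * 2.0       # make dimension 3 redundant
X[:, 4] = X[:, 1] - X[:, 0]   # and dimension 4 a linear mix

Xc = X - X.mean(axis=0)       # centre the data first
# Rows of Vt are principal directions, ordered by variance explained.
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
X2 = Xc @ Vt[:2].T            # project onto the top 2 components

print(X2.shape)  # (200, 2)
```

A neural autoencoder generalises this idea: its bottleneck layer learns a (possibly nonlinear) low-dimensional representation of the input.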
Machine Learning Applications
There are many applications where AI-powered technologies use neural networks, like text summarization, imputing missing values, anomaly detection, predicting financial markets, etc. Below are a few examples:
Clustering
Clustering is the task of grouping items according to a similarity measure, e.g. cosine distance or Euclidean distance. Clustering using neural networks — let us compare the following pictures.
[Figures: sophisticated vs. unsophisticated views of word clustering, plotting words as points, comparing a KNN baseline with LSTM and ELSTM networks]
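A minimal k-means run — a classic clustering algorithm based on Euclidean distance — shows the grouping idea in code. The two synthetic blobs below stand in for word or document vectors, and the initial centres are seeded deterministically for illustration:

```python
import numpy as np

# k-means on two well-separated synthetic blobs of 2-D points.
rng = np.random.default_rng(1)
a = rng.normal(loc=(0, 0), scale=0.3, size=(50, 2))
b = rng.normal(loc=(5, 5), scale=0.3, size=(50, 2))
X = np.vstack([a, b])

# Illustrative initialisation: one seed point from each region.
centers = np.array([X[0], X[50]])
for _ in range(10):
    # Assign each point to its nearest centre (Euclidean distance).
    d = np.linalg.norm(X[:, None] - centers[None, :], axis=2)
    labels = d.argmin(axis=1)
    # Move each centre to the mean of its assigned points.
    centers = np.array([X[labels == k].mean(axis=0) for k in range(2)])

print(np.bincount(labels).tolist())  # [50, 50]
```

Neural approaches to clustering typically first learn an embedding (e.g. word vectors) and then apply a distance-based algorithm like this one in the learned space.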
Spam Filtering
Spam filtering is the task of blocking unwanted mail — messages with low-quality content, scams, and so on. There exist many ways to do it; at one extreme we can build a huge corpus of emails for the sake of the analysis. A common approach is Bayesian filtering. Deciding whether a message is spam can be done by a binary classifier: a function F(x) that takes the message features x and returns a class, where one class is spam and the other is non-spam. The class with the higher posterior probability wins. The most commonly used Bayesian filter is the naive Bayes classifier, a probabilistic model that assumes the features (words) are conditionally independent given the class. The formula can be expressed as follows:
P(spam | words) ∝ P(words | spam) · P(spam)
https://en.wikipedia.org/wiki/Bayesian_filtering
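A tiny naive Bayes filter over a made-up corpus shows the mechanics. The six messages and the 0.5 class priors below are illustrative assumptions, not a real dataset:

```python
import math
from collections import Counter

# Toy training corpus (illustrative, not real email data).
spam = ["win money now", "free money offer", "claim your free prize"]
ham  = ["meeting at noon", "project status update", "lunch at noon"]

def word_counts(docs):
    c = Counter()
    for d in docs:
        c.update(d.split())
    return c

spam_c, ham_c = word_counts(spam), word_counts(ham)
vocab = set(spam_c) | set(ham_c)

def log_prob(words, counts, prior):
    total = sum(counts.values())
    # Laplace smoothing so unseen words don't zero out the product.
    return math.log(prior) + sum(
        math.log((counts[w] + 1) / (total + len(vocab))) for w in words)

def classify(message):
    words = message.split()
    p_spam = log_prob(words, spam_c, 0.5)  # assumed prior P(spam) = 0.5
    p_ham  = log_prob(words, ham_c, 0.5)
    return "spam" if p_spam > p_ham else "ham"

print(classify("free money"))      # spam
print(classify("status meeting"))  # ham
```

Working in log space avoids numerical underflow when multiplying many small word probabilities, which is standard practice for naive Bayes.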
Now, let us look at an interesting experiment using neural networks. Suppose we want to find out how well a model can classify images. We train a classifier that outputs a binary label (0 or 1), then compare its predictions against the true labels to see how often the picture we were looking for is identified correctly. Let us train our model on the MNIST handwritten-digit dataset and see how well a neural network works.
[Table: image-classification results compared across models — NNet, DNN, PCA+DNN, RNN, CNN]
Images 1 and 2 are good. As per the code given below, here is how to classify the data into two classes, 0 and 1. Training data set:
Image Training Set
Training and Testing Sets:
Data Preprocessing:
So far we have taken the original data and separated it into three parts: a training set, a validation set, and a test set. Now, we need a model to fit on the training data. Here we use XGBoost. The reason we choose this algorithm is that it trains quickly and is capable of doing very well on large datasets. I hope you enjoyed the explanation, and you understood the basics. Have a happy day!
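The three-way split described above can be sketched as follows. The 70/15/15 proportions and the synthetic dataset are illustrative; in practice you would load real features and labels:

```python
import random

# A minimal train / validation / test split.
random.seed(0)
data = [(i, i % 2) for i in range(100)]   # synthetic (feature, label) pairs
random.shuffle(data)                      # shuffle before splitting

n = len(data)
train = data[: int(0.7 * n)]               # 70% for fitting the model
valid = data[int(0.7 * n): int(0.85 * n)]  # 15% for tuning hyperparameters
test  = data[int(0.85 * n):]               # 15% held out for final evaluation

print(len(train), len(valid), len(test))   # 70 15 15
```

The test set must stay untouched until the very end; evaluating on it during tuning leaks information and inflates the reported accuracy.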