Artificial Intelligence: A Brief Introduction to Machine Learning Techniques, with an Emphasis on Supervised Learning
Humans aren't born with many skills: we need to learn how to sort mail, land airplanes, and hold friendly conversations. Computer scientists have tried to help computers learn the way we do, with a process called supervised learning. Learning is how anything comes to make decisions. For example, humans, animals, and AI systems can all adapt their behavior based on their experiences. There are three main types of learning: reinforcement, unsupervised, and supervised learning. Supervised learning is the process of learning with training labels. It's the most widely used kind of learning in AI, and it's what we will focus on in this blog.
Supervised learning is when someone who knows the right answers, called a supervisor, points out mistakes during the learning process. You can think of a teacher correcting a student's math as one kind of supervised setting. We want AI to take in data, like an image of an animal, and classify it with a label, like reptile or mammal. AI needs computing power and data to learn, and that's especially true for supervised learning, which needs a lot of training examples from a supervisor. After training, this hypothetical AI should be able to correctly classify images it hasn't seen before, like a picture of a kitten as a mammal. That's how we know it's learning instead of just memorizing answers, and supervised learning is a key part of lots of AI we interact with every day.
It's how email accounts can correctly classify a message from your boss as important and ads as spam.
It's how Facebook tells your face apart from your friend's face so that it can make tag suggestions when you upload a photo.
It's how your bank may decide whether your loan request is approved or not.
Now, to initially create this kind of AI, computer scientists were loosely inspired by human brains. They were mostly interested in cells called neurons, because our brains have billions of them. Each neuron has three basic parts:
Cell body
Dendrites
Axon
The axon of one neuron is separated from the dendrites of another neuron by a small gap called a synapse, and neurons talk to each other by passing electric signals across synapses. As one neuron receives signals from other neurons, the electric charge inside its cell body builds up until a threshold is crossed; then an electric signal shoots down the axon and is passed to the next neuron, where the process repeats. The goal of early computer scientists wasn't to mimic a whole brain. Their goal was to create a single artificial neuron that worked like a real one.
In 1958, a psychologist named Frank Rosenblatt set out to create an artificial neuron. His goal was to teach this AI to classify images as triangles or not-triangles, with his supervision; that's what makes it supervised learning. The machine he built was about the size of a grand piano, and he called it the perceptron.

Rosenblatt wired the perceptron to a 400-pixel camera, which was quite high-tech for the time but is about a billion times less powerful than the one on the back of your modern cellphone. He would show the camera a picture of a triangle or a not-triangle, like a circle. Depending on whether the camera saw ink or paper in each spot, each pixel would send a different electric signal to the perceptron. The perceptron would then add up all the signals that matched the triangle shape. If the total charge was above its threshold, it would send an electric signal to turn on a light; that was the artificial neuron speaking up for "yes, that's a triangle." But if the electric charge was too weak to hit the threshold, it wouldn't do anything and the light wouldn't turn on, which meant "not a triangle."

At first, the perceptron was basically making random guesses, so to train it with supervision, Rosenblatt used yes and no buttons. If the perceptron was correct, he would push the yes button and nothing would change. But if the perceptron was wrong, he would push the no button, which set off a chain of events that adjusted how much electricity flowed across the synapses and adjusted the machine's threshold, so it would be more likely to get the answer right next time.
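The yes/no button procedure above is essentially the perceptron learning rule. Here is a minimal sketch in Python, using tiny made-up 4-pixel "images" rather than Rosenblatt's 400-pixel camera; the data, learning rate, and epoch count are all illustrative assumptions:

```python
def predict(pixels, weights, threshold):
    """Fire (return 1) if the weighted sum of pixel signals crosses the threshold."""
    total = sum(p * w for p, w in zip(pixels, weights))
    return 1 if total >= threshold else 0

def train(examples, weights, threshold, lr=0.1, epochs=20):
    """Mimic the yes/no buttons: a correct guess changes nothing,
    a wrong guess nudges the weights and the threshold."""
    for _ in range(epochs):
        for pixels, label in examples:
            error = label - predict(pixels, weights, threshold)
            if error != 0:  # the "no" button was pressed
                weights = [w + lr * error * p for w, p in zip(weights, pixels)]
                threshold -= lr * error  # make firing easier or harder
    return weights, threshold

# Hypothetical labeled data: 4 pixel intensities per "image",
# label 1 = triangle, 0 = not a triangle.
examples = [
    ([1, 0, 1, 1], 1),
    ([1, 1, 1, 1], 0),
    ([0, 0, 1, 1], 1),
    ([1, 1, 0, 0], 0),
]

weights, threshold = train(examples, [0.0] * 4, 0.5)
print([predict(p, weights, threshold) for p, _ in examples])  # → [1, 0, 1, 0]
```

After a few passes the machine classifies all four training images correctly, just as the perceptron's light eventually agreed with Rosenblatt's buttons.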
Nowadays, rather than building huge machines with switches and lights, we can use modern computers to program AI that behaves like neurons, and the basic concepts are pretty much the same. First, the artificial neuron receives inputs multiplied by different weights, which correspond to the strength of each signal. In our brains the electric signals between neurons are all the same size, but in computers they can vary. The threshold is represented by a special weight called the bias, which can be adjusted to raise or lower the neuron's eagerness to fire. So all the inputs are multiplied by their respective weights, added together, and passed through a mathematical function to get a result. In the simplest AI systems, this function is called a step function, which can only output a zero or a one. If the sum is less than the bias, the neuron outputs a zero, which could indicate "not a triangle" or something different depending on the task. But if the sum is greater than the bias, the neuron outputs a one, which indicates the opposite result. AI can be trained to make simple decisions about anything where you have enough data and supervised labels: triangles, junk mail, languages, movie genres, or even similar-looking foods like doughnuts and bagels.
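The weighted-sum-plus-step-function neuron described above fits in a few lines of Python. All the numbers here are made-up illustrations, not values from any real trained model:

```python
def step_neuron(inputs, weights, bias):
    """Multiply each input by its weight, sum, and compare against the
    bias (the threshold): output 1 if the sum reaches it, else 0."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum >= bias else 0

# Three input signals of varying strength (hypothetical values).
inputs  = [0.9, 0.2, 0.5]
weights = [0.8, -0.4, 0.3]
bias    = 0.7

print(step_neuron(inputs, weights, bias))  # → 1, the neuron fires
```

Here the weighted sum is 0.9·0.8 − 0.2·0.4 + 0.5·0.3 = 0.79, which clears the bias of 0.7, so the neuron fires; lowering the first input would keep it silent.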
List of Supervised Learning Algorithms
Supervised machine learning algorithms learn the patterns and relationships between a feature set and output data. These algorithms are defined by their use of labeled data: a dataset that contains many examples of features together with their target. Supervised learning algorithms learn the relationship between features and target from the dataset, a process referred to as training or fitting. Some common supervised learning algorithms are listed below:
1. Linear Regression
2. Logistic Regression
3. Decision Tree
4. SVM (Support Vector Machine)
5. Naive Bayes
6. kNN (k- Nearest Neighbors)
7. K-Means (strictly speaking an unsupervised clustering algorithm, though it often appears on lists like this)
8. Random Forest
9. Dimensionality Reduction Algorithms (typically unsupervised, but often used alongside supervised models)
10. Gradient Boosting Algorithms
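To make the training/fitting idea concrete, here is a self-contained sketch of one algorithm from the list, k-Nearest Neighbors, on a tiny invented dataset (the features, labels, and k value are all assumptions for illustration). For kNN, "fitting" simply means memorizing the labeled examples; prediction finds the k closest stored points and takes a majority vote:

```python
from collections import Counter
import math

def fit(features, targets):
    """Training/fitting for kNN: just store the labeled examples."""
    return list(zip(features, targets))

def predict(model, query, k=3):
    """Label the query by majority vote among its k nearest neighbors."""
    nearest = sorted(model, key=lambda pair: math.dist(pair[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical 2-D features (say, size and sweetness) with labels.
features = [(1.0, 1.2), (0.9, 1.0), (1.1, 0.8),
            (5.0, 5.2), (4.8, 5.0), (5.2, 4.9)]
targets  = ["bagel", "bagel", "bagel",
            "doughnut", "doughnut", "doughnut"]

model = fit(features, targets)
print(predict(model, (1.0, 1.1)))  # → bagel
print(predict(model, (5.0, 5.0)))  # → doughnut
```

The other algorithms on the list differ in how they model the feature-to-target relationship, but they all follow the same fit-then-predict pattern on labeled data.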