NEURAL NETWORK
This blog post will provide you with a comprehensive overview of neural networks, exploring the theory behind the algorithm and demonstrating its implementation using Python libraries. Dive in to uncover the advantages and disadvantages of neural networks, as well as their real-world applications across various domains. With that, enjoy your journey in QDO!
WHAT IS A NEURAL NETWORK
Neural networks (NN) are a class of machine learning models inspired by the structure and functioning of the human brain. They are designed to recognize patterns and make predictions by processing data through layers of interconnected nodes (also called neurons). Each connection between nodes has a weight that adjusts during training, helping the network learn from the data.
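The idea of a single neuron can be sketched in a few lines: it computes a weighted sum of its inputs, adds a bias, and passes the result through an activation function. The input values, weights, and bias below are illustrative, not taken from any real model.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus bias, passed through a sigmoid activation
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))

# Example: a neuron with two inputs (values chosen for illustration)
output = neuron([0.5, 0.8], [0.4, -0.2], 0.1)
```

During training, the weights and biases are the quantities that get adjusted so that the network's outputs move closer to the target values.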
Concept of neural network
Let's say we intend to measure the effectiveness of a drug dosage on a patient. The dosage is only effective when taken in a moderate amount, as shown in the graph below.
No matter how we draw a best-fit straight line, it cannot separate the outcomes, but a squiggly curve like the one below can do so effectively.
This is where neural networks come into the picture. Before that, let's review the overall structure of a neural network, displayed below.
Layers of a neural network
Neural networks consist of 3 layers:
- Input layer : the first layer of the network; accepts the input from the user
- Hidden layers : the layers in between; assist the network in understanding the data by applying the weights (parameters) and biases adjusted during training
- Output layer : the last layer of the network; displays the output to the user
Now let's apply those layers to our situation.
The dosage represents the input layer, which accepts the dosage value from the user, while everything between the dosage and the efficacy represents the hidden layers.
Parameters: Represented by the values inside the boxes in the figure above; these weights scale the data as it flows through the network.
Bias: Represented by the values outside the boxes; these shift the data up or down before the activation is applied.
Activation functions
Each neuron uses an activation function to transform its input signal into an output. In the scenario above, the activation functions are the curves shown in the graph. Common activation functions include:
- Sigmoid : Maps inputs to values between 0 and 1.
- ReLU (Rectified Linear Unit): Outputs 0 for negative inputs and the input value for positive values, making it computationally efficient.
- Tanh: Maps inputs to values between -1 and 1, often used in deep networks.
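The three activation functions listed above are simple enough to compute directly; a minimal sketch using NumPy:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))   # maps any input into (0, 1)

def relu(z):
    return np.maximum(0, z)        # 0 for negatives, identity for positives

def tanh(z):
    return np.tanh(z)              # maps any input into (-1, 1)

z = np.array([-2.0, 0.0, 2.0])
print(relu(z))  # [0. 0. 2.]
```

Note that `sigmoid(0)` is exactly 0.5 and `tanh(0)` is exactly 0, which is why both are described as "squashing" functions centered on their midpoints.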
We then plot the curve produced by each hidden neuron.
Last but not least, we add the y-values of the two curves together and apply the output bias. The resulting squiggle is the neural network's fit, which can distinguish effective from ineffective dosages.
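The curve-summing step above can be sketched numerically. The weights and biases here are illustrative values chosen to reproduce the shape of the example, not the numbers from the figure.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

dosage = np.linspace(0, 1, 101)        # dosages scaled to [0, 1]
curve1 = sigmoid(10 * dosage - 3)      # hidden neuron 1: rises at low-to-mid dosages
curve2 = sigmoid(-10 * dosage + 7)     # hidden neuron 2: falls at mid-to-high dosages
efficacy = curve1 + curve2 - 1.0       # sum of both curves plus an output bias
```

The resulting curve is high at moderate dosages and low at both extremes, matching the squiggle a straight line could not produce.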
Implementation of a neural network in Python
Importing libraries
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.preprocessing import StandardScaler, LabelEncoder
from sklearn.model_selection import train_test_split
Loading dataset
wine=pd.read_csv('C:/Users/User/Desktop/Dataset_example/winequality-red.csv',sep=',')
Data preprocessing
bins=(2,6.5,8)
group_names=['bad','good']
wine['quality']=pd.cut(wine['quality'],bins=bins,labels=group_names)
wine['quality'].unique()
Label_quality=LabelEncoder()
wine['quality']=Label_quality.fit_transform(wine['quality'])
Defining dependent and independent variables
X=wine.drop('quality',axis=1)
Y=wine['quality']
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=42)
Scaling the data
sc=StandardScaler()
X_train=sc.fit_transform(x_train)
X_test=sc.transform(x_test)  # reuse the training fit; don't refit the scaler on test data
Applying the model
mlpc=MLPClassifier(hidden_layer_sizes=(11,11,11),max_iter=500)
mlpc.fit(X_train,y_train)  # train on the scaled features
pred_mlpc=mlpc.predict(X_test)
Get prediction result
print(classification_report(y_test,pred_mlpc))
              precision    recall  f1-score   support

           0       0.88      0.95      0.92       273
           1       0.48      0.26      0.33        47

    accuracy                           0.85       320
   macro avg       0.68      0.60      0.62       320
weighted avg       0.82      0.85      0.83       320
print(confusion_matrix(y_test,pred_mlpc))
[[260 13]
[ 35 12]]
Get prediction accuracy
from sklearn.metrics import accuracy_score
acc=accuracy_score(y_test,pred_mlpc)
print(acc)
0.85
Parameters that you can tune in neural network
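A minimal sketch of the main knobs exposed by scikit-learn's `MLPClassifier`; the values below are illustrative choices, not tuned recommendations for the wine dataset.

```python
from sklearn.neural_network import MLPClassifier

mlpc = MLPClassifier(
    hidden_layer_sizes=(11, 11, 11),  # number and width of hidden layers
    activation='relu',                # 'identity', 'logistic', 'tanh', or 'relu'
    solver='adam',                    # weight optimizer: 'lbfgs', 'sgd', or 'adam'
    alpha=0.0001,                     # L2 regularization strength
    learning_rate_init=0.001,         # initial step size for weight updates
    max_iter=500,                     # maximum number of training iterations
    random_state=42,                  # reproducible weight initialization
)
```

Increasing `alpha` penalizes large weights (reducing overfitting), while `hidden_layer_sizes` controls model capacity; these two are usually the first parameters worth searching over.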
Advantages and disadvantages of neural network
Advantages
- Ability to Learn Complex Patterns: Neural networks can capture complex, non-linear relationships in data, making them highly effective for image recognition, natural language processing, and other tasks with intricate patterns.
- Adaptability: Neural networks can be applied across various types of data (text, images, audio, etc.) and tasks (classification, regression, forecasting), making them versatile tools for different domains.
- Feature Extraction: Deep neural networks can automatically learn important features from raw data, reducing the need for manual feature engineering and often improving model performance.
Disadvantages
- High Computational Cost: Neural networks, especially deep models, require significant computational resources for training and can be time-consuming to train, especially on large datasets.
- Data-Hungry: Neural networks usually need large amounts of labeled data to perform well, which may not be feasible in many real-world situations.
- Interpretability: Neural networks are often described as "black boxes" because their complex structure makes it challenging to interpret how specific predictions are made, which can be a drawback in sensitive applications like healthcare.