
Simple Linear Regression



Linear Regression: a model of the relationship between variables in which that relationship forms a straight line.
The measure that quantifies linearity in the data is called correlation.

Correlation:
Correlation comes in three forms:
a) Positive correlation
b) Negative correlation
c) No correlation
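For intuition, here is a minimal sketch (plain NumPy, with made-up toy data) showing what each of the three forms looks like numerically via np.corrcoef:

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.arange(10, dtype=float)

    y_pos = 2 * x + rng.normal(0, 1, 10)    # rises with x
    y_neg = -2 * x + rng.normal(0, 1, 10)   # falls with x
    y_none = rng.normal(0, 1, 10)           # unrelated to x

    # np.corrcoef returns a 2x2 correlation matrix; [0, 1] is r(x, y)
    print("positive:", np.corrcoef(x, y_pos)[0, 1])    # close to +1
    print("negative:", np.corrcoef(x, y_neg)[0, 1])    # close to -1
    print("none:", np.corrcoef(x, y_none)[0, 1])       # close to 0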

What is Linear Regression:
Linear regression is all about finding the best-fit line, the straight line that best captures the linear trend in the data.
                                            "WHAT IS BEST FIT LINE"?
                                           "HOW TO IDENTIFY BEST FIT LINE"?
  • By using the Pearson correlation coefficient, which lets us draw a basic initial line.
  • By using an error function and gradient descent to find the line with the least possible error (sketched in the code below).
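As a rough illustration of that second point, here is a minimal gradient-descent sketch in plain NumPy; the toy data, learning rate, and iteration count are made-up choices for this example, not values from the post:

    import numpy as np

    # toy data: y is roughly 3x + 2 plus noise (made-up example)
    rng = np.random.default_rng(0)
    x = rng.uniform(0, 10, 50)
    y = 3 * x + 2 + rng.normal(0, 1, 50)

    m, c = 0.0, 0.0   # start with a flat line
    lr = 0.01         # learning rate (arbitrary choice)

    for _ in range(5000):
        error = (m * x + c) - y
        # gradients of the mean squared error with respect to m and c
        grad_m = 2 * np.mean(error * x)
        grad_c = 2 * np.mean(error)
        # nudge the slope and intercept in the direction that lowers the error
        m -= lr * grad_m
        c -= lr * grad_c

    print(m, c)   # should approach 3 and 2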
Now the question is: how do we verify that a line is the best fit?
By using an error function (a function that measures the difference between the actual and predicted values). Common error functions include:
  • Mean Squared Error (MSE)
  • Mean Absolute Error (MAE)
  • Root Mean Squared Error (RMSE)
  • R² score
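For reference, here is a minimal sketch of all four metrics in plain NumPy; y_true and y_pred are made-up arrays just for illustration:

    import numpy as np

    y_true = np.array([3.0, 5.0, 7.5, 10.0])   # actual values
    y_pred = np.array([2.8, 5.4, 7.0, 10.5])   # model predictions

    mse = np.mean((y_true - y_pred) ** 2)
    mae = np.mean(np.abs(y_true - y_pred))
    rmse = np.sqrt(mse)
    # R² compares the model's error to a baseline that always predicts the mean
    r2 = 1 - np.sum((y_true - y_pred) ** 2) / np.sum((y_true - y_true.mean()) ** 2)

    print(mse, mae, rmse, r2)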
                               "Which Error function we have to use"
The whole point of an error function is to quantify how far the predictions are from the actual values, so that we can drive that error as low as possible.
Given the different error functions above, which one should we use?
  • Any of them can work; the choice is largely up to us.
  • In terms of computation time, MAE is the cheapest, because it only needs absolute values.
For example, if we compare MSE with RMSE: RMSE takes the square root of the MSE, so it adds one extra computation step per evaluation.

THE FORMULA FOR LINEAR REGRESSION IS
                                                            "Y = mX + c"
where m is the slope of the line and c is the y-intercept.
For the full code, visit my GitHub.








