Gradient Descent is an iterative optimization algorithm used to minimize a function by repeatedly taking steps in the direction of the negative gradient.
In machine learning, we use Gradient Descent to optimize the parameters of our model. The parameters here are, for example, the coefficients in Linear Regression or the weights in Neural Networks.
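To sketch the idea in code (the function f(x) = x² and the learning rate below are illustrative choices, not taken from the article), each step updates a parameter in the direction of the negative gradient:

```python
# Minimal sketch of Gradient Descent on f(x) = x**2 (illustrative choice).
# The gradient is f'(x) = 2*x, so each update moves x toward the minimum at 0.

def gradient_descent(start, learning_rate=0.1, steps=100):
    x = start
    for _ in range(steps):
        grad = 2 * x                   # gradient of f(x) = x**2
        x = x - learning_rate * grad   # step in the negative gradient direction
    return x

print(gradient_descent(10.0))  # approaches the minimum at x = 0
```

The same update rule applies with any differentiable cost function; only the gradient computation changes.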
The size of each of the steps above is called the learning rate. By increasing the learning rate we cover more ground with each step. However, since the slope changes constantly, we must also consider the possibility of exceeding the lowest point: with too large a learning rate, the algorithm may overshoot the minimum and fail to converge. This overshooting problem should not be confused with Overfitting, a separate issue worth defining here.

Overfitting: Overfitting is an issue in machine learning and statistics where a model learns the patterns of a training dataset too well. It happens when a model learns the detail and noise in the training data to the extent that it negatively affects the model's performance on new data.
With a very low learning rate, the gradient is recalculated very frequently, so the algorithm proceeds safely in the negative gradient direction. A low learning rate is more reliable, but it makes the computation time-consuming.
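This trade-off can be demonstrated on the same toy function f(x) = x² (an assumed example, with hand-picked learning rates): a small learning rate shrinks the error safely but slowly, while a learning rate that is too large makes each update overshoot the minimum, so the error grows instead of shrinking:

```python
def step(x, lr):
    return x - lr * 2 * x  # one Gradient Descent update on f(x) = x**2 (gradient 2*x)

x_low, x_high = 10.0, 10.0
for _ in range(20):
    x_low = step(x_low, lr=0.05)   # small step: |x| shrinks by 10% per iteration
    x_high = step(x_high, lr=1.1)  # too large: x flips sign and grows each iteration

print(abs(x_low))   # steadily shrinking toward the minimum at 0
print(abs(x_high))  # far larger than the starting point: the minimum was overshot
```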
The cost function tests the accuracy of the Machine Learning model on the data we have. It quantifies the error between predicted values (y_pred) and expected values (y) and presents it as a single real number. The cost function has its own curve and its own gradients; the slope of this curve tells us how to update our parameters to make the model more accurate.
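The article does not name a specific cost function, so as an illustrative assumption here is Mean Squared Error (MSE), a common choice that reduces the gap between predicted and expected values to a single real number:

```python
def mse(y, y_pred):
    """Mean Squared Error: average squared gap between expected and predicted values."""
    return sum((yi - pi) ** 2 for yi, pi in zip(y, y_pred)) / len(y)

y      = [1.0, 2.0, 3.0]   # expected values
y_pred = [1.5, 2.0, 2.5]   # predicted values
print(mse(y, y_pred))      # errors are 0.5, 0.0, -0.5, so MSE = 0.5 / 3
```

Gradient Descent then minimizes this single number by following the slope of the cost curve, as described above.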
Thank you for reading.