Gradient descent is an optimization algorithm used to minimize the cost (or loss) function of a machine learning model. It iteratively adjusts the model's parameters by taking steps proportional to the negative gradient of the cost function with respect to those parameters. The updates continue until the parameters converge or a maximum number of iterations is reached.
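In symbols, each iteration applies the update theta := theta - alpha * grad J(theta), where alpha is the learning rate, J is the cost function, and grad J(theta) is its gradient at the current parameters. The learning rate controls the step size: too small and convergence is slow, too large and the updates can overshoot and diverge.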
Example code for gradient descent in linear regression:
import numpy as np

# Assuming X and y are the feature matrix and target vector, respectively
# Initialize parameters to zero
theta = np.zeros(X.shape[1])
# Define learning rate and number of iterations
learning_rate = 0.01
num_iterations = 1000
# Perform gradient descent
for i in range(num_iterations):
    # Calculate predictions with the current parameters
    predictions = np.dot(X, theta)
    # Calculate the prediction error
    error = predictions - y
    # Calculate the gradient of the mean squared error cost
    gradient = np.dot(X.T, error) / len(y)
    # Update parameters by stepping against the gradient
    theta -= learning_rate * gradient
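As a quick sanity check, the same loop can be run on synthetic data. The data generation below is purely illustrative (the coefficients 4.0 and 3.0 and the noise level are arbitrary choices), and a larger learning rate than above is used so the loop converges within 1,000 iterations on this particular data:

import numpy as np

rng = np.random.default_rng(0)
# Illustrative data: 100 samples, a bias column plus one feature
x = rng.uniform(0, 2, size=100)
X = np.column_stack([np.ones(100), x])
true_theta = np.array([4.0, 3.0])  # arbitrary "ground truth" for the demo
y = X @ true_theta + rng.normal(0, 0.5, size=100)

theta = np.zeros(X.shape[1])
learning_rate = 0.1  # step size tuned for this synthetic data
num_iterations = 1000
for i in range(num_iterations):
    predictions = np.dot(X, theta)
    error = predictions - y
    gradient = np.dot(X.T, error) / len(y)
    theta -= learning_rate * gradient

print(theta)  # should land near [4.0, 3.0], up to noise

If the printed parameters blow up toward infinity instead of settling, the learning rate is too large for the scale of the data; shrinking it (or normalizing the features) restores convergence.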