Gradient Descent

Interactive optimization of mathematical functions

About Gradient Descent

Algorithm: xₙ₊₁ = xₙ - α·∇f(xₙ)

Goal: Find x that minimizes f(x)

Learning Rate (α): controls the step size taken along the negative gradient at each iteration

Convergence: When |∇f(x)| < threshold
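The update rule and stopping criterion above can be sketched in Python. This is a minimal illustration, not the visualizer's implementation; the test function f(x) = (x − 3)², its derivative, and all parameter values are assumptions chosen for the example:

```python
def gradient_descent(grad, x0, alpha=0.1, tol=1e-6, max_iter=1000):
    """Minimize f by repeating x_{n+1} = x_n - alpha * grad_f(x_n)."""
    x = x0
    for _ in range(max_iter):
        g = grad(x)
        if abs(g) < tol:      # convergence: |grad f(x)| < threshold
            break
        x -= alpha * g        # step against the gradient
    return x

# Example: f(x) = (x - 3)^2 has its minimum at x = 3, with f'(x) = 2(x - 3)
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

With α = 0.1 the distance to the minimum shrinks by a factor of 0.8 per step, so the loop converges well within the iteration budget.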

Challenges:

• Local minima: the algorithm can settle in a minimum that is not the global one

• Learning rate too large (overshooting or divergence) or too small (very slow convergence)

• Saddle points: the gradient vanishes even though the point is not a minimum
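The learning-rate challenge is easy to demonstrate on f(x) = x², where one gradient-descent step is x ← (1 − 2α)·x, so the iteration contracts only when |1 − 2α| < 1, i.e. 0 < α < 1. The function and the specific α values below are illustrative assumptions:

```python
def run(alpha, x0=1.0, n=20):
    """Run n gradient-descent steps on f(x) = x^2, where f'(x) = 2x."""
    x = x0
    for _ in range(n):
        x = x - alpha * 2 * x  # one step: x <- (1 - 2*alpha) * x
    return x

small = run(0.1)  # factor |1 - 0.2| = 0.8 < 1: x shrinks toward 0
large = run(1.1)  # factor |1 - 2.2| = 1.2 > 1: x grows without bound
```

After 20 steps, `small` is near 0 while `large` has blown up past its starting point, which is exactly the divergence a too-large α causes in the visualizer.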