ML Fundamentals
What You'll Build Here
This module introduces the core concepts behind machine learning. You'll understand how a single neuron thinks, how it learns from mistakes (gradients), and how classical ML models work under the hood.
You can invoke model.fit() without knowing this. But you cannot debug a model that refuses to learn, or optimize an architecture, without understanding these fundamentals.
Estimated Time Impact
20-25 hours total
1. Classical ML Models ("Old School Cool")
Not every problem needs a billion-parameter Transformer. In fact, for tabular data, these "old school" models often win on speed and interpretability.
IBM: What is Machine Learning?
Linear & Logistic Regression
StatQuest: Linear Models
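The difference between the two is worth seeing in code: linear regression predicts a continuous number, logistic regression predicts a class. A minimal sketch with scikit-learn (assumed installed; the toy data is invented for illustration):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Regression: predict a continuous value. Toy data follows y = 2x + 1.
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y_cont = np.array([3.0, 5.0, 7.0, 9.0])
reg = LinearRegression().fit(X, y_cont)
print(reg.coef_, reg.intercept_)  # close to [2.0] and 1.0

# Classification: predict a label. Same inputs, 0/1 targets split at x = 2.5.
y_cls = np.array([0, 0, 1, 1])
clf = LogisticRegression().fit(X, y_cls)
print(clf.predict([[1.5], [3.5]]))  # [0 1]
```

Same `.fit()` / `.predict()` interface, different loss under the hood: squared error for the first, log loss for the second.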
Support Vector Machines & Decision Trees
StatQuest: Decision Trees & Random Forests
StatQuest: Support Vector Machines
Production systems love Random Forests. They are robust, don't require feature scaling, and are easy to explain to a boss. "The model failed because feature X was > 5" is easier to sell than "Matrix multiplication said so."
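The interpretability point above can be sketched in a few lines: a Random Forest exposes per-feature importances out of the box, with no feature scaling needed (scikit-learn assumed; the synthetic dataset is invented for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# 200 samples, 4 features, only 2 of which actually carry signal.
X, y = make_classification(n_samples=200, n_features=4, n_informative=2,
                           n_redundant=0, random_state=0)

# No scaling step: trees split on raw thresholds, so units don't matter.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Importances sum to 1 and tell you *which* features drove the predictions.
for i, imp in enumerate(forest.feature_importances_):
    print(f"feature {i}: importance {imp:.2f}")
```

The informative features should dominate the importance list, which is exactly the kind of "feature X mattered" story you can take to a stakeholder.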
2. Deep Learning Primitives
Andrew Ng: Neural Networks and Deep Learning (Course 1 & 2)
Focus:
- How a neuron computes (Weights + Bias + Activation)
- Forward Propagation (Prediction) vs. Backward Propagation (Learning)
- Loss Functions (How wrong are we?)
- Gradient Descent and how models learn
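The four bullets above fit in one short from-scratch sketch: a single neuron doing a forward pass, a loss, backpropagation, and gradient descent, in plain NumPy with no framework (the toy data and hyperparameters are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # toy linearly separable target

w = np.zeros(2)   # weights
b = 0.0           # bias
lr = 0.5          # learning rate

for _ in range(200):
    # Forward propagation: weights + bias + sigmoid activation.
    z = X @ w + b
    p = 1.0 / (1.0 + np.exp(-z))

    # Loss function ("how wrong are we?"): binary cross-entropy.
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

    # Backward propagation: gradient of the loss w.r.t. w and b.
    # For sigmoid + cross-entropy this simplifies to (p - y).
    grad_z = (p - y) / len(y)
    grad_w = X.T @ grad_z
    grad_b = grad_z.sum()

    # Gradient descent: step downhill.
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = np.mean((p > 0.5) == y)
print(f"final loss {loss:.3f}, training accuracy {accuracy:.2f}")
```

This is the whole learning loop that `model.fit()` hides: every deep network is this pattern repeated across many neurons and layers.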
3. Evaluation Metrics
How do you know if your model works? Accuracy is often a liar, especially on imbalanced classes.
12 Important Model Evaluation Metrics
Action Items:
- Complete the entire Classification metrics section (Precision, Recall, F1-Score, ROC-AUC, Confusion Matrix)
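Here is a small sketch of why accuracy lies: on a 95/5 imbalanced problem, a baseline that always predicts the majority class scores 95% accuracy while catching zero positives (scikit-learn assumed; the numbers are invented for illustration):

```python
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, confusion_matrix)

y_true = np.array([0] * 95 + [1] * 5)   # only 5% positives
y_pred = np.zeros(100, dtype=int)       # "always predict 0" baseline

print("accuracy: ", accuracy_score(y_true, y_pred))                    # 0.95
print("precision:", precision_score(y_true, y_pred, zero_division=0))  # 0.0
print("recall:   ", recall_score(y_true, y_pred, zero_division=0))     # 0.0
print("f1:       ", f1_score(y_true, y_pred, zero_division=0))         # 0.0
print(confusion_matrix(y_true, y_pred))  # all 5 positives missed
```

Precision, recall, and the confusion matrix all expose what accuracy hides, which is why the classification-metrics action item above matters.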