Lukas' Notes

194.025 Introduction to Machine Learning

Jan 29, 2026 · 2 min read

  • Machine Learning
  • Learning Paradigms:
    • Supervised Learning
      • Classification
      • Regression
    • Unsupervised Learning
    • Semi-Supervised Learning
    • Self-Supervised Learning
    • Reinforcement Learning
    • Transfer Learning
    • Active Learning
    • Passive Learning
    • Online Learning
    • Batch Learning
    • Representation Learning
  • Learning Methods:
    • Distance-based Learning
      • Manhattan Distance
      • Euclidean Distance
      • Minkowski Distance
      • Levenshtein Distance
      • Cosine Similarity
    • Linear Regression
    • Polynomial Regression
    • Multiple Polynomial Regression
    • Linear Classification
      • Binary Linear Classifier
      • Perceptron
      • Kernel Perceptron
      • Logistic Regression
      • Support Vector Machine (SVM)
    • Decision Trees
    • Bayesian Learning
    • Multiclass Classification
  • Kernel Methods:
    • Kernel Function
    • Gram Matrix
    • Hinge Loss
    • Common Kernels:
      • Linear Kernel
      • Polynomial Kernel
      • Gaussian RBF Kernel
  • Ensemble Learning:
    • Bagging
    • Boosting
  • Artificial Neural Networks (ANN):
    • Multi-Layer Perceptron (MLP)
    • Deep Neural Networks
    • Activation Functions
      • Sigmoid
      • tanh
      • Rectified Linear Unit (ReLU)
    • Backpropagation
    • Cross-Entropy Loss
    • Universal Function Approximation
    • Transformers
  • Dimensionality Reduction:
    • Dimensionality Reduction
  • Risk and Evaluation:
    • ML Evaluation
    • Empirical Risk
    • True Risk
    • Underfitting
    • Overfitting
    • Bias-Variance Tradeoff
    • Dataset Splitting
    • k-fold Cross-Validation
    • Baselines
    • Student's t-Test
    • Metrics:
      • MAE
      • RMSE
      • Confusion Matrix
      • Accuracy
      • Precision
      • Sensitivity
      • F1-Score
  • Probabilistic Machine Learning:
    • Bayes Optimal Classifier
    • Discriminative Learning
    • Generative Learning
    • Inference and Estimation:
      • Maximum Likelihood Estimation (MLE)
      • Maximum a Posteriori Estimation (MAP)
      • Posterior Predictive Distribution
    • Probabilistic Graphical Models:
      • Bayesian Network
      • Naive Bayes
  • Optimisation:
    • Gradient Descent
    • Gradient Checking
    • Normal Equations
    • Regularisation
      • L1 Regularisation
      • L2 Regularisation
    • Hyperparameters
    • Hyperparameter Tuning
  • Fundamental Assumptions:
    • I.I.D. Assumption
    • Realisability Assumption
    • Cluster Assumption
    • Manifold Assumption
  • Learning Theory:
    • Hypothesis Class
    • Consistent Hypothesis
    • PAC Learning
    • PAC-Learnable Class
    • Sample Complexity
    • Generalisation Bound
    • Rademacher Complexity
    • Shattering
    • VC Dimension
  • Mathematical Foundations:
    • Eigenvalue
    • Eigenvector
    • Convex Function
    • Basis Function
    • Lagrange Multipliers
    • Dual Problem
    • KL Divergence
  • Graph Representations:
    • Adjacency List
    • Adjacency Matrix
    • Adjacency Set
    • Weisfeiler-Leman Algorithm
    • Weisfeiler-Leman Graph Kernel
  • Bias and Fairness:
    • Ethical AI
  • Data Analysis:
    • Data Modality
    • Exploratory Data Analysis (EDA)
    • Feature Engineering
  • Data Preprocessing:
    • Independent Feature Scaling
    • Min-Max Scaling
    • Mean Normalisation
    • Standardisation
    • One-Hot Encoding
    • Imputation
  • Data Types:
    • Numerical Data
    • Categorical Data
      • Nominal Data
      • Ordinal Data
    • Image Data
    • Text Data
