
Definition

Dynamic Programming

Dynamic programming is an approach to solving complex problems by dividing them into smaller sub-problems. The sub-problems are solved bottom-up, allowing their optimal solutions to be re-used when solving larger sub-problems.

Optimality

Definition

Bellman's Principle of Optimality

Bellman’s principle of optimality states that an optimal policy (a sequence of decisions) for an optimisation problem has the property that, regardless of the initial state and the first decision, the remaining decisions must constitute an optimal policy for the state resulting from that first decision.

In simpler words: dynamic programming leads to an optimal solution if and only if the overall solution can be constructed from optimal solutions of its sub-problems (the problem has optimal substructure).
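
The principle is often written as the Bellman equation. Using names chosen here purely for illustration (a state s, decisions a, an immediate reward r(s, a) and a transition function T(s, a)), the value V of a state is defined in terms of the values of its successor states:

    V(s) = \max_a \left[ r(s, a) + V(T(s, a)) \right]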


Memoisation

Memoisation means saving the optimal solutions of sub-problems, allowing later computations to look up results that were already computed instead of recomputing them.

Examples

Fibonacci

Computing Fibonacci numbers is a naturally recursive problem with overlapping sub-problems, which makes it a good fit for dynamic programming.

def fib(n: int) -> int:
    # Base cases: fib(0) = 0, fib(1) = 1.
    dp: list[int] = [0, 1]
    # Build the table bottom-up: each entry depends only on the two before it.
    for i in range(2, n + 1):
        dp.append(dp[i - 1] + dp[i - 2])
    return dp[n]
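
For example, fib(10) returns 55.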

Of course, dp doesn’t always have to be a list. It can be a dict or any other associative data structure, which is particularly natural for top-down memoisation.
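
As a minimal sketch of the same computation done top-down, assuming a plain dict as the memoisation table (the name fib_memo is chosen here for illustration):

def fib_memo(n: int, memo: dict[int, int] | None = None) -> int:
    # The dict maps n to fib(n) and is shared across recursive calls.
    if memo is None:
        memo = {0: 0, 1: 1}
    if n not in memo:
        memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]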

Knapsack Problem

Another classic DP problem is the Knapsack Problem. Greedy algorithms don’t yield optimal solutions here, and naive top-down recursion without memoisation is exponentially slow.
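
A minimal bottom-up sketch of the 0/1 variant, assuming items are given as parallel weights and values lists and an integer capacity (all names chosen here for illustration):

def knapsack(weights: list[int], values: list[int], capacity: int) -> int:
    # dp[c] = best achievable value with capacity c,
    # considering the items processed so far.
    dp: list[int] = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        # Iterate capacities downwards so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

For example, knapsack([1, 3, 4], [15, 20, 30], 4) returns 35 (the first two items fill the capacity exactly).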