# Dynamic Programming
> [!summary] Dynamic Programming
>
> Dynamic programming applies when subproblems overlap -- that is, when
> subproblems share subsubproblems.
>
> 1. Characterize the structure of an [[optimization|optimal solution]].
> 2. Recursively define the value of an optimal solution.
> 3. Compute the value of an optimal solution, typically in a bottom-up fashion.
> 4. Construct an optimal solution from computed information.
- When to use dynamic programming?
  - Optimal substructure: an optimal solution to the problem incorporates
    optimal solutions to subproblems.
  - Overlapping subproblems (otherwise, consider a [[greedy|greedy algorithm]]);
    see the sketch after this list.
- Requirements
  - Subproblems must be _independent_, in that they do not share resources.
  - Subproblems must be _overlapping_, in that the same small subproblems are
    solved again and again.
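
As a concrete illustration, consider the [[rod-cutting|rod-cutting problem]] from the examples below. A minimal sketch of the naive recursion (in Python; the price table is made-up sample data) shows why overlap matters: the same subproblems are re-solved exponentially many times.

```python
# Naive recursive rod cutting: exponential time, because the subproblem
# for each remaining length is recomputed over and over.
def cut_rod(prices: list[int], n: int) -> int:
    """Maximum revenue for a rod of length n, where prices[i] is the
    price of a piece of length i + 1 (hypothetical sample data)."""
    if n == 0:
        return 0
    return max(prices[i] + cut_rod(prices, n - i - 1) for i in range(n))

prices = [1, 5, 8, 9, 10, 17, 17, 20, 24, 30]  # sample price table
print(cut_rod(prices, 4))  # -> 10
```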
## Solution
- Top-down method with _memoization_ (see the first sketch after this list)
  - Recursive procedure.
  - Store the result of each subproblem upon first encounter.
  - Some subproblems may never need to be solved at all.
- Bottom-up method (see the second sketch after this list)
  - Solve the subproblems in order of increasing size.
  - Suitable when the subproblems have a natural ordering by size.
  - May have smaller constant factors than the memoized method, since there
    is no recursion overhead.
  - Can exploit the known structure of the subproblems to speed up table
    access.
- Subproblem graph
  - A directed graph representing the dependencies between subproblems and
    sub-subproblems.
  - The running time of a dynamic program is typically linear in the number
    of vertices and edges of this graph.
  - Dynamic programming can be viewed as a [[dfs|depth-first search]] of this
    graph (see the third sketch after this list).
- To reconstruct an optimal solution, record not only the optimal value of
  each subproblem but also the choice that achieved it (see the last sketch
  after this list).
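
A minimal top-down sketch for rod cutting (Python is an assumption here; the memo dictionary stores each result upon first encounter, exactly the bookkeeping described above):

```python
# Top-down rod cutting with memoization: each subproblem is solved once
# and stored on first encounter; repeated calls become table lookups.
def memoized_cut_rod(prices: list[int], n: int,
                     memo: dict[int, int] | None = None) -> int:
    if memo is None:
        memo = {}
    if n in memo:                       # solved before: reuse stored result
        return memo[n]
    if n == 0:
        revenue = 0
    else:
        revenue = max(prices[i] + memoized_cut_rod(prices, n - i - 1, memo)
                      for i in range(n))
    memo[n] = revenue                   # store upon first encounter
    return revenue

prices = [1, 5, 8, 9, 10, 17, 17, 20, 24, 30]  # same sample prices
print(memoized_cut_rod(prices, 10))  # -> 30
```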
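The bottom-up counterpart fills the table in order of increasing subproblem size, so every dependency is already available when it is needed:

```python
# Bottom-up rod cutting: solve subproblems in increasing size so that
# every dependency is already in the table; O(n^2) time, O(n) space.
def bottom_up_cut_rod(prices: list[int], n: int) -> int:
    r = [0] * (n + 1)                   # r[j] = best revenue for length j
    for j in range(1, n + 1):           # subproblems in increasing size
        r[j] = max(prices[i] + r[j - i - 1] for i in range(j))
    return r[n]

prices = [1, 5, 8, 9, 10, 17, 17, 20, 24, 30]
print(bottom_up_cut_rod(prices, 10))  # -> 30
```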
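To make the subproblem graph concrete, the following sketch (a hypothetical helper, not from any library) records the dependency edges as the memoized recursion expands them; the visit order is a depth-first search, and the total work is proportional to the number of vertices plus edges:

```python
# Subproblem graph for rod cutting: vertex j depends on j-1, ..., 0.
# Recording edges during the memoized recursion shows that dynamic
# programming expands this graph in depth-first order; work is O(V + E).
def subproblem_graph(n: int) -> dict[int, list[int]]:
    edges: dict[int, list[int]] = {}
    def visit(j: int) -> None:
        if j in edges:                  # already expanded: a memoization hit
            return
        edges[j] = [j - i - 1 for i in range(j)]  # dependencies of j
        for k in edges[j]:
            visit(k)                    # depth-first descent into subproblems
    visit(n)
    return edges

g = subproblem_graph(4)
print(len(g), sum(len(v) for v in g.values()))  # -> 5 vertices, 10 edges
```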
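Finally, a sketch of reconstruction for rod cutting: store, for each subproblem, the first-piece size that achieved its optimal value, then replay those choices:

```python
# Extended bottom-up rod cutting: alongside each optimal value, record
# the first-piece size that achieved it, then replay those choices to
# reconstruct an optimal set of cuts, not just the optimal revenue.
def extended_bottom_up_cut_rod(prices: list[int],
                               n: int) -> tuple[int, list[int]]:
    r = [0] * (n + 1)                   # r[j] = optimal revenue for length j
    s = [0] * (n + 1)                   # s[j] = first-piece size achieving r[j]
    for j in range(1, n + 1):
        for i in range(j):
            candidate = prices[i] + r[j - i - 1]
            if candidate > r[j]:
                r[j], s[j] = candidate, i + 1
    total, cuts = r[n], []
    while n > 0:                        # replay the recorded choices
        cuts.append(s[n])
        n -= s[n]
    return total, cuts

prices = [1, 5, 8, 9, 10, 17, 17, 20, 24, 30]
print(extended_bottom_up_cut_rod(prices, 7))  # -> (18, [1, 6])
```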
## Examples
- [[rod-cutting|Rod-cutting problem]]
- [[matrix-chain|Matrix-chain multiplication]]
- [[lcs|Longest-common-subsequence problem]]
- [[optimal-bst|Optimal binary-search tree]]
- [[bitonic-tsp|Bitonic traveling salesman problem]]