2022-02-27 03:30:01 +00:00
= Dynamic Programming =
Dynamic Programming is a paradigm where we systematically and efficiently
explore all possible solutions to a problem. Generally, problems suited to DP
have the following properties:
1) The problem can be broken down into overlapping subproblems
2) An optimal solution can be formed from optimal solutions to the overlapping
subproblems derived from the original problem.
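A tiny sketch of both properties, using Fibonacci as a stand-in example (not a
problem from this note): the naive recursion hits the same subproblems over and
over, and each answer is built from the answers to those subproblems.

```python
from collections import Counter

calls = Counter()  # how many times each subproblem gets solved

def fib(n):
    calls[n] += 1
    if n < 2:
        return n
    # fib(n) is assembled from the answers to the overlapping
    # subproblems fib(n-1) and fib(n-2)
    return fib(n - 1) + fib(n - 2)

fib(5)
print(calls)  # fib(1) alone is recomputed 5 times
```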
Generally, problems requiring a DP algo ask for the optimum value (min/max) or
the number of ways to do something, *AND* future/larger decisions depend on
earlier decisions. If they do not depend on earlier decisions, then consider
a [[greedy_algorithm]].
There are two ways to implement a DP algo:
== Implementation ==
=== Bottom up (Tabulation) ===
For tabulation we start at the lowest subproblem and work our way up to the
desired solution. Generally, bottom up is the fastest way to implement a DP
algo, as there is no overhead of recursion.
Usually implemented as iteration over the problem space.
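A minimal bottom-up sketch, again using Fibonacci as the stand-in problem:
iterate from the smallest subproblems upward, filling a table until the
desired answer is reached.

```python
def fib_tab(n):
    if n < 2:
        return n
    # table[i] holds the answer to subproblem i (here: fib(i))
    table = [0] * (n + 1)
    table[1] = 1
    # iterate the problem space from the bottom up; no recursion needed
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_tab(10))  # → 55
```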
=== Top down (Memoization) ===
For memoization we start at the problem we would like to solve, then descend
into lower and lower subproblems, using a system to store the results of
our computations as we descend. This ensures we do not do any unnecessary
computations.
Usually implemented as recursion.
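The same stand-in Fibonacci problem, top down: start at the value we want,
recurse into smaller subproblems, and cache each result (here in a plain dict)
so nothing is computed twice.

```python
def fib_memo(n, cache=None):
    if cache is None:
        cache = {}          # stores results of subproblems already solved
    if n in cache:
        return cache[n]     # skip the unnecessary recomputation
    if n < 2:
        return n
    cache[n] = fib_memo(n - 1, cache) + fib_memo(n - 2, cache)
    return cache[n]

print(fib_memo(50))  # → 12586269025; the naive recursion would be impractically slow here
```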