Update for 26-02-22 22:45

Tyler Perkins 2022-02-26 22:45:01 -05:00
parent 9620f6f611
commit 5494b65eea
4 changed files with 24 additions and 2 deletions

View File

@@ -56,6 +56,7 @@ Different ways to store and operate on data, with differing efficiency
* [[randomized_algorithm|Randomized Algorithms]]
* [[genetic|Genetic Algorithms]]
* [[swarm|Swarm Intelligence]]
* [[machine_learning|Machine Learning]]
* [[neural|Neural Networks]]
== Common operations ==

View File

@@ -0,0 +1 @@
= Divide and conquer =

View File

@@ -8,11 +8,28 @@ have the following properties
2) An optimal solution can be formed from optimal solutions to the overlapping
subproblems derived from the original problem.
Generally, problems requiring a DP algo ask for the optimum value (min/max), or
the number of ways to do something *AND* future/larger decisions depend on
earlier decisions. If they do not depend on earlier decisions, then consider
a [[greedy_algorithm]].
There are two ways to implement a DP algo:
== Implementation ==
=== Bottom up (Tabulation) ===
For tabulation we start at the lowest subproblem and work our way up to the
desired solution. Generally bottom up is the fastest way to implement a DP
algo, as there is no overhead of recursion.
Usually implemented as iteration over the problem space.
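A minimal bottom-up sketch in Python, using the Fibonacci numbers purely as an
illustrative problem (the problem choice is an assumption, not part of the note):
{{{python
def fib_tabulation(n):
    # Base cases: the smallest subproblems are known directly.
    if n < 2:
        return n
    # Table holds the answer to every subproblem from 0 up to n.
    table = [0] * (n + 1)
    table[1] = 1
    # Iterate over the problem space, building each entry from smaller ones.
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_tabulation(10))  # 55
}}}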
=== Top down (Memoization) ===
For memoization we start at the problem we would like to solve, then descend
into lower and lower subproblems, using a system to store the results of
our computations as we descend. This ensures we do not do any unnecessary
computations.
Usually implemented as recursion.
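A matching top-down sketch of the same illustrative problem; here
functools.lru_cache stands in for the storage system, so no subproblem is
computed twice:
{{{python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    # Start at the problem we want and recurse into smaller subproblems;
    # the cache returns stored results instead of recomputing them.
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(10))  # 55
}}}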

View File

@@ -0,0 +1,3 @@
= Machine Learning =
Machine learning