Dynamic programming is a technique for optimizing solutions to problems whose structure involves repeated, identical computations.

For dynamic programming to work, it’s essential that the problem can be **broken into overlapping sub-problems** whose solutions combine into a solution for the original problem.

Dynamic programming requires us to store the answers to sub-problems so they can be retrieved as components of larger solutions. We memoize the solutions by storing them inside a data structure with efficient look-up.
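One way to sketch this idea is a small memoization helper. The `memoize` decorator below is an illustrative example (not from any particular library): it caches each sub-answer in a dictionary, which gives the efficient look-up described above.

```python
def memoize(fn):
    """Cache fn's results so each distinct input is computed only once."""
    cache = {}  # dictionary gives O(1) average-case look-up of stored sub-answers

    def wrapped(*args):
        if args not in cache:
            cache[args] = fn(*args)  # solve the sub-problem once, then store it
        return cache[args]

    return wrapped
```

Any function decorated with `memoize` will return cached answers on repeated calls instead of recomputing them.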

While dynamic programming can be tricky to master, it’s an essential tool for optimizing many useful algorithms. The key idea is to break the problem down, solve each sub-problem, and combine the sub-answers.

The difficulty lies in how the problem is broken apart, how the sub-answers are stored, and how they’re used to solve the original problem.

For Fibonacci, we stored the previous numbers so we did not need to recompute them.
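As a minimal sketch of that idea, a bottom-up Fibonacci only needs to keep the two most recently computed numbers around, so nothing is ever recomputed:

```python
def fib(n):
    """Return the n-th Fibonacci number (0-indexed) bottom-up."""
    prev, curr = 0, 1
    for _ in range(n):
        # Each step reuses the stored previous values instead of recursing.
        prev, curr = curr, prev + curr
    return prev
```

This runs in linear time, versus the exponential time of naive recursion.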

For Knapsack, we stored the best achievable value for every knapsack capacity. Then, whenever we considered adding a new item, we could efficiently retrieve the best option.
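A sketch of the 0/1 knapsack version of this idea (assuming items are given as weight/value pairs) keeps a table `best` indexed by capacity, exactly the "answer for every size knapsack" described above:

```python
def knapsack(items, capacity):
    """0/1 knapsack. items is a list of (weight, value) pairs.

    best[c] holds the best total value achievable with capacity c
    using the items considered so far.
    """
    best = [0] * (capacity + 1)
    for weight, value in items:
        # Iterate capacities high-to-low so each item is used at most once.
        for c in range(capacity, weight - 1, -1):
            best[c] = max(best[c], best[c - weight] + value)
    return best[capacity]
```

Adding each new item only requires looking up previously stored sub-answers, rather than re-enumerating item combinations.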