Dynamic programming has been a popular problem-solving technique since its invention. The method is powerful and efficient, yet it is not easy to master. Dynamic programming is one of the most valuable skills in a coding interview. Why? Because DP problems come in many varieties, and it is hard to spot the pattern and pick the right approach for each one. In this article from AlgoMonster, I will explain dynamic programming in more depth.
Definition of dynamic programming
What is dynamic programming? How can we explain it? In truth, the technique is not strictly defined. Many people describe it as an algorithmic technique built on a starting state for the problem and a recurrence relation between successive states. A state is usually a sub-solution: a partial solution, or a solution based on a subset of the input. The states are then built up one at a time, using information from the previous states.
Prerequisites to identify a dynamic programming problem
DP problems share two key features.
- Overlapping subproblems
If the given problem can be broken down into subproblems that repeat, so that a recursive algorithm ends up solving the same subproblem over and over, we say the problem has overlapping subproblems.
- Optimal substructure
If the optimal solution to a given problem can be built from optimal solutions to its subproblems, then that problem has the optimal substructure property.
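To make these two properties concrete, consider the classic Fibonacci recursion. This is my own illustrative example, not one from the original text:

```python
def fib(n):
    # Naive recursion: fib(n-2), fib(n-3), ... are recomputed many
    # times (overlapping subproblems), and fib(n) is assembled from
    # the correct answers to fib(n-1) and fib(n-2) (optimal substructure).
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```

Every call spawns two more calls, so the same small subproblems are solved exponentially many times; that redundancy is exactly what DP removes.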
Two approaches to DP
Top-down is the natural way to tackle a problem: break it into smaller parts, recursively search for their solutions, and save (memoize) each answer along the way. When the same sub-part comes up again, you reuse the stored result instead of recalculating it.
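As a minimal sketch (my own example), here is a top-down Fibonacci that stores each answer the first time it is computed:

```python
def fib_top_down(n, memo=None):
    # Recurse downward, caching each result so every subproblem
    # is solved at most once.
    if memo is None:
        memo = {}
    if n < 2:
        return n
    if n not in memo:
        memo[n] = fib_top_down(n - 1, memo) + fib_top_down(n - 2, memo)
    return memo[n]
```

With the cache, the running time drops from exponential to linear in n.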
The reverse approach is bottom-up. In other words, you start with the smallest subproblems and end with the largest. If you want to become president of a country, for example, you might start as a mayor, then govern a state, and later aim for the vice presidency or presidency. You keep moving from smaller steps to bigger ones until you eventually reach your goal.
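As an illustrative sketch (again my example), a bottom-up Fibonacci builds answers from the smallest case upward, with no recursion at all:

```python
def fib_bottom_up(n):
    # Start from the base cases and iterate upward, keeping only
    # the two most recent values.
    if n < 2:
        return n
    prev, cur = 0, 1
    for _ in range(2, n + 1):
        prev, cur = cur, prev + cur
    return cur
```

Bottom-up often uses less memory than top-down because you can discard states once no later state depends on them, as the two-variable window does here.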
The method of memoization
Some programming languages can automatically record the results of a function call made with a particular set of arguments; this behavior is related to "call-by-need" evaluation. You can achieve it in languages such as Common Lisp. Others, such as tabled Prolog and J, include automatic memoization directly; J supports it with the M. adverb. In every case, this only works for referentially transparent functions. Memoization is also available in term-rewriting languages such as Wolfram Language. Finally, programmers use memoization as a general design pattern.
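The languages above are only examples. As an illustration in Python (a language the original does not mention), the standard library offers the same facility as a decorator:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Results are cached per argument set; this is safe because fib
    # is referentially transparent (same input, same output).
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```

The decorator turns the naive exponential recursion into a linear-time top-down DP without changing the function body.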
Benefits of dynamic programming
Dynamic programming offers many benefits when solving problems.
Execution is efficient
DP simplifies program implementation. Each sub-problem is computed once and stored, so execution never waits for a result to be recomputed, and later steps can trust the stored answers without re-deriving or re-verifying them.
Solutions are optimal
Dynamic programming is also beneficial when it comes to designing or finding the optimal solution to a problem. Where many candidate solutions exist, DP systematically compares them and keeps the best one, and programmers can use that optimal solution immediately. This is in contrast with ad-hoc algorithms that require users to verify afterwards whether the desired output was actually achieved.
It reduces time complexity
Dynamic programming also optimizes time complexity. By storing sub-results, programmers cut the number of repeated recursive calls, which lowers a program's overall time complexity, often from exponential to polynomial, and yields faster execution. For applications with heavy calculation, this can have a significant effect on both programming time and runtime performance.
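As a hedged sketch of this speed-up (my own example), consider counting monotone paths through an m-by-n grid: the naive recursion does exponential work, while a one-row DP table does O(m·n) work:

```python
def unique_paths(m, n):
    # Number of right/down paths from the top-left to the bottom-right
    # cell of an m x n grid. Each cell's count is the sum of the counts
    # from the cell above and the cell to its left; we keep one row.
    row = [1] * n
    for _ in range(1, m):
        for j in range(1, n):
            row[j] += row[j - 1]
    return row[-1]
```

The same answer could be computed by recursing on the two neighboring cells, but that recursion would revisit each cell exponentially many times; the table visits each cell once.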
DP and greedy algorithms
A common way to search for an optimal solution is the greedy algorithm. It breaks the process into several steps, and at every step it applies the greedy principle: choose the best local option for the current state, hoping the final accumulated result will also be optimal. It is like a greedy person who grabs the best he can in every situation: he doesn't see the bigger picture, think about the long term, or imagine what things will look like at the end. He only seeks to maximize his immediate benefit.
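To see how the greedy principle can fail where DP succeeds, here is a coin-change sketch (my own illustrative example, using coin values 1, 3, and 4):

```python
def greedy_coins(coins, amount):
    # Always take the largest coin that still fits -- locally best,
    # but not always globally optimal.
    count = 0
    for c in sorted(coins, reverse=True):
        count += amount // c
        amount %= c
    return count if amount == 0 else -1

def dp_coins(coins, amount):
    # Minimum number of coins via DP over every amount up to the target.
    INF = float("inf")
    best = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1
    return best[amount] if best[amount] != INF else -1
```

For amount 6 the greedy choice takes 4 first and ends with three coins (4+1+1), while DP finds the two-coin answer (3+3): the locally best step ruined the global optimum.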
Dynamic programming and divide and conquer
Politics and sociology know the term "divide and conquer", originally used by Roman rulers as "divide et impera". The strategy splits something large into sub-elements so that each piece is weaker or easier to handle.
Dynamic programming is a popular problem-solving algorithm in the same spirit: like divide and conquer, it breaks the problem into smaller parts. The main difference is that in dynamic programming the subproblems are interdependent. To reduce running time, we save each sub-problem's outcome and reference it again whenever the same or an overlapping subproblem reappears.
The overall comparison:
- Divide and conquer: subproblems are independent; each is solved once and the results are combined (e.g. merge sort).
- Dynamic programming: subproblems overlap; results are stored and reused across the computation (e.g. Fibonacci, shortest paths).
10 algorithms that use dynamic programming
- Floyd’s all-pairs shortest path algorithm
- Optimizing the order for chain matrix multiplication
- Recursive least-squares method
- Beat tracking in music information retrieval
- Adaptive-critic training strategy for artificial neural networks
- Stereo algorithms for solving the correspondence problem used in stereo vision
- Seam carving (content-aware image resizing)
- The Bellman-Ford algorithm for finding the shortest distance in a graph
- Some approximate solution methods for the linear search problem
- Kadane’s algorithm for the maximum subarray problem
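As a taste of the list above, Kadane's algorithm for the maximum subarray problem fits in a few lines; this is the standard formulation:

```python
def max_subarray(nums):
    # best: maximum sum of any subarray seen so far.
    # cur: maximum sum of a subarray ending at the current element,
    #      the DP state -- either extend the previous run or restart.
    best = cur = nums[0]
    for x in nums[1:]:
        cur = max(x, cur + x)
        best = max(best, cur)
    return best
```

Each element is visited once, so the algorithm runs in O(n) time and O(1) space.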
Dynamic programming is probably the most important technique to master for algorithm competitions, and the key to mastering it is practice. Find as many common DP problems as you can and work through them. For more in-depth material, check out algo.monster.