Dynamic Fibonacci

If we want to understand the true essence of dynamic programming, we should consider the simple example of the Fibonacci series. Using the conventional recursive algorithm, the time complexity of the solution is O(2^n).

 def fibonacci(n):
     # Naive recursion: fibonacci(n-1) and fibonacci(n-2) recompute
     # the same subproblems again and again, which costs O(2^n) time.
     if n <= 1:
         return n
     return fibonacci(n-1) + fibonacci(n-2)

But using dynamic programming we can avoid the redundant function calls and reduce the time complexity to O(n).

class Main {
    int[] f = new int[100];

    public int fibonacci(int n) {
        f[0] = f[1] = 1;
        // each value is computed once from the stored sub-solutions
        for (int i = 2; i <= n; i++) {
            f[i] = f[i-1] + f[i-2];
        }
        return f[n];
    }
}

Greedy Algorithm

The greedy approach gives a feasible solution to a problem where multiple solutions are possible. It does not guarantee the optimal solution, but when the feasible solution it finds happens to be optimal, we can accept it.

Let’s consider the coin-changing problem, which can be solved using a greedy algorithm. Assume we are given a set of coin denominations and a sum to be changed. Then we can do so as follows:

import math

def greedy(coins, target):
    # coins must be sorted in descending order
    count = 0
    i = 0
    while target > 0:
        times = math.floor(target / coins[i])
        remaining = target - (coins[i] * times)
        target = remaining
        print(coins[i], " needs: ", times, " times.")
        i += 1
        count += times
    return count
coins = [10,5,2,1]
print("\ntotal coins needed: ", greedy(coins, 31))
10  needs:  3  times.
5  needs:  0  times.
2  needs:  0  times.
1  needs:  1  times.

total coins needed:  4

As we can see, we need 4 coins in total to make 31 from the coin list [10,5,2,1]. In this case the greedy answer is also optimal, but if we are given [4,3,1] for 6, the greedy algorithm uses 3 coins (4+1+1), while the optimal solution needs only 2 (3+3). So the greedy algorithm has some drawbacks, which can be fixed using dynamic programming.
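The fix is a bottom-up dynamic-programming table over every sub-sum up to the target; here is a minimal sketch (the function name min_coins and the table layout are my own choices, not from the original):

```python
def min_coins(coins, target):
    # dp[t] holds the minimum number of coins that sum to t
    INF = float('inf')
    dp = [0] + [INF] * target
    for t in range(1, target + 1):
        for c in coins:
            if c <= t and dp[t - c] + 1 < dp[t]:
                dp[t] = dp[t - c] + 1
    return dp[target] if dp[target] != INF else -1

print(min_coins([4, 3, 1], 6))       # 2 (3 + 3), where greedy needs 3
print(min_coins([10, 5, 2, 1], 31))  # 4, same as greedy
```

Unlike the greedy loop, this considers every coin at every sub-sum, so it always finds the minimum if one exists.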

Longest Subsequence of a String

Here, the longest subsequence of a string means the longest run of characters that follow the alphabet in order, starting from 'a'. For example, if “abcdggabcdeggabcdefghifklmabc” is the string, then its longest such subsequence is “abcdefghi”. The following code shows how to find it using Java.

import java.util.ArrayList;
import java.util.List;

public class LongestSubsequence {
    StringBuffer sub_sequ = new StringBuffer();
    List<String> sub_sequences = new ArrayList<>();
    int index = 0;

    public List<String> sequence(String s) {
        for (int i = 0; i < s.length(); i++) {
            // walk a run of characters that follow the alphabet from 'a'
            for (char j = 'a'; i < s.length() && j <= 'z'; j++, i++) {
                if (s.charAt(i) == j) {
                    sub_sequ.append(j);
                } else {
                    break;
                }
            }
            sub_sequences.add(index++, sub_sequ.toString());
            sub_sequ.setLength(0);  // reset for the next run
        }
        return sub_sequences;
    }

    public String longestSubsequence(List<String> sub_sequences) {
        // sort the collected runs by length, then take the longest
        sub_sequences.sort((a, b) -> a.length() - b.length());
        return sub_sequences.get(sub_sequences.size() - 1);
    }
}

Quicksort with Lomuto partition scheme

Quicksort is an efficient sorting algorithm that follows the divide-and-conquer strategy. Partitioning is the center of attention in this algorithm: it is the process of reordering a range of the list around a pivot, a key value that divides the data into two sections in each iteration. Quicksort then sorts each section by partitioning it again.

Partitioning: reorder the array so that all elements with values less than the pivot come before the pivot, while all elements with values greater than the pivot come after it (equal values can go either way). After this partitioning, the pivot is in its final position. This is called the partition operation. – quicksort

The Lomuto partitioning scheme is one of the two well-known partitioning schemes; the other is the Hoare partitioning scheme. In this article we will see how quicksort can be implemented using the Lomuto partitioning scheme.

def quicksort(A, lo, hi):
    if lo < hi:
        p = partition(A, lo, hi)
        quicksort(A, lo, p-1)
        quicksort(A, p+1, hi)
    return A

def partition(A, lo, hi):
    pivot = A[hi]              # Lomuto scheme: the last element is the pivot
    i = lo
    for j in range(lo, hi):
        if A[j] < pivot:       # move elements smaller than the pivot to the front
            A[j], A[i] = A[i], A[j]
            i += 1
    A[hi], A[i] = A[i], A[hi]  # put the pivot into its final position
    return i

A = [0,32,11,0,67,34]
print(quicksort(A, 0, len(A)-1))
[0, 0, 11, 32, 34, 67]       

Time complexity:

Worst-case: O(n^2); Average-case: O(n log n); Best-case: O(n log n)
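To see where the O(n^2) worst case comes from: with Lomuto's last-element pivot, already-sorted input leaves the pivot at the very end of every range, so each recursion level shrinks the problem by only one element. A small sketch that counts comparisons (the stats counter is my own addition, not part of the original code):

```python
def partition(A, lo, hi, stats):
    # Lomuto partition, instrumented to count comparisons
    pivot = A[hi]
    i = lo
    for j in range(lo, hi):
        stats["comparisons"] += 1
        if A[j] < pivot:
            A[j], A[i] = A[i], A[j]
            i += 1
    A[hi], A[i] = A[i], A[hi]
    return i

def quicksort(A, lo, hi, stats):
    if lo < hi:
        p = partition(A, lo, hi, stats)
        quicksort(A, lo, p - 1, stats)
        quicksort(A, p + 1, hi, stats)

n = 100
stats = {"comparisons": 0}
quicksort(list(range(n)), 0, n - 1, stats)  # already sorted: worst case
print(stats["comparisons"])  # 4950 == n*(n-1)/2
```

On sorted input every element is smaller than the last-element pivot, so each partition scans its whole range for nothing, giving (n-1) + (n-2) + ... + 1 = n(n-1)/2 comparisons.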

Why dynamic programming? & the key notes

Dynamic programming is one of the most influential paradigms in algorithm design. It solves several key problems in computing.

Why dynamic programming?

  • To reduce redundant function calls (improving both time and space complexity).
  • To break the whole problem into sub-problems (only when the sub-problems have the same structure as the whole problem, e.g. the Fibonacci series, where each call does the same work).
  • To re-use the solutions of those sub-problems.
  • Anything else!?

Key Points for Dynamic Programming:

  • Recursion
  • Memoization (using the sub-solutions)
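These two key points combine into a short sketch (using Python's standard functools.lru_cache for the memoization):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fibonacci(n):
    # Each fibonacci(k) is computed once and cached, so the
    # O(2^n) recursion collapses to O(n) distinct calls.
    if n <= 1:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(50))  # 12586269025
```

The recursion is written exactly like the naive version; the memoization alone turns it into dynamic programming.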

Further reading: dynamic programming