Asymptotic Notations simplified | Big Oh | Understanding time and space complexity | Efficiency

Published: 01 November 2022
on the channel: Shaheer Shukur

#AsymptoticNotations | #BigOh | Understanding time and space #complexity | Efficiency

When we write a program, we must make sure that the program is efficient, i.e., it must take the least possible time to complete and consume the least possible memory space.
Time complexity and space complexity depend on only one thing: what the programmer writes in the code. So it is the sole responsibility of the programmer to take care of both.
Now, how do we measure the time and space consumed by the program?
One way of doing it is to run the program and note down the time and space it consumes.
But when we run the same program on other devices, the measured time and space can change depending on processor speed and the availability of resources.
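For example, here is a minimal timing sketch in Python (the function and input size are just placeholders for whatever program we want to measure):

```python
import time

def sum_all(values):
    total = 0
    for v in values:
        total += v
    return total

data = list(range(1_000_000))

start = time.perf_counter()            # clock before the run
result = sum_all(data)
elapsed = time.perf_counter() - start  # clock after the run
print(f"sum = {result}, took {elapsed:.4f} s")  # this number varies from machine to machine
```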

Here comes the importance of asymptotic analysis and notations.
Instead of calculating the absolute values of time and space, we use a mathematical notation to represent the growth of time and memory usage.
We take a graph in which the x-axis represents the input size and the y-axis represents the time or space consumed.
On this graph, we can represent the time taken by the program in the best-case, worst-case and average-case scenarios. But most of the time, we are interested in the worst case, because if the worst case is taken care of, we need not be concerned about the best and average cases. In asymptotic notation, the best case is commonly described with Big Omega, the average case with Big Theta, and the worst case with Big O. So we will focus on Big O.

Now, let's take a program whose functionality is to return the sum of the first and last elements of an array. Since we can directly access the first and last elements irrespective of the size of the array, this program runs in constant time, i.e., its time complexity is Big O(1).
So the graph of Big O(1) looks like this: a flat, horizontal line.
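A minimal sketch of such a constant-time program in Python (the function name is my own):

```python
def sum_first_and_last(arr):
    # Indexing a list by position takes the same time regardless of its length,
    # so this function does a fixed amount of work: Big O(1).
    return arr[0] + arr[-1]

print(sum_first_and_last([3, 7, 1, 9]))  # 12, same cost for any array size
```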

Now let's take another program whose functionality is linear search, i.e., searching for a given element in an array. In the worst case, the element we are searching for is at the end of the array, so we have to go through all n elements before we finally find it. This program runs in linear time, i.e., its time complexity is Big O(n).
So the graph of Big O(n) looks like this: a straight line rising steadily with the input size.
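A sketch of linear search in Python, assuming we return the index of the match (or -1 if it is absent):

```python
def linear_search(arr, target):
    # Worst case: the target is the last element (or not present at all),
    # so the loop examines all n elements -> Big O(n).
    for i, value in enumerate(arr):
        if value == target:
            return i
    return -1

print(linear_search([4, 8, 15, 16, 23, 42], 42))  # 5, found on the final comparison
```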

Now let's take another program whose functionality is binary search, i.e., searching for an element in a sorted array. Since the array is sorted, we always look at the middle element first, and if the target is not found there, we continue the search in the left or right half depending on the value we are searching for. So after each iteration, we halve the array. In the first iteration, the array under consideration is of size n. In the second iteration, it is of size n/2. In the third iteration, n/2 is divided by 2 again, i.e., n/2^2. Likewise, in the kth iteration, when we have finally narrowed down to the one element we are looking for, the array under consideration is of size n/2^k. Setting that size to 1 gives 1 = n/2^k, which implies n = 2^k. Taking logarithms, log n = log 2^k = k log 2; with base-2 logarithms, log 2 = 1, so k = log n. So in the worst case we need about log n iterations, the time complexity is written as Big O(log n), and its graph looks like this: a curve that rises quickly at first and then flattens out.
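A sketch of binary search in Python, following the halving idea described above:

```python
def binary_search(arr, target):
    # arr must be sorted. Every iteration halves the range under consideration,
    # so at most about log2(n) iterations are needed -> Big O(log n).
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1    # target, if present, lies in the right half
        else:
            high = mid - 1   # target, if present, lies in the left half
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 9))  # 4
```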

Now let's take another program whose functionality is to find all possible ways of selecting fruits one after the other from an array, i.e., the permutations of the fruits. If there are n fruits, there are n! ways of selecting them. The time complexity of this program is Big O(n!), and its graph looks like this: it shoots up steeply even for small inputs.
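A sketch in Python using the standard library's permutation generator (the fruit list is illustrative):

```python
from itertools import permutations

fruits = ["apple", "banana", "cherry"]

# n fruits have n! orderings, so generating them all is Big O(n!) work.
for order in permutations(fruits):
    print(order)
# 3! = 6 orderings here; with 10 fruits it would already be 3,628,800.
```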

If a complexity falls in the green area, we can say the program is efficient; if it falls in the orange area, the program is not that efficient but still acceptable; and if it falls in the red area, the program is not efficient.
Also, depending on the application of the program, we can decide whether we need better time complexity or better space complexity. In most cases, it is not practical to attain efficiency in both time and space. So what developers usually do is attain efficiency in time by sacrificing space, or attain efficiency in space by sacrificing time.
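As one illustration of that trade-off (my own example, not from the video): a naive Fibonacci uses almost no extra memory but exponential time, while a cached version spends memory to buy speed:

```python
from functools import lru_cache

def fib_slow(n):
    # Recomputes the same subproblems again and again:
    # exponential time, but almost no extra memory.
    if n < 2:
        return n
    return fib_slow(n - 1) + fib_slow(n - 2)

@lru_cache(maxsize=None)
def fib_fast(n):
    # Keeps every computed value in a cache:
    # Big O(n) time at the cost of Big O(n) extra space.
    if n < 2:
        return n
    return fib_fast(n - 1) + fib_fast(n - 2)

print(fib_fast(35))  # returns instantly; fib_slow(35) takes noticeably longer
```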

If there are multiple terms in the complexity expression of a program, always keep the highest-order term and ignore the rest, i.e., we pick the worst among all.
If there is a constant multiplier on the input variable, we can ignore that constant, because it is the input size that decides the shape of the complexity graph.
If there is a constant added to the input variable, we can ignore that constant too, because when the input variable becomes large, that constant becomes negligible.
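Putting these three rules together on a hypothetical function:

```python
def example(arr):
    n = len(arr)
    for i in range(n):           # n iterations...
        for j in range(n):       # ...each doing n steps -> n^2 steps
            pass
    for k in range(3 * n + 5):   # plus 3n + 5 more steps
        pass

# Total work is about n^2 + 3n + 5. Keep the highest-order term (n^2),
# drop the multiplier 3 and the added constant 5: the complexity is Big O(n^2).
```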