Monday, June 1, 2015

Table of Big-O - Time Complexities of Algorithms

Now that you have learned about the time complexity of algorithms, how to calculate it, and the Big-O notation, and before we present an exhaustive table of various well-known algorithms with their Big-O complexities, we will show you approximate completion times for algorithms.
The idea is that if you were to run an implementation of an algorithm on your computer with an input of size N, it would take, say, C*N^2 seconds for an O(N^2) algorithm, where C is some constant that doesn't change with the size of the input but depends on the platform.

Approximate completion time for algorithms, N = 100
O(log(N))      10^-7 seconds
O(N)           10^-6 seconds
O(N*log(N))    10^-5 seconds
O(N^2)         10^-4 seconds
O(N^6)         3 minutes
O(2^N)         10^14 years
O(N!)          10^142 years


Array Sorting Algorithms

Algorithm        Best           Average          Worst
Quicksort        O(n log(n))    O(n log(n))      O(n^2)
Mergesort        O(n log(n))    O(n log(n))      O(n log(n))
Timsort          O(n)           O(n log(n))      O(n log(n))
Heapsort         O(n log(n))    O(n log(n))      O(n log(n))
Bubble Sort      O(n)           O(n^2)           O(n^2)
Insertion Sort   O(n)           O(n^2)           O(n^2)
Selection Sort   O(n^2)         O(n^2)           O(n^2)
Shell Sort       O(n log(n))    O(n (log(n))^2)  O(n (log(n))^2)
Bucket Sort      O(n+k)         O(n+k)           O(n^2)
Radix Sort       O(nk)          O(nk)            O(nk)

Data Structure Operations

                      Average                                       Worst
Data Structure        Access     Search     Insertion  Deletion    Access     Search     Insertion  Deletion
Array                 O(1)       O(n)       O(n)       O(n)        O(1)       O(n)       O(n)       O(n)
Stack                 O(n)       O(n)       O(1)       O(1)        O(n)       O(n)       O(1)       O(1)
Singly-Linked List    O(n)       O(n)       O(1)       O(1)        O(n)       O(n)       O(1)       O(1)
Doubly-Linked List    O(n)       O(n)       O(1)       O(1)        O(n)       O(n)       O(1)       O(1)
Skip List             O(log(n))  O(log(n))  O(log(n))  O(log(n))   O(n)       O(n)       O(n)       O(n)
Hash Table            -          O(1)       O(1)       O(1)        -          O(n)       O(n)       O(n)
Binary Search Tree    O(log(n))  O(log(n))  O(log(n))  O(log(n))   O(n)       O(n)       O(n)       O(n)
Cartesian Tree        -          O(log(n))  O(log(n))  O(log(n))   -          O(n)       O(n)       O(n)
B-Tree                O(log(n))  O(log(n))  O(log(n))  O(log(n))   O(log(n))  O(log(n))  O(log(n))  O(log(n))
Red-Black Tree        O(log(n))  O(log(n))  O(log(n))  O(log(n))   O(log(n))  O(log(n))  O(log(n))  O(log(n))
Splay Tree            -          O(log(n))  O(log(n))  O(log(n))   -          O(log(n))  O(log(n))  O(log(n))
AVL Tree              O(log(n))  O(log(n))  O(log(n))  O(log(n))   O(log(n))  O(log(n))  O(log(n))  O(log(n))

Graph Operations

Representation      Storage       Add Vertex    Add Edge      Remove Vertex  Remove Edge   Query
Adjacency list      O(|V|+|E|)    O(1)          O(1)          O(|V|+|E|)     O(|E|)        O(|V|)
Incidence list      O(|V|+|E|)    O(1)          O(1)          O(|E|)         O(|E|)        O(|E|)
Adjacency matrix    O(|V|^2)      O(|V|^2)      O(1)          O(|V|^2)       O(1)          O(1)
Incidence matrix    O(|V|·|E|)    O(|V|·|E|)    O(|V|·|E|)    O(|V|·|E|)     O(|V|·|E|)    O(|E|)

Heap Operations

Type                    Heapify   Find Max   Extract Max   Increase Key   Insert      Delete      Merge
Linked List (sorted)    -         O(1)       O(1)          O(n)           O(n)        O(1)        O(m+n)
Linked List (unsorted)  -         O(n)       O(n)          O(1)           O(1)        O(1)        O(1)
Binary Heap             O(n)      O(1)       O(log(n))     O(log(n))     O(log(n))   O(log(n))   O(m+n)
Binomial Heap           -         O(1)       O(log(n))     O(log(n))     O(1)        O(log(n))   O(log(n))
Fibonacci Heap          -         O(1)       O(log(n))     O(1)          O(1)        O(log(n))   O(1)

Friday, May 29, 2015

Runtime Analysis Notations and Time Complexity of Algorithms

Runtime Analysis
There are two aspects to writing algorithms: one is correctness and the other is efficiency. Runtime analysis evaluates the efficiency of an algorithm, that is, how many resources it uses. Sometimes it is easy to solve a problem with some algorithm; however, if the algorithm is too slow, you are likely to lose the competition.

Time Complexity
Time complexity signifies the total time required by a program to run to completion. It quantifies the amount of time taken by an algorithm to run as a function of the length of the input.

The time complexity of an algorithm is most commonly expressed using big O notation, which excludes coefficients and lower-order terms. When expressed this way, the time complexity is said to be described asymptotically, i.e., as the input size goes to infinity. For example, let's see how we simplify 3N + 2 machine instructions to describe this as just O(N).

Why do we remove the 3 and the 2?

We want to know the performance of the algorithm as N becomes large, so let's consider the relative influence of the two terms 3N and 2 for a large N, say a million.

Then the first term is 3 million and the second term is only 2. For this reason, we drop all but the largest term for large N.

So, now we have gone from 3N + 2 to 3N.

Traditionally, we are only interested in performance up to constant factors, which means that we don't really care if there is some constant multiple of difference in performance when N is large. (The unit of 3N is not well defined in the first place anyway.) So we can multiply or divide by a constant factor to get to the simplest expression.

So 3N becomes just N.
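
To make this concrete, here is a minimal C# sketch (my own illustration; the instruction counting is rough and informal): a loop that performs about three operations per iteration plus two setup operations, i.e. roughly 3N + 2 instructions in total, which we describe as just O(N).

        // Sums an array of length N. Rough count: 2 setup operations
        // (sum = 0 and i = 0), then about 3 operations per iteration
        // (compare i < N, add a[i] to sum, increment i): ~3N + 2 in total.
        public static int Sum(int[] a)
        {
            int sum = 0;
            for (int i = 0; i < a.Length; i++)
            {
                sum += a[i];
            }
            return sum;
        }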

Types of Notations for Time Complexity
  • Big Oh denotes "fewer than or the same as" <expression> iterations.
  • Big Omega denotes "more than or the same as" <expression> iterations.
  • Big Theta denotes "the same as" <expression> iterations.
  • Little Oh denotes "fewer than" <expression> iterations.
  • Little Omega denotes "more than" <expression> iterations.

Understanding Notations of Time Complexity with Example
  • O(expression) is the set of functions that grow slower than or at the same rate as expression.
  • Omega(expression) is the set of functions that grow faster than or at the same rate as expression.
  • Theta(expression) consists of all the functions that lie in both O(expression) and Omega(expression).

Suppose you've calculated that an algorithm takes f(n) operations, where,

f(n) = 3*n^2 + 2*n + 4. // n^2 means square of n

Since this polynomial grows at the same rate as n^2, you could say that the function f lies in the set Theta(n^2). (It also lies in the sets O(n^2) and Omega(n^2) for the same reason.)

The simplest explanation is that Theta denotes "the same as" the expression. Since f(n) grows at the same rate as n^2, its time complexity is best represented as Theta(n^2).
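
To make this precise, here is a short worked bound (my own addition, using the standard constant-witness definition of Theta). For all n >= 1,

3*n^2 <= 3*n^2 + 2*n + 4 <= 3*n^2 + 2*n^2 + 4*n^2 = 9*n^2,

so with c1 = 3, c2 = 9 and n0 = 1 we have c1*n^2 <= f(n) <= c2*n^2 for all n >= n0, which is exactly what f(n) = Theta(n^2) means.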

Ashish Jain


Name : Ashish Jain
Age : 31 years.
Hobbies : Problem solving, Photography, Travelling

About me : Currently, I'm working with the world's second largest banking organization as a web developer, helping the organization with my problem solving and development skills. My total work experience comes to 100+ months, and I consider problem solving the main ingredient in any kind of development (be it human or machine).

Academic Qualification : Master of Computer Applications, B.Sc. (Mathematics)

Technical Qualification : Microsoft Certified Professional, Microsoft Certified Application Specialist (Excel)

Why this blog : Cocktails like LIIT get me sloshed, and so do C# and problem solving. The blog is to treat the hangover ;)

Monday, May 25, 2015

Sumit

Hi Guys,

I am Sumit, 25 years young, and I love programming.
I am currently working as a software developer in a respectable organization. The whole idea behind writing this blog is to improve my programming skills and to contribute to the community by taking a zero programmer to a naive, then average, and finally awesome level. I believe that in the 21st century, learning a programming language is as important as learning one's mother tongue.

Cheeeeerrssssssss Guys!!!

Keep Rocking...

Programming Basic Mathematics Concepts

Basic Mathematics:


Mathematics is almost synonymous with computers. The word "computer" itself comes from "compute", which means to calculate.

In every competitive programming contest, you will see at least one problem on basic mathematics. Such problems are easy to crack in very little time with almost 100% accuracy; of course, one must have the maths concepts clear in one's mind.

So let's start....

In this post, we will learn about Number Theory : Prime Numbers:

What are prime numbers?


A natural number from {2, 3, 4, . . .} is considered prime if it is divisible only by 1 and itself. The first (and the only even) prime is 2. The next prime numbers are 3, 5, 7, 11, 13, 17, 19, 23, 29, . . . , and there are infinitely many more.

Good to remember stats:
  • There are 25 primes in the range [0..100]
  • 168 primes in the range [0..1000]
First Program:

Q) Test whether a given natural number is prime or not.


     Method Signature:  bool isPrime(int n) {}

Soln.)
       i) The naive approach is to test by the definition of a prime number:

            public static bool isPrime(int n)
            {
                if (n < 2) return false;   // 0 and 1 are not prime
                for (int i = 2; i < n; i++)
                {
                    if (n % i == 0)
                    {
                        return false;      // found a divisor, stop early
                    }
                }
                return true;
            }

This approach runs in O(n) in terms of the number of divisions. The performance can be improved.

ii) The first improvement is to test whether N is divisible by a divisor ∈ [2 . . . √N], i.e., we stop when the divisor is already greater than √N. This is because if N is divisible by p, then N = p × q. If q were smaller than p, then q or a prime factor of q would have divided N earlier. This approach runs in O(√N).
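
A minimal C# sketch of this improvement (my own code, not the author's; I use (long)i * i <= n rather than computing √N, to avoid floating-point rounding and integer overflow):

        public static bool isPrime(int n)
        {
            if (n < 2) return false;            // 0 and 1 are not prime
            // Test divisors 2 .. sqrt(n); the cast to long keeps i * i
            // from overflowing for n close to int.MaxValue.
            for (int i = 2; (long)i * i <= n; i++)
            {
                if (n % i == 0)
                {
                    return false;
                }
            }
            return true;
        }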

iii) The second improvement is to test whether N is divisible by a divisor ∈ {3, 5, 7, . . . , √N}, i.e., we test only odd numbers up to √N. This is because there is only one even prime number, namely 2, which can be tested separately. This is O(√N/2), which is still O(√N).
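
A sketch of this variant, with 2 handled separately as described (again my own code, under the same assumptions as above):

        public static bool isPrime(int n)
        {
            if (n < 2) return false;
            if (n == 2) return true;            // the only even prime
            if (n % 2 == 0) return false;       // other even numbers are composite
            // Test only odd divisors 3, 5, 7, ... up to sqrt(n).
            for (int i = 3; (long)i * i <= n; i += 2)
            {
                if (n % i == 0)
                {
                    return false;
                }
            }
            return true;
        }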

iv) The third improvement is to test whether N is divisible by the prime divisors ≤ √N. This is because if a prime number X cannot divide N, then there is no point testing whether multiples of X divide N. This is faster than O(√N): it is about O(#primes ≤ √N). For example, there are 500 odd numbers in [1 . . . √1000000], but only 168 primes in the same range. The number of primes ≤ M, denoted by π(M), is bounded by O(M/(ln(M) − 1)), so the complexity of this prime-testing function is about O(√N/ln(√N)).

To implement solution (iv), we need a fast algorithm to generate the list of prime numbers, which we will learn in my next post, and then I will give you the code.
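
In the meantime, here is a rough sketch of what (iv) could look like, assuming an ascending array smallPrimes containing at least all primes p with p*p <= n is already available (smallPrimes is a hypothetical name used only for illustration):

        // Trial division by precomputed primes only (sketch).
        public static bool isPrime(int n, int[] smallPrimes)
        {
            if (n < 2) return false;
            foreach (int p in smallPrimes)
            {
                if ((long)p * p > n) break;     // past sqrt(n): no divisor found
                if (n % p == 0) return false;   // a smaller prime divides n
            }
            return true;
        }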

Many Thanks,

Cheers!