In this post you will learn the main differences between quicksort and mergesort. Both are sorting algorithms based on the divide-and-conquer strategy.
Mergesort
The real work of mergesort is done by the merge operation. Given two sorted sub-arrays containing a total of n elements, the merge operation uses an auxiliary array of size n to combine them into a single sorted array in linear time, i.e. O(n) in Big-Oh notation.
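As a rough sketch, the merge step might look like this in Python (the function name and the list-based interface are just for illustration):

```python
def merge(left, right):
    """Merge two already-sorted lists into one sorted list in O(n) time."""
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    # One list is exhausted; the remainder of the other is already sorted.
    result.extend(left[i:])
    result.extend(right[j:])
    return result
```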
With this merge operation in hand, mergesort works as follows. Given an array A, it splits A into a left half and a right half, sorts each half using mergesort itself, and then merges the two sorted halves into one sorted array. The base case is a sub-array of size 0 or 1, which is sorted by definition.
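A minimal recursive sketch, reusing the merge function above:

```python
def mergesort(a):
    """Sort a list by recursively sorting both halves and merging them."""
    if len(a) <= 1:              # base case: 0 or 1 elements are already sorted
        return a
    mid = len(a) // 2
    left = mergesort(a[:mid])    # sort the left half
    right = mergesort(a[mid:])   # sort the right half
    return merge(left, right)    # combine the two sorted halves
```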
If we analyze the time complexity of mergesort, we will see it is O(n log n) in all cases: best, average, and worst. That is, the time taken to sort n elements grows proportionally to n log n. Mergesort also needs an extra array of size n for the merge operation, so its space complexity is O(n).
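One way to see where the n log n comes from: the running time satisfies the recurrence T(n) = 2T(n/2) + cn, i.e. two half-size sorts plus a linear-time merge. The recursion tree has about log2 n levels, and the merges on each level do roughly cn work in total, so T(n) grows proportionally to n log n.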
Quicksort
Quicksort is another divide and conquer sorting algorithm, proposed by C. A. R. Hoare. Here, the workhorse is the partition operation. Given an array of n elements, partition takes one element (known as the pivot) and places it in the correct position. That is, the array will not be sorted, but all the elements less than the pivot will be to the left of the pivot, and all the elements greater than the pivot will be to the right. The partition operation can be performed in linear time.
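Here is one possible partition sketch, using the left-most element as the pivot to match the discussion below (this is only one of several partition schemes):

```python
def partition(a, lo, hi):
    """Partition a[lo..hi] around the pivot a[lo] and return the pivot's
    final index. Afterwards, elements left of that index are < pivot and
    elements right of it are >= pivot."""
    pivot = a[lo]
    i = lo
    for j in range(lo + 1, hi + 1):
        if a[j] < pivot:
            i += 1
            a[i], a[j] = a[j], a[i]   # move the smaller element into the left region
    a[lo], a[i] = a[i], a[lo]         # place the pivot in its final position
    return i
```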
Quicksort then works as follows. First, it performs the partition operation on the input array. Assume the pivot ends up in position p. Quicksort then sorts the sub-arrays A[0 : p - 1] and A[p + 1 : n - 1] (inclusive ranges), using quicksort itself. Here again, the base case is a sub-array of size 0 or 1.
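Putting the pieces together, a sketch of the recursion, assuming the partition function above:

```python
def quicksort(a, lo=0, hi=None):
    """Sort a[lo..hi] in place by partitioning and recursing on both sides."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:                  # base case: sub-array of size 0 or 1
        return
    p = partition(a, lo, hi)      # pivot ends up at index p
    quicksort(a, lo, p - 1)       # sort elements left of the pivot
    quicksort(a, p + 1, hi)       # sort elements right of the pivot
```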
There are various ways of selecting the pivot. The simplest is to always take the left-most element as the pivot. But this has a fatal weakness: if the array is already sorted, the pivot will always end up in the left-most position, and quicksort will recurse on sub-arrays of size 0 and n - 1. This leads to a time complexity of O(n^2), which is no better than bubble sort.
This can be mitigated by choosing a random element as the pivot, or by using the median of three elements (for example the first, middle, and last). The O(n^2) worst case is still possible, but with a random pivot it becomes extremely unlikely for large inputs.
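A sketch of the random-pivot idea, assuming the partition function above (the helper name is just for illustration):

```python
import random

def randomized_partition(a, lo, hi):
    """Swap a randomly chosen element into the pivot slot, then partition."""
    r = random.randint(lo, hi)        # random index in [lo, hi]
    a[lo], a[r] = a[r], a[lo]         # make it the left-most element, i.e. the pivot
    return partition(a, lo, hi)
```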
In the best case, when the pivot always lands in the middle, the time complexity is O(n log n). In fact, even if the pivot only lands somewhere in the middle 50% of the array (between the 25th and 75th percentiles), quicksort still takes O(n log n) time, just as in the best case.
The biggest advantage of quicksort is that, even though both algorithms are typically O(n log n), the constant multiplier (hidden by the Big-Oh notation) is much smaller for quicksort, which is consequently appreciably faster than mergesort in almost all cases.
Regarding space complexity, quicksort sorts the array in place, but the recursion consumes stack space. If the recursion always descends into the smaller sub-array first and handles the larger one with a loop, the stack depth is at most O(log n), so the space complexity of quicksort is usually given as O(log n).
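A sketch of that smaller-side-first variant, again assuming the partition function above (the function name is illustrative):

```python
def quicksort_small_stack(a, lo=0, hi=None):
    """Quicksort that recurses only on the smaller side and loops on the
    larger side, keeping the recursion depth O(log n)."""
    if hi is None:
        hi = len(a) - 1
    while lo < hi:
        p = partition(a, lo, hi)
        if p - lo < hi - p:
            quicksort_small_stack(a, lo, p - 1)   # smaller left side: recurse
            lo = p + 1                            # larger right side: iterate
        else:
            quicksort_small_stack(a, p + 1, hi)   # smaller right side: recurse
            hi = p - 1                            # larger left side: iterate
```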
Also, quicksort has no natural iterative formulation: an iterative version needs an explicit stack to keep track of the sub-arrays still to be sorted. Mergesort, on the other hand, has a straightforward iterative implementation, sometimes called bottom-up mergesort, which merges runs of width 1, 2, 4, and so on.
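A sketch of bottom-up mergesort, reusing the merge function from earlier:

```python
def bottom_up_mergesort(a):
    """Iterative mergesort: merge runs of width 1, 2, 4, ... until one run remains."""
    n = len(a)
    width = 1
    while width < n:
        for lo in range(0, n, 2 * width):
            mid = min(lo + width, n)
            hi = min(lo + 2 * width, n)
            a[lo:hi] = merge(a[lo:mid], a[mid:hi])   # merge two adjacent runs
        width *= 2                                   # runs are now twice as wide
    return a
```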