Insertion Sorting Linked Lists

In yesterday's post I covered implementing selection sort for linked lists, so I figured I would cover insertion sorting linked lists for the sake of completeness. I ended the selection sort article with the assertion that, given the complexity of implementing it compared to the performance of the actual algorithm, you are much better off just implementing a merge sort for your linked list. I stand by that assertion for insertion sort as well, though insertion sort has a few things going for it that selection sort does not. Throughout this article you will find me comparing the two algorithms, so if you haven’t yet read my previous post on selection sorting linked lists, I’d suggest giving it a look.

You go high, I’ll go low

I have always thought of insertion sort and selection sort as different sides of the same coin. What I mean by this is that while their operations are fundamentally different, their processes are very similar: they both partition the list into a sorted portion and an unsorted portion, but they diverge on which part of the work they try to optimize.

Selection sort optimizes for the lowest number of overall element movements. Insertion sort optimizes for fewer overall element comparisons. Neither one does a particularly great job, but insertion sort tends to run faster than selection sort.

You can think of insertion sort, like selection sort, as consisting of two phases:

  1. Obtaining the next value to be placed in its sorted position, and
  2. Placing that element in its sorted place.

Selection sort does the bulk of its work in phase 1, and phase 2 is a simple O(1) operation. Insertion sort, on the other hand, performs phase 1 with an O(1) operation while doing the bulk of its work in phase 2.
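
To make that concrete, consider insertion sorting the list 3 → 1 → 2. We detach 3 from the unsorted list in constant time, and since the sorted list is empty it simply becomes the sorted list: 3. Next we detach 1; a single comparison tells us it belongs in front of 3, giving 1 → 3. Finally we detach 2 and walk the sorted list past 1 until we reach 3, splicing it in to get 1 → 2 → 3. Each phase-1 step was a constant-time head removal; all of the comparison work happened while finding each node’s place.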

Top Down or Bottom Up?

As tends to be the case with linked list algorithms, the recursive implementation is far simpler than a purely iterative one. The concern with using recursion on linked lists of unknown size is that we run the risk of running out of stack space, since the recursion can go as deep as the list is long. For that reason, having a non-recursive implementation that doesn’t require additional memory or extra traversals of the list is an advantage we didn’t have with selection sort.

Regardless of whether we use iteration or recursion for the insertion phase, the overall algorithm remains the same: create a new, empty list named “sorted”. While the input list still has nodes, remove the first node from its head and insert that node into its sorted position in the sorted list. When there are no nodes left on the input list, return the sorted list.

struct node {
    int info;
    node* next;
    node(int i = 0, node* n = nullptr) {
        info = i;
        next = n;
    }
};
typedef node* nodeptr;

nodeptr insertionsort(nodeptr h) {
    nodeptr sorted = nullptr;
    while (h != nullptr) {
        nodeptr toIns = h;              // detach the head of the unsorted list (phase 1, O(1))
        h = h->next;
        toIns->next = nullptr;
        sorted = insert(sorted, toIns); // place it in the sorted list (phase 2)
    }
    return sorted;
}

As I mentioned above, the recursive sorted insertion algorithm is both straightforward and elegant, and for that reason I’ve included it for the case when you know the length of the list to be sorted won’t exceed available stack space:

nodeptr insertR(nodeptr list, nodeptr toIns) {
    if (list == nullptr)
        return toIns;                              // reached the end: the node goes here
    if (toIns->info > list->info) {
        list->next = insertR(list->next, toIns);   // keep looking further down the list
    } else {
        toIns->next = list;                        // found the spot: link in front of the current node
        list = toIns;
    }
    return list;
}
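
If you are already committing to recursion for the insertion step, the outer loop can be expressed recursively as well. Below is a minimal sketch of what that might look like; insertionsortR is a name I’m using for illustration, and it assumes the node struct and insertR defined above:

nodeptr insertionsortR(nodeptr h) {
    if (h == nullptr)
        return nullptr;      // an empty list is already sorted
    nodeptr toIns = h;       // detach the head node
    h = h->next;
    toIns->next = nullptr;
    // sort the remainder of the list, then insert the detached node into it
    return insertR(insertionsortR(h), toIns);
}

Note that the recursion here can also go as deep as the list is long, so the same stack-space caveat applies.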

To perform the same operation without recursion takes a little bit more thought, and the recursive implementation is a good place to begin the general thought process. If it’s the first element to be added to the sorted list, we simply set that node to the head of the sorted list and return. Otherwise, we perform a traversal over the sorted list, checking whether we should insert the node there or not. Once we’ve found the place to insert the node, we set the appropriate pointers and return the sorted list.

nodeptr insert(nodeptr list, nodeptr toIns) {
    if (list == nullptr)
        return toIns;                        // first node simply becomes the sorted list
    nodeptr c = list, p = c;                 // current node and the node before it
    while (c && toIns->info > c->info) {     // walk forward until we find the spot
        p = c; c = c->next;
    }
    if (p == c) {                            // never advanced: insert at the front
        toIns->next = list;
        list = toIns;
    } else {                                 // splice in between p and c
        toIns->next = c;
        p->next = toIns;
    }
    return list;
}

The bookkeeping that the recursion handles for us implicitly must be made explicit in the iterative version. The recursive implementation “hides” the re-linking of the node that ends up pointing at the newly inserted node; in the iterative implementation above, a “previous” pointer (p) keeps explicit track of that node during traversal. We also need to perform the checks for null that the base case neatly handled for us in the recursive version.
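
To tie the pieces together, here is a small, hypothetical test driver, assuming the node struct, insert, and insertionsort defined above; it builds a short list, sorts it, and prints the result:

#include <iostream>

// assumes struct node, typedef nodeptr, insert(), and insertionsort() from this post

int main() {
    nodeptr head = nullptr;
    int values[] = { 4, 1, 3, 5, 2 };
    // build the list by pushing onto the front, so it reads 2 -> 5 -> 3 -> 1 -> 4
    for (int v : values)
        head = new node(v, head);
    head = insertionsort(head);
    for (nodeptr c = head; c != nullptr; c = c->next)
        std::cout << c->info << " ";   // prints: 1 2 3 4 5
    std::cout << std::endl;
    // clean up
    while (head != nullptr) {
        nodeptr next = head->next;
        delete head;
        head = next;
    }
    return 0;
}

One thing worth noting about this particular insert: because it scans the sorted list from the front and we pop nodes from the front of the input, an already-sorted input is actually the worst case for comparisons, while a reverse-sorted input needs only one comparison per node.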

As I said at the end of the post on selection sort, and at the beginning of this one: the effort required to implement either of these algorithms is every bit as much as implementing merge sort, which is an O(n log n) algorithm, while both insertion sort and selection sort are O(n^2).
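
To put some rough numbers on that: for a list of 100,000 nodes, n^2 is on the order of 10 billion elementary steps, while n log2 n is roughly 1.7 million, so the extra effort of implementing merge sort pays for itself very quickly as lists grow.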

Until next time, Happy Hacking!