Median Of Two Sorted Arrays problem

This is a tricky little leetcode problem:
https://leetcode.com/problems/median-of-two-sorted-arrays/

Here are some of my observations on a whiteboard:

There are a lot of fiddly edge cases. I think my solution could be simplified but here is the gist anyway:

https://gist.github.com/adamkorg/ffba283f298e7c2c2b748acb49ccc9f1
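
For reference, here is a minimal sketch of the simpler O(m+n) approach (not the code in the gist above, and not the O(log(m+n)) solution the problem statement really asks for): walk both arrays in merge order up to the middle and average the two middle values when the combined length is even.

#include <iostream>
#include <vector>

// Sketch of the simpler O(m+n) approach: walk both arrays in merge order
// until we reach the middle position(s), then average if the total length
// is even. Assumes the combined length is at least one, as the problem
// guarantees.
double findMedianSortedArrays(const std::vector<int>& a, const std::vector<int>& b) {
    int total = a.size() + b.size();
    int prev = 0, cur = 0;            // last two values seen in merge order
    size_t i = 0, j = 0;
    for (int k = 0; k <= total / 2; ++k) {
        prev = cur;
        if (i < a.size() && (j >= b.size() || a[i] <= b[j]))
            cur = a[i++];
        else
            cur = b[j++];
    }
    if (total % 2 == 1) return cur;
    return (prev + cur) / 2.0;
}

int main() {
    std::cout << findMedianSortedArrays({1, 3}, {2}) << "\n";      // 2
    std::cout << findMedianSortedArrays({1, 2}, {3, 4}) << "\n";   // 2.5
}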


Shortest Path – Dijkstra’s algorithm

Here are my whiteboard notes on how to implement Dijkstra's algorithm to find the shortest path in a graph:

Here is my simple c++ implementation of this:

The above is a very simple implementation of the algorithm. It is O(V²). We can optimise this to be something like O(E log V) by using a heap, map, or set data structure in the next_vertex_to_visit() function.

The above implementation is also not very object oriented. I felt that putting this functionality into classes would increase the complexity a bit, and I wanted this to be as easy to understand as possible. If I were to implement this in production code then I would have a Graph class to encapsulate the logic.
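
As a rough sketch of the O(E log V) optimisation mentioned above (not the original code, and the adjacency-list representation here is just an assumption), a std::priority_queue can stand in for the linear scan in next_vertex_to_visit():

#include <functional>
#include <iostream>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// O(E log V) variant: a min-priority queue hands us the next vertex to visit.
// adj[u] holds pairs of (neighbour, edge weight).
std::vector<int> dijkstra(const std::vector<std::vector<std::pair<int,int>>>& adj, int src) {
    const int INF = std::numeric_limits<int>::max();
    std::vector<int> dist(adj.size(), INF);
    // pairs of (distance, vertex); std::greater makes this a min-heap
    std::priority_queue<std::pair<int,int>, std::vector<std::pair<int,int>>,
                        std::greater<std::pair<int,int>>> pq;
    dist[src] = 0;
    pq.push({0, src});
    while (!pq.empty()) {
        auto [d, u] = pq.top();
        pq.pop();
        if (d > dist[u]) continue;        // stale entry, already improved
        for (auto [v, w] : adj[u]) {
            if (dist[u] + w < dist[v]) {
                dist[v] = dist[u] + w;
                pq.push({dist[v], v});
            }
        }
    }
    return dist;
}

int main() {
    // 4 vertices: 0->1 (1), 0->2 (4), 1->2 (2), 2->3 (1)
    std::vector<std::vector<std::pair<int,int>>> adj(4);
    adj[0] = {{1, 1}, {2, 4}};
    adj[1] = {{2, 2}};
    adj[2] = {{3, 1}};
    for (int d : dijkstra(adj, 0)) std::cout << d << " ";   // 0 1 3 4
    std::cout << "\n";
}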


Shortest Path – Bellman-Ford algorithm

Here are my notes on how to implement the Bellman-Ford algorithm to find the shortest path in a graph:

This is along the lines of what Rob Conery describes in The Imposter’s Handbook. And here is my c++ implementation of this:

The above is a vertex-oriented walk through the graph, where we visit each vertex and calculate the costs to every outwardly connected vertex. I have found it simpler to just walk through an array of edges, which cuts down some complexity:

To keep this simple, I left out the optimisation to break out of the outer loop when no more updates are made.
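
For illustration, here is a rough sketch of that edge-list version with the early exit included (a reconstruction, not the code in the gist; the Edge struct is just an assumed representation):

#include <iostream>
#include <limits>
#include <vector>

// Edge-list Bellman-Ford: relax every edge V-1 times, breaking early when a
// full pass makes no updates.
struct Edge { int from, to, weight; };

std::vector<long long> bellmanFord(int numVertices, const std::vector<Edge>& edges, int src) {
    const long long INF = std::numeric_limits<long long>::max();
    std::vector<long long> dist(numVertices, INF);
    dist[src] = 0;
    for (int pass = 0; pass < numVertices - 1; ++pass) {
        bool updated = false;
        for (const Edge& e : edges) {
            if (dist[e.from] != INF && dist[e.from] + e.weight < dist[e.to]) {
                dist[e.to] = dist[e.from] + e.weight;
                updated = true;
            }
        }
        if (!updated) break;   // no changes in this pass, so we are done
    }
    return dist;
}

int main() {
    // 0->1 (6), 0->2 (7), 1->3 (5), 2->3 (-3)
    std::vector<Edge> edges = {{0, 1, 6}, {0, 2, 7}, {1, 3, 5}, {2, 3, -3}};
    for (long long d : bellmanFord(4, edges, 0)) std::cout << d << " ";   // 0 6 7 4
    std::cout << "\n";
}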


Fibonacci spiral

Here are my whiteboard plans for drawing the Fibonacci spiral. The main issue was figuring out the x and y positions of each of the squares. I knew that I would need a loop that iterates around the North, West, South and East directions anti-clockwise.
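
Here is a rough sketch of just that position bookkeeping, written in C++ rather than the original JavaScript and with assumed conventions (maths-style coordinates with y growing upwards, each square recorded by its bottom-left corner): each new square is attached to the current bounding box, cycling North, West, South, East.

#include <iostream>
#include <vector>

// Compute the position and size of each square in the Fibonacci tiling.
// The first unit square sits at the origin; every later square is glued to
// one full side of the current bounding box, cycling N, W, S, E.
struct Square { long long x, y, size; };

std::vector<Square> fibonacciSquares(int count) {
    std::vector<Square> squares;
    long long prev = 1, cur = 1;                  // Fibonacci side lengths
    long long minX = 0, maxX = 1, minY = 0, maxY = 1;
    squares.push_back({0, 0, 1});                 // first unit square
    for (int i = 1; i < count; ++i) {
        long long size = cur;
        switch (i % 4) {
            case 1: squares.push_back({minX, maxY, size}); maxY += size; break;        // North
            case 2: squares.push_back({minX - size, minY, size}); minX -= size; break; // West
            case 3: squares.push_back({minX, minY - size, size}); minY -= size; break; // South
            case 0: squares.push_back({maxX, minY, size}); maxX += size; break;        // East
        }
        long long next = prev + cur;              // advance the Fibonacci sequence
        prev = cur;
        cur = next;
    }
    return squares;
}

int main() {
    for (const Square& s : fibonacciSquares(6))
        std::cout << "size " << s.size << " at (" << s.x << ", " << s.y << ")\n";
}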

Once those x and y positions were figured out, it was easy to code this up in JavaScript:

Here is the JavaScript running to draw the spiral:


Maximum Subarray Sum problem

I had to write down an example and work through it to figure this out. Here are my resulting whiteboard notes:

A few different implementations can be found here:

https://hackernoon.com/kadanes-algorithm-explained-50316f4fd8a6
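
For quick reference, here is a minimal sketch of Kadane's algorithm itself (a rough version along the same lines as the implementations in that link, not copied from it):

#include <algorithm>
#include <iostream>
#include <vector>

// Kadane's algorithm: keep a running sum of the current subarray, reset it
// to zero whenever it drops below zero, and track the best sum seen. This
// version returns 0 for an all-negative array; handling that case
// differently is a small tweak.
int maxSubarraySum(const std::vector<int>& nums) {
    int best = 0, running = 0;
    for (int x : nums) {
        running = std::max(0, running + x);
        best = std::max(best, running);
    }
    return best;
}

int main() {
    std::cout << maxSubarraySum({-2, 1, -3, 4, -1, 2, 1, -5, 4}) << "\n";   // 6
}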

I think Kadane’s algorithm can also be used to solve the problem of finding the maximum profit from buying and selling shares, given an array of daily prices:

https://www.geeksforgeeks.org/stock-buy-sell/

I think we can calculate the differences between consecutive daily prices and then treat this as a maximum subarray problem. So when the share price goes down and the running sum goes below zero, we reset the running sum to zero.
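
A rough sketch of that idea (note it only handles a single buy and a single sell, whereas the geeksforgeeks problem also allows multiple transactions):

#include <algorithm>
#include <iostream>
#include <vector>

// Treat the day-to-day price differences as a maximum subarray problem: the
// best single buy/sell profit is the maximum sum of consecutive differences,
// and we reset the running sum to zero when it goes negative.
int maxProfitSingleTrade(const std::vector<int>& prices) {
    int best = 0, running = 0;
    for (size_t i = 1; i < prices.size(); ++i) {
        int diff = prices[i] - prices[i - 1];      // daily change
        running = std::max(0, running + diff);
        best = std::max(best, running);
    }
    return best;
}

int main() {
    // Best single trade: buy at 40, sell at 695, profit 655.
    std::cout << maxProfitSingleTrade({100, 180, 260, 310, 40, 535, 695}) << "\n";   // 655
}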


Nth Fibonacci number matrix solution

It’s easy to implement an O(2^n) recursive solution and also easy to implement an iterative O(n) solution, but implementing an O(log n) solution is trickier. The following has a nice summary of various solutions:

https://www.geeksforgeeks.org/program-for-nth-fibonacci-number/

I just wanted to add some commentary to the matrix solution in Methods 4 and 5 above. Here are my whiteboard notes/calculations:

So the key thing is to realise that raising the base matrix [[1, 1], [1, 0]] to the power of n gives the nth Fibonacci number: [[1, 1], [1, 0]]^n = [[F(n+1), F(n)], [F(n), F(n-1)]].

One part of the implementation challenge is dealing with matrices and doing multiplication on them. The other part is raising the matrices to a power efficiently. A naive power function will be O(n), but we can optimise that function to O(log n). This is normally done recursively but can be simulated iteratively using bitwise operators – see section 4 of the following for an implementation:

https://www.techiedelight.com/power-function-implementation-recursive-iterative/
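
Putting the two parts together, here is a minimal sketch (not the code from either link): a 2x2 matrix multiply plus an iterative O(log n) power by repeated squaring.

#include <array>
#include <iostream>

// 2x2 matrices stored as flat arrays {a, b, c, d} meaning [[a, b], [c, d]].
using Mat = std::array<long long, 4>;

Mat multiply(const Mat& x, const Mat& y) {
    return {x[0]*y[0] + x[1]*y[2], x[0]*y[1] + x[1]*y[3],
            x[2]*y[0] + x[3]*y[2], x[2]*y[1] + x[3]*y[3]};
}

// Raise a matrix to the power n in O(log n) multiplications by squaring,
// done iteratively with bit operations.
Mat matrixPower(Mat base, unsigned n) {
    Mat result = {1, 0, 0, 1};                  // identity matrix
    while (n > 0) {
        if (n & 1) result = multiply(result, base);
        base = multiply(base, base);
        n >>= 1;
    }
    return result;
}

// [[1,1],[1,0]]^n = [[F(n+1), F(n)], [F(n), F(n-1)]], so F(n) is the top-right entry.
long long fib(unsigned n) {
    if (n == 0) return 0;
    Mat m = matrixPower({1, 1, 1, 0}, n);
    return m[1];
}

int main() {
    for (unsigned n = 0; n <= 10; ++n) std::cout << fib(n) << " ";   // 0 1 1 2 3 5 8 13 21 34 55
    std::cout << "\n";
}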


Sum of two problem

I have signed up to leetcode. The first problem I did was called “Two Sum”, where you need to find two numbers in an array that when added together match a target number. I thought I’d write some notes here to remind myself about the details in the future.

Brute force solution

So, the most obvious solution is comparing all combinations of two numbers with two nested for loops. This is simple to code but doesn’t scale well, as it is O(n²).
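
A minimal sketch of that (leetcode wants the pair of indices back):

#include <iostream>
#include <vector>

// Brute force: try every pair with two nested loops. O(n^2) time, O(1) space.
std::vector<int> twoSumBruteForce(const std::vector<int>& nums, int target) {
    for (size_t i = 0; i < nums.size(); ++i)
        for (size_t j = i + 1; j < nums.size(); ++j)
            if (nums[i] + nums[j] == target)
                return {static_cast<int>(i), static_cast<int>(j)};
    return {};   // no pair found
}

int main() {
    auto result = twoSumBruteForce({2, 7, 11, 15}, 9);
    std::cout << result[0] << ", " << result[1] << "\n";   // 0, 1
}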

Hash map solution

The next easiest solution for me was to store the numbers in a hash map (in c++, std::unordered_map<>) and use that to find the required complement of each number. This is the most scalable solution as it has O(n) time complexity, but it also has O(n) space complexity.
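
A minimal sketch of that approach (not the submitted solution):

#include <iostream>
#include <unordered_map>
#include <vector>

// Hash map: for each number, look up its complement (target - number) among
// the numbers seen so far. O(n) time, O(n) space.
std::vector<int> twoSumHashMap(const std::vector<int>& nums, int target) {
    std::unordered_map<int, int> seen;   // value -> index
    for (int i = 0; i < static_cast<int>(nums.size()); ++i) {
        auto it = seen.find(target - nums[i]);
        if (it != seen.end()) return {it->second, i};
        seen[nums[i]] = i;
    }
    return {};
}

int main() {
    auto result = twoSumHashMap({3, 2, 4}, 6);
    std::cout << result[0] << ", " << result[1] << "\n";   // 1, 2
}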

Ordered list iterating solution

The solution that didn’t come as easily to me as it probably should have was an algorithm to pass through the numbers in a linear way if they are already sorted. I intuitively knew this was possible but it didn’t come to me until I drew out an example of the numbers on a whiteboard:

Once written down like this, it becomes clear that we can iterate from the start and end of the array, heading towards the centre while comparing values. If the sum of the values is bigger than the target then we decrement j. If the sum is smaller than the target then we increment i. If i and j cross then we don’t have a match.
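
A sketch of that two-pointer pass (it assumes the array is already sorted and returns positions in the sorted array; if you sort an unsorted input first you also need to track the original indices):

#include <iostream>
#include <vector>

// Two pointers on an already sorted array: move i and j inwards until the
// values sum to the target. Returns positions in the sorted array; mapping
// back to original indices (e.g. by sorting value/index pairs) is left out.
std::vector<int> twoSumSorted(const std::vector<int>& sorted, int target) {
    int i = 0, j = static_cast<int>(sorted.size()) - 1;
    while (i < j) {
        int sum = sorted[i] + sorted[j];
        if (sum == target) return {i, j};
        if (sum > target) --j;   // too big: bring the right pointer in
        else ++i;                // too small: bring the left pointer in
    }
    return {};                   // pointers crossed: no match
}

int main() {
    auto result = twoSumSorted({1, 3, 4, 6, 8, 11}, 10);
    std::cout << result[0] << ", " << result[1] << "\n";   // 2, 3 (values 4 and 6)
}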

By the way, I think some of the leetcode test case arrays are not sorted. If we sort the array first then we have an O(n log n) solution, which in theory should not be as fast as the hash map solution. However, I found that it was in fact quicker than the hash map solution. I guess the leetcode test cases must be small and possibly mostly sorted. I don’t think there is a UI in leetcode to check the submission test cases, so I don’t know for sure.

So the moral of the story is to write out an example even if it is simple.

I have since found this google interview example video that is a nice walkthrough of this problem:


Heap Sort

Time complexity: O(n log n)

Heapsort is O(n log n) even in the worst case. It uses a heap data structure (a tree). You build the tree by adding each new value at the end and then bubbling that value up if it is smaller than its parent (in the case of a min heap). You then repeatedly pop the root of the tree off to get the values in sorted order. When you pop the root, you replace it with the last value in the tree and then repeatedly swap that value with its smallest child until both children are bigger than it. Below are my whiteboard notes:

Due to the nature of a binary tree, we can use an array data structure to implement the heap. This works out easier than implementing a tree with node objects. We can use the following to calculate the array indexes:
left child index = i*2+1
right child index = i*2+2
parent index = (i-1)/2
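
As a compact illustration of those index formulas, here is a rough in-place heapsort sketch (this is not the Heap class from the gist below; it uses a max heap and sift-down so the array ends up sorted ascending in place, rather than popping from a min heap):

#include <iostream>
#include <utility>
#include <vector>

// Sift the value at index i down through the heap of the given size,
// swapping it with its larger child until both children are smaller.
void siftDown(std::vector<int>& a, int i, int size) {
    while (true) {
        int largest = i;
        int left = i * 2 + 1, right = i * 2 + 2;   // child index formulas above
        if (left < size && a[left] > a[largest]) largest = left;
        if (right < size && a[right] > a[largest]) largest = right;
        if (largest == i) return;
        std::swap(a[i], a[largest]);
        i = largest;
    }
}

void heapSort(std::vector<int>& a) {
    int n = static_cast<int>(a.size());
    // Build the max heap: start from the last parent, (n-1-1)/2, and sift down.
    for (int i = n / 2 - 1; i >= 0; --i) siftDown(a, i, n);
    // Repeatedly move the root (largest value) to the end and shrink the heap.
    for (int end = n - 1; end > 0; --end) {
        std::swap(a[0], a[end]);
        siftDown(a, 0, end);
    }
}

int main() {
    std::vector<int> v = {5, 1, 9, 3, 7, 2};
    heapSort(v);
    for (int x : v) std::cout << x << " ";   // 1 2 3 5 7 9
    std::cout << "\n";
}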

Here is my c++ implementation of a Heap class and using it to do a heap sort:

https://gist.github.com/adamkorg/eb9324203e78617349db0f7eea04fbd2


Quick Sort

Time complexity average case: O(n log n)
Time complexity worst case: O(n²)

Quicksort is O(n log n) most of the time and usually 2 or 3 times quicker than merge sort, but it can be O(n²) in some cases, e.g. when the numbers are already sorted in my implementation below. It is a divide and conquer algorithm. It uses a pivot to partition the numbers, so that values smaller and bigger than the pivot end up on either side of it. There are various ways to choose a pivot, including the median item and the last item. There are also a huge number of different ways in which you can implement the rest of the quick sort logic. In the gist below I have implemented a quick sort algorithm similar to Robert Sedgewick’s in his algorithms book.

https://gist.github.com/adamkorg/c180fc4f07331da04953348369b9c28a

In the above I do a pass with two pointers iterating from the left end and the right end, swapping items bigger than the pivot on the left with items smaller than the pivot on the right. Once the two pointers cross, I swap the value at the pointer that started on the left with the pivot (the rightmost element). Then I recursively do the same on the partitions to the left and right of the pivot. Below are my Quicksort whiteboard notes showing the steps and passes:
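
Here is a rough reconstruction of that partition scheme from the description (not the code in the gist):

#include <iostream>
#include <utility>
#include <vector>

// Partition with the rightmost element as pivot: two pointers scan inwards,
// swapping misplaced items, and when they cross the pivot is swapped into
// its final position.
int partition(std::vector<int>& a, int lo, int hi) {
    int pivot = a[hi];
    int i = lo - 1, j = hi;
    while (true) {
        while (a[++i] < pivot) {}                      // item on the left >= pivot
        while (pivot < a[--j]) { if (j == lo) break; } // item on the right <= pivot
        if (i >= j) break;                             // pointers have crossed
        std::swap(a[i], a[j]);
    }
    std::swap(a[i], a[hi]);                            // move the pivot into place
    return i;
}

void quickSort(std::vector<int>& a, int lo, int hi) {
    if (lo >= hi) return;
    int p = partition(a, lo, hi);
    quickSort(a, lo, p - 1);
    quickSort(a, p + 1, hi);
}

int main() {
    std::vector<int> v = {5, 1, 9, 3, 7, 2};
    quickSort(v, 0, static_cast<int>(v.size()) - 1);
    for (int x : v) std::cout << x << " ";   // 1 2 3 5 7 9
    std::cout << "\n";
}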

It is quite easy to make an iterative version using a stack data structure, which simulates the recursive call stack:

https://gist.github.com/adamkorg/6bd95d1ed5ceab468b6d525909d3c78b


Merge Sort

Time complexity: O(n log n)

The most common way to implement Merge Sort is with recursion:
https://gist.github.com/adamkorg/99cae1fa4da0bc8431d040c1e8f516fd

This uses divide and conquer. Divide in half repeatedly until we have lists of single elements. Then we merge those single elements into sorted pairs, merge the pairs into sorted lists of four elements, and keep comparing/combining lists until we have a single sorted list.
In my code example I have used a mid point m that is the first element of the right-hand partition. Most implementations use a mid point m that is the last element of the left-hand partition, but it doesn’t make a huge difference.
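
Here is a minimal sketch along those lines (not the gist itself, with m as the first element of the right-hand half):

#include <iostream>
#include <vector>

// Merge the two sorted halves [lo, m) and [m, hi) back into a[lo, hi).
void merge(std::vector<int>& a, int lo, int m, int hi) {
    std::vector<int> merged;
    merged.reserve(hi - lo);
    int i = lo, j = m;
    while (i < m && j < hi)
        merged.push_back(a[i] <= a[j] ? a[i++] : a[j++]);
    while (i < m) merged.push_back(a[i++]);
    while (j < hi) merged.push_back(a[j++]);
    for (int k = 0; k < static_cast<int>(merged.size()); ++k) a[lo + k] = merged[k];
}

// Recursive merge sort on the half-open range [lo, hi); m is the first
// element of the right-hand partition.
void mergeSort(std::vector<int>& a, int lo, int hi) {
    if (hi - lo < 2) return;         // zero or one element is already sorted
    int m = lo + (hi - lo) / 2;      // first element of the right-hand half
    mergeSort(a, lo, m);
    mergeSort(a, m, hi);
    merge(a, lo, m, hi);
}

int main() {
    std::vector<int> v = {5, 1, 9, 3, 7, 2};
    mergeSort(v, 0, static_cast<int>(v.size()));
    for (int x : v) std::cout << x << " ";   // 1 2 3 5 7 9
    std::cout << "\n";
}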

See the following for the sequence of recursive executions:

From: https://www.geeksforgeeks.org/merge-sort/

An alternative is to use an iterative algorithm. It uses similar merging logic but has a different way of arriving at which two subsets to merge. The iterative solution starts by comparing individual elements, which I call chunk size 1, and then doubles the chunk size with each merge pass. Something to note is that the remainder needs to be adjusted if it is less than the chunk size. Here’s my version:

https://gist.github.com/adamkorg/5a18047fb9ac6e0ddd16c8776e43a195
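
And here is a rough sketch of that bottom-up pattern (a rewrite for illustration, not the code in the gist above):

#include <algorithm>
#include <iostream>
#include <vector>

// Bottom-up (iterative) merge sort: merge chunks of size 1, then 2, then 4,
// and so on, clamping the chunk boundaries at the end of the array so the
// final partial chunk is handled correctly.
void mergeSortIterative(std::vector<int>& a) {
    int n = static_cast<int>(a.size());
    std::vector<int> buffer(n);
    for (int chunk = 1; chunk < n; chunk *= 2) {
        for (int lo = 0; lo < n - chunk; lo += 2 * chunk) {
            int m = lo + chunk;                       // start of the right chunk
            int hi = std::min(lo + 2 * chunk, n);     // clamp the remainder
            int i = lo, j = m, k = lo;
            while (i < m && j < hi) buffer[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
            while (i < m) buffer[k++] = a[i++];
            while (j < hi) buffer[k++] = a[j++];
            std::copy(buffer.begin() + lo, buffer.begin() + hi, a.begin() + lo);
        }
    }
}

int main() {
    std::vector<int> v = {5, 1, 9, 3, 7, 2, 8};
    mergeSortIterative(v);
    for (int x : v) std::cout << x << " ";   // 1 2 3 5 7 8 9
    std::cout << "\n";
}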
