Peakfinding with Matrix Multiplication

October 29, 2018

A coworker of mine got really into learning algorithms, and shared with me how the peak-finding algorithm is O(n log n).

The problem is stated as follows:

Given a list of numbers, for example [ 1, 2, 5, 4, 6, 7, 8, 9, 10, 3, 5, 4], find the peaks. A peak is a value that is greater than both the value that comes before it and the value that comes after it.

In this example, the first 5, 10, and the second 5 are peaks.

You can do a simple linear search, or even do a binary search.

A linear search takes O(n) time, since you have to traverse the whole array.

If you’re looking for one peak, then you can use binary search for O(log n). But if you want more than one peak, you have to run binary search over and over.
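
For the single-peak case, the standard divide-and-conquer trick is to look at the middle element and walk toward whichever neighbor is larger, since a larger neighbor guarantees a peak on that side. Here's a minimal sketch of that idea (treating positions outside the array as negative infinity, a common convention for this problem; the function name is mine):

    def find_one_peak(array):
        # Returns the index of some peak in O(log n).
        lo, hi = 0, len(array) - 1
        while lo < hi:
            mid = (lo + hi) // 2
            if array[mid] < array[mid + 1]:
                lo = mid + 1   # a peak must exist to the right of mid
            else:
                hi = mid       # a peak exists at mid or to its left
        return lo

It only finds one peak, though; on the example list above it happens to land on the 10.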

I think this problem becomes kind of challenging if you try to solve it with binary search, given that you need to keep track of the peaks you’ve already found.

You have to assume a lot of things about the data. For example, one way is to overwrite already-found peak indexes with a dummy marker such as -1, which only works if all the values are positive. But this could lead to a drastic error.

One strategy is to record the left index, the right index, and the value of each peak you find.

Another strategy is to keep a list of all the indices that are peaks so far, and pass that list down on each recursive call.

I mean, you have to know whether or not you are going to require multiple peaks. If you are sure that there’s going to be more than one peak, then you should go with a linear peak finder. However, if the first peak that you are looking for is at the end of the array, you’re kind of screwed. On the other hand, if you just use binary search repeatedly, then you’re increasing the complexity, since it’s O(n log n).

These are all extraneous details! What I REALLY wanted to find out is whether or not you can do this in matrix algebra.

And it turns out that you can! If you take this problem, you can turn it into a linear algebra problem.

Suppose you have an original array: [ 1, 2, 5, 4, 6, 7, 8, 9, 10, 3, 5, 4]

If you shift every value to the left, and append a -1, you get:

[ 2, 5, 4, 6, 7, 8, 9, 10, 3, 5, 4, -1]

Shift everything to the right, and you get:

[ -1, 1, 2, 5, 4, 6, 7, 8, 9, 10, 3, 5]

Stack all three of these arrays together, and you get a 3 x n matrix:

[ 2, 5, 4, 6, 7, 8, 9, 10, 3, 5, 4, -1]

[ 1, 2, 5, 4, 6, 7, 8, 9, 10, 3, 5, 4]

[ -1, 1, 2, 5, 4, 6, 7, 8, 9, 10, 3, 5]

Now, what you need to do is transpose this matrix and multiply it by a [-1, 1, -1] kernel.
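
For concreteness, here's a minimal sketch of that construction (variable names are mine). One caveat: a single [-1, 1, -1] kernel sums the two neighbor differences, so a genuine peak with one much larger neighbor can still come out negative, which is presumably why the implementation further down applies two separate kernels and combines the sign tests instead.

    import numpy as np

    a = [1, 2, 5, 4, 6, 7, 8, 9, 10, 3, 5, 4]
    left = a[1:] + [-1]     # shifted left: left[i] is a[i + 1]
    right = [-1] + a[:-1]   # shifted right: right[i] is a[i - 1]

    stacked = np.array([left, a, right])   # the 3 x n matrix
    kernel = np.array([-1, 1, -1])
    combined = stacked.T @ kernel          # a[i] - a[i + 1] - a[i - 1] at each position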

My burning question is: Will this be faster than direct left and right comparisons? My reasoning is that modern processors have SSE instruction sets that are heavily optimized for this kind of arithmetic. I wanted to use numpy, which I believe takes advantage of SSE and optimized linear algebra routines.

I used a [-1, 1] kernel on the top and middle rows of data, and a [1, -1] kernel on the middle and bottom rows. Applying those kernels gave me two columns of differences: each value minus its right neighbor, and each value minus its left neighbor. I then checked where each column was positive and multiplied the two results element-wise; wherever the product was positive, there was a peak.

    import copy
    import random
    import time

    import numpy as np

    def peak_finder_binary(ignore_idx, array, beg, end):
        # Binary search over array[beg:end] for a peak whose index is not
        # already in ignore_idx. Returns the peak index, or -1 if none found.
        if end - beg <= 1:
            return -1
        mid = (beg + end) // 2
        mid_val = array[mid]
        if mid not in ignore_idx:
            # Boundary positions only have one neighbor to beat.
            if mid - 1 < 0 and array[mid + 1] < mid_val:
                return mid
            if mid + 1 >= len(array) and array[mid - 1] < mid_val:
                return mid
            if mid - 1 < 0 or mid + 1 >= len(array):
                print("herro")
                return -1
            # Interior position: both neighbors must be smaller.
            if array[mid - 1] < mid_val and array[mid + 1] < mid_val:
                return mid
        # No peak at mid, so search both halves.
        left_peak = peak_finder_binary(ignore_idx, array, beg, mid)
        if left_peak >= 0:
            return left_peak
        right_peak = peak_finder_binary(ignore_idx, array, mid, end)
        if right_peak >= 0:
            return right_peak
        return -1

    test_1 = [10, 20, 15, 2, 23, 90, 67]
    ignore_idx = []
    peak_idx = peak_finder_binary(ignore_idx, test_1, 0, len(test_1))
    while peak_idx != -1:
        print(test_1[peak_idx])
        ignore_idx.append(peak_idx)
        peak_idx = peak_finder_binary(ignore_idx, test_1, 0, len(test_1))
       
    def peak_finder_linear(array):
        # Single O(n) pass: an index is a peak if it beats both neighbors.
        idxes = []
        start = time.time()
        for i in range(1, len(array) - 1):
            if array[i - 1] < array[i] and array[i + 1] < array[i]:
                idxes.append(i)
        end = time.time()
        print(end - start)
        return idxes
    def peak_finder_matrix_algebra(array):
        # left[i] ends up holding the neighbor to the right of position i,
        # right[i] the neighbor to the left, and mid[i] the value itself.
        left = copy.copy(array)
        right = copy.copy(array)
        mid = copy.copy(array)
        left.pop(0)
        left.append(-1)
        right.pop()
        right.insert(0, -1)

        # Pair each value with one neighbor and apply a difference kernel.
        mtx_left = np.array([left, mid], dtype=np.int32).transpose()
        kernel_left = np.array([[-1], [1]])    # value minus its right neighbor
        mtx_right = np.array([mid, right], dtype=np.int32).transpose()
        kernel_right = np.array([[1], [-1]])   # value minus its left neighbor
        start = time.time()
        rst_left = np.matmul(mtx_left, kernel_left)
        rst_right = np.matmul(mtx_right, kernel_right)
        rst_left = rst_left > 0
        rst_right = rst_right > 0
        # A position is a peak only if it beats both neighbors.
        rst = np.multiply(rst_left, rst_right)
        end = time.time()
        print(end - start)
        return rst
    # Questions: Will the values be unique? Will the values be negative? Will there be repeats?
    # Will there be memory constraints?
    # When solving problems, the assumptions must be crystal clear.
    # Will we have negative numbers?
    # Array is a terrible choice for a parameter name...
    my_randoms = [random.randint(0, 100000000) for r in range(1000000)]
    #print(my_randoms)
    peak_finder_matrix_algebra(my_randoms)
    peak_finder_linear(my_randoms)
    # herro
    # 0.0135238170624
    # 0.0769588947296

Unless the range of numbers is huge, you won’t get a performance boost; it’s better to just loop through once to find the peaks. But if there’s variability in the range of numbers, or maybe if the numbers are floating point, I think this is worth a shot. I didn’t time the data copying, because theoretically you should be able to point the first element of the top and bottom rows at the original middle array, shifted by a certain offset.
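
As a rough sketch of that last idea, NumPy slices are views into the same buffer rather than copies, so the left- and right-shifted rows never need to be materialized at all (the function name is mine, and this version only reports interior peaks):

    import numpy as np

    def peak_finder_views(values):
        a = np.asarray(values)
        # a[:-2], a[1:-1], and a[2:] are views of the same memory, offset by
        # one element each, so nothing is copied or shifted explicitly.
        is_peak = (a[1:-1] > a[:-2]) & (a[1:-1] > a[2:])
        return np.flatnonzero(is_peak) + 1  # +1 maps back to indices in `values`

On the example list this returns indices 2, 8, and 10, i.e. the two 5s and the 10.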

All of this tells us that there’s so much going on under the HOOD!!!