The problem consists of finding the longest contiguous subarray whose sum is at most a given integer.
The input data are:

- `K` – the maximum sum
- `array` – an array of integers
The algorithm I developed:
- Start with `head = tail = 0`.
- While the tail is less than the length of the array, increment the tail and update the sum.
- If the sum goes over `K`, increment the head until the sum is at most `K`.
- While incrementing the tail, check whether we have a new `max_length`.
The Python code:
```python
def length_max_subarray(array, K):
    head, tail = 0, 0
    length = 0
    current_sum = 0
    while tail < len(array):
        if current_sum + array[tail] <= K:
            current_sum += array[tail]
            tail += 1
            if tail - head > length:
                length = tail - head
        else:
            current_sum -= array[head]
            head += 1
    return length
```
My assumption is that this code is O(N).
The worst case is when all elements are larger than K. In that case the while loop increments the head and the tail alternately, so the number of iterations is 2N.
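To make the 2N claim concrete, here is an instrumented copy of the loop (the iteration counter is an addition for illustration, not part of the original code) run on an input where every element exceeds `K`:

```python
def length_max_subarray_counted(array, K):
    # Same sliding-window loop as above, with an iteration counter
    # added to verify the worst-case bound.
    head, tail = 0, 0
    length = 0
    current_sum = 0
    iterations = 0
    while tail < len(array):
        iterations += 1
        if current_sum + array[tail] <= K:
            current_sum += array[tail]
            tail += 1
            if tail - head > length:
                length = tail - head
        else:
            current_sum -= array[head]
            head += 1
    return length, iterations

# Worst case: every element is larger than K, so head and tail
# advance alternately and the loop body runs exactly 2 * N times.
N = 1000
length, iterations = length_max_subarray_counted([5] * N, 3)
print(length, iterations)  # prints 0 2000
```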
Is it true? If not, what’s the complexity of this code?
Time complexity check
The best way to see whether a function indeed runs in linear time on average is to actually run it and look at a graph of input size (N) versus the time needed.
(Ask me if you are interested in the code used to generate this graph.)
There are a few outliers where the algorithm has to iterate more rather than less, but in general the trend is clearly linear.
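A minimal sketch of such a timing harness (my reconstruction, using `timeit` and random inputs; not necessarily the exact code behind the graph, and the sizes and `K` are arbitrary):

```python
import random
import timeit

def length_max_subarray(array, K):
    head, tail = 0, 0
    length = 0
    current_sum = 0
    while tail < len(array):
        if current_sum + array[tail] <= K:
            current_sum += array[tail]
            tail += 1
            if tail - head > length:
                length = tail - head
        else:
            current_sum -= array[head]
            head += 1
    return length

# Time the function for increasing N; a roughly linear trend
# in (N, t) supports the O(N) hypothesis.
timings = []
for N in range(1000, 5001, 1000):
    array = [random.randint(1, 100) for _ in range(N)]
    t = timeit.timeit(lambda: length_max_subarray(array, 50), number=5)
    timings.append((N, t))
```

Plotting `timings` (e.g. with matplotlib) gives the size-vs-time graph described above.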
Actual code review
As for the code itself, you could `yield` the lengths and call `max` afterwards, separating the code into finding all subarray lengths and picking the maximum one (this way, finding all subarrays is just a generator):
```python
def length_subarrays_more_than_k(array, K):
    head, tail = 0, 0
    current_sum = 0
    while tail < len(array):
        if current_sum + array[tail] <= K:
            current_sum += array[tail]
            tail += 1
            yield tail - head
        else:
            current_sum -= array[head]
            head += 1

def max_subarray_more_than_k(array, K):
    return max(length_subarrays_more_than_k(array, K))
```
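As a quick sanity check of the refactored version (my own example input; the generator is repeated here so the snippet runs on its own):

```python
def length_subarrays_more_than_k(array, K):
    # Same generator as above, repeated so this snippet is standalone.
    head, tail = 0, 0
    current_sum = 0
    while tail < len(array):
        if current_sum + array[tail] <= K:
            current_sum += array[tail]
            tail += 1
            yield tail - head
        else:
            current_sum -= array[head]
            head += 1

# Window lengths as the tail advances over [1, 2, 3, 4] with K = 5.
lengths = list(length_subarrays_more_than_k([1, 2, 3, 4], 5))
print(lengths)       # prints [1, 2, 2, 1]
print(max(lengths))  # prints 2 -- e.g. the window [2, 3]
```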