
## Problem statement

This problem is similar to LeetCode 18 (4Sum), except that here the task is to find just one unique quadruplet, whereas LeetCode 18 asks for all unique quadruplets.

Given an array S of n integers, are there elements a, b, c and d in S such that a + b + c + d = target? Find one quadruplet in the array that gives the sum target, and return the quadruplet in ascending order.

For example, given the input array [1, 6, 3, 8, 4, 0, 2] and the 4-sum target 14, one quadruplet in ascending order is [0, 2, 4, 8].

There are two other quadruplets with sum 14 as well: [1, 2, 3, 8] and [1, 3, 4, 6]. The algorithm is only required to return one quadruplet, in ascending order.
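Before any hashmap optimization, the problem statement above can be covered by a short sort-plus-two-pointer sketch: fix the first two numbers, then scan the sorted tail from both ends. This is a minimal O(n^3) sketch, not the reviewed code below, and the helper name `findArrayQuadruplet` is just illustrative; sorting first means the returned quadruplet is already in ascending order.

```cpp
#include <algorithm>
#include <vector>
using namespace std;

// Returns one quadruplet in ascending order summing to target, or {} if none.
vector<int> findArrayQuadruplet(vector<int> arr, int target) {
    if (arr.size() < 4) return {};
    sort(arr.begin(), arr.end());
    int n = static_cast<int>(arr.size());
    for (int i = 0; i < n - 3; i++) {
        for (int j = i + 1; j < n - 2; j++) {
            // Two-pointer scan of the sorted tail for the remaining two numbers.
            int need = target - arr[i] - arr[j];
            int lo = j + 1, hi = n - 1;
            while (lo < hi) {
                int sum = arr[lo] + arr[hi];
                if (sum == need) return {arr[i], arr[j], arr[lo], arr[hi]};
                if (sum < need) lo++; else hi--;
            }
        }
    }
    return {};
}
```

On the example above, `findArrayQuadruplet({1, 6, 3, 8, 4, 0, 2}, 14)` finds [0, 2, 4, 8].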

## My mock interview practice

I practiced this algorithm more than four times, and every time I faced a different challenge from the peer. The first time, I was told that I should write an optimized algorithm, with time complexity below O(n^{3}); it can be implemented by first storing all possible two-sums in a hashmap. The third time, I could not complete the algorithm in 30 minutes using the idea of preprocessing all two-sums first, and the peer gave the review "**The code didn't run through, so maybe you should work on a working solution first**". So after the third mock interview, I decided to find only one of the quadruplets, in strictly ascending order.

## Highlights of 4th mock practice

I did manage to write the algorithm, used whiteboard testing to walk through a simple test case, and passed all test cases in 30 minutes. To structure the interview, I now start whiteboard testing right after I finish writing: I put the numbers associated with the simplest test case next to almost every line of code. During the whiteboard testing in the mock interview I found a bug, and quickly appended two extra elements to each stored pair so I could compare the quadruplet's indexes. The success of the 4th mock interview depended on the whiteboard testing; the code passed all test cases.

## The art to choose a test case

I chose the simple test case `[3, 2, 1, 4, 5]` with the given 4-sum `12 = 1 + 2 + 4 + 5`. Compared to the test case given in the problem statement, this one is easy to follow.

Through whiteboard testing, I found a bug in my design: the array may contain duplicate numbers. I then spent an extra 5 minutes changing the design, adding two extra elements to each stored pair to save the index numbers in the array. For illustration, here is the statement in the code: `newList.Add(new int[]{no1, no2, i, j})`.

To avoid complicated logic checks, I enforced an extra rule on the quadruplet. For ease of writing, the four variables are named `no1`, `no2`, `no3`, `no4`; they hold the values of the four elements in the ascending sorted array, so `no1` <= `no2` <= `no3` <= `no4`.

## Whiteboard testing is savior

Whiteboard testing is the tool in a mock interview that shows I take responsibility for my own code and think critically. Being responsible, making a plan, and following through on the plan is what makes a person successful. The same holds for algorithm and data structure mock interview practice: follow through with whiteboard testing to review the code and think it over one more time.

Recently, a peer asked me not to do whiteboard testing during the mock interview, because the peer was happy with my coding. After the mock interview I spent over an hour in Visual Studio tracking down one simple mistake, two mixed-up variable names, a bug that could easily have been found in less than five minutes through whiteboard testing. From this instance I learned that whiteboard testing is a simple habit to build and a savior for a mediocre programmer like me. It would have saved me hundreds of hours over my career if I had built the habit on my first day as a programmer.

## k-SUM algorithm optimal solution study

It takes time to learn the k-SUM algorithm. One thing I like to do is understand the generalized k-SUM algorithm, and I wish I understood the mathematical analysis of its time complexity. k-SUM can be handled differently for even k and for odd k. For even `k`: compute a sorted list `S` of all sums of `k/2` input elements, then check whether `S` contains both some number `x` and its negation `-x`. The algorithm runs in O(n^{k/2}log(n)) time. The link to the answer for the generalized k-SUM algorithm is here.

Based on the above analysis, to optimize the time complexity it is better to use a C# SortedSet, rather than an `IList`, to store the pairs sharing the same two-sum value; the search can then be lowered from O(n) to O(log n) time, where n is the number of pairs with the given two-sum value.

The following is my C# practice code, with comments showing the whiteboard testing on the simple test case `[3, 2, 1, 4, 5]` with given sum `12`. Please help me review my code.

```
using System;
using System.Collections.Generic;

class Solution
{
    public static int[] FindArrayQuadruplet(int[] arr, int s)
    {
        if (arr == null || arr.Length < 4)
        {
            return new int[0];
        }
        Array.Sort(arr);
        var dictionary = saveTwoSumToDictionary(arr);
        int length = arr.Length;
        for (int first = 0; first < length - 3; first++)
        {
            for (int second = first + 1; second < length - 2; second++)
            {
                var firstTwoSum = arr[first] + arr[second];
                var no1 = arr[first];
                var no2 = arr[second];
                var search = s - firstTwoSum;
                if (!dictionary.ContainsKey(search))
                {
                    continue;
                }
                var options = dictionary[search];
                foreach (int[] pair in options)
                {
                    var no3 = arr[pair[0]];
                    var no4 = arr[pair[1]];
                    var index3 = pair[0];
                    // Valid only if the third index comes strictly after second.
                    var unique = second < index3;
                    if (unique)
                    {
                        return new int[] { no1, no2, no3, no4 };
                    }
                }
            }
        }
        return new int[0];
    }

    private static IDictionary<int, IList<int[]>> saveTwoSumToDictionary(int[] arr)
    {
        var twoSum = new Dictionary<int, IList<int[]>>();
        int length = arr.Length;
        for (int i = 0; i < length - 1; i++)
        {
            for (int j = i + 1; j < length; j++)
            {
                var sum = arr[i] + arr[j];
                if (!twoSum.ContainsKey(sum))
                {
                    twoSum.Add(sum, new List<int[]>());
                }
                twoSum[sum].Add(new int[] { i, j });
            }
        }
        return twoSum;
    }
}
```

## Solution

I came across a better idea to solve the array quadruplet algorithm through an April 28, 10:00 PM mock interview. I would like to share this better idea, which produces more elegant code, and also address the issues in the implementation above.

I continued to learn the array quadruplet algorithm through mock interview practice. I have practiced over 10 rounds of mock interviews in the last 12 months, and in every round I had to work on the algorithm at least two times.

I have worked on the algorithm over 20 times, but for the last 5 months I could not come up with a better answer to my own question here. Then, on April 28, 2018, I had five mock interviews in one day. At 12:00 AM, I had the chance to interview a peer using array quadruplet again, and I found that the peer had a hard time writing a brute-force solution. Later in the day, in my fifth interview starting at 10:00 PM, the peer finished his algorithm in less than 9 minutes, while I finished mine in less than 20 minutes. So I asked the peer a few questions to test his strength in algorithms.

The peer had ICPC contest experience with medals in high school and was a senior in university. Since I had tutored a peer on a brute-force solution to array quadruplet earlier that morning, I asked about his solution; I like to learn from others on this algorithm as well. He wrote the pseudocode first, and I did not understand it, so I asked him to explain the algorithm with an example, which he did.

Here is the peer's analysis using C++:

**unordered_map<int, pair<int, int>> allSums;**

**Invariant at the start of iteration i:**

**allSums contains arr[j] + arr[k] for all j < k < i**

```
// HashMap.count(x) -> 0 if x is absent, non-zero otherwise
for (int thirdId = 0; thirdId < n; thirdId++) {
    for (int fourthId = thirdId + 1; fourthId < n; fourthId++) {
        int currSum = arr[thirdId] + arr[fourthId];
        if (allSums.count(sum - currSum)) {
            vector<int> ansIds(4);
            ansIds[0] = allSums[sum - currSum].first;  // index of the first number
            ansIds[1] = allSums[sum - currSum].second; // index of the second number
            ansIds[2] = thirdId;
            ansIds[3] = fourthId;
            return ansIds;
        }
    }
    // Add the pairs ending at thirdId, so the invariant holds for the next iteration.
    for (int firstId = 0; firstId < thirdId; firstId++) {
        allSums[arr[firstId] + arr[thirdId]] = make_pair(firstId, thirdId);
    }
}
```

I asked the peer how allSums is defined, and why it is defined that way.

Here is his analysis:

- thirdId = 0: allSums is empty
- thirdId = 1: allSums is empty
- thirdId = 2: allSums contains arr[0] + arr[1]
- thirdId = 3: allSums contains arr[0] + arr[1], arr[0] + arr[2], arr[1] + arr[2]
- thirdId = 4: allSums contains every sum arr[j] + arr[k] with j < k < 4

I asked if the peer had come across this problem before. He said that he had.

I think it is a common pattern, while iterating an array, to build a hashmap from the elements already visited. Processing all pairs of the array into a two-sum hashmap up front is not smart, causing extra work to check for **duplicates**, etc. To reduce the time complexity, **only one pair of indexes** needs to be saved for each sum. In other words, there is no need to save all pairs with the same two-sum value.

Here is the C# code I wrote based on the discussion in the mock interview.

```
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

class Solution
{
    public static void Main(string[] args)
    {
        var arrayQuadruplet = FindArrayQuadruplet(new int[] { 3, 2, 1, 4, 6 }, 12);
        Debug.Assert(arrayQuadruplet.Sum() == 12);
    }

    public static int[] FindArrayQuadruplet(int[] numbers, int givenTarget)
    {
        if (numbers == null || numbers.Length < 4)
        {
            return new int[0];
        }
        Array.Sort(numbers);
        var firstTwoNumbersSums = new Dictionary<int, int[]>();
        int length = numbers.Length;
        for (int third = 0; third < length - 1; third++)
        {
            var thirdNumber = numbers[third];
            for (int fourth = third + 1; fourth < length; fourth++)
            {
                var fourthNumber = numbers[fourth];
                var lastTwoSum = thirdNumber + fourthNumber;
                var search = givenTarget - lastTwoSum;
                if (!firstTwoNumbersSums.ContainsKey(search))
                {
                    continue;
                }
                var firstTwo = firstTwoNumbersSums[search];
                return new int[] { numbers[firstTwo[0]], numbers[firstTwo[1]], thirdNumber, fourthNumber };
            }
            // It is time to add visited elements into the two-sum dictionary.
            // Argue that no pair of indexes both smaller than third needs to be added. Why?
            for (int firstId = 0; firstId < third; firstId++)
            {
                var firstNumber = numbers[firstId];
                var firstTwoSum = firstNumber + thirdNumber;
                if (!firstTwoNumbersSums.ContainsKey(firstTwoSum))
                {
                    firstTwoNumbersSums.Add(firstTwoSum, new int[] { firstId, third });
                }
            }
        }
        return new int[0];
    }
}
```