Problem

https://leetcode.com/problems/last-stone-weight/

We have a collection of stones, each stone has a positive integer weight. Each turn, we choose the two heaviest stones and smash them together. Suppose the stones have weights x and y with x <= y. The result of this smash is: if x == y, both stones are totally destroyed; if x != y, the stone of weight x is totally destroyed, and the stone of weight y has new weight y - x. At the end, there is at most one stone left. Return the weight of this stone (or 0 if there are no stones left).

Example 1:

```
Input: [2,7,4,1,8,1]
Output: 1
Explanation:
We combine 7 and 8 to get 1, so the array converts to [2,4,1,1,1], then
we combine 2 and 4 to get 2, so the array converts to [2,1,1,1], then
we combine 2 and 1 to get 1, so the array converts to [1,1,1], then
we combine 1 and 1 to get 0, so the array converts to [1], then that's the value of the last stone.
```

Please review performance.

```
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace Heap
{
    /// <summary>
    /// https://leetcode.com/problems/last-stone-weight/
    /// </summary>
    [TestClass]
    public class LastStoneWeightMaxHeap
    {
        [TestMethod]
        public void MaxHeapTest()
        {
            BinaryMaxHeap h = new BinaryMaxHeap(6);
            h.InsertKey(2);
            h.InsertKey(7);
            h.InsertKey(4);
            h.InsertKey(1);
            h.InsertKey(8);
            h.InsertKey(1);
            Assert.AreEqual(8, h.ExtractMax());
            Assert.AreEqual(7, h.ExtractMax());
            Assert.AreEqual(4, h.ExtractMax());
            Assert.AreEqual(2, h.ExtractMax());
            Assert.AreEqual(1, h.ExtractMax());
            Assert.AreEqual(1, h.ExtractMax());
        }

        [TestMethod]
        public void LastStoneWeightTest()
        {
            int[] input = { 2, 7, 4, 1, 8, 1 };
            Assert.AreEqual(1, LastStoneWeight(input));
        }

        [TestMethod]
        public void LastStoneWeightCaseTest()
        {
            int[] input = { 2, 2 };
            Assert.AreEqual(0, LastStoneWeight(input));
        }

        public int LastStoneWeight(int[] stones)
        {
            BinaryMaxHeap h = new BinaryMaxHeap(stones.Length);
            foreach (var stone in stones)
            {
                h.InsertKey(stone);
            }
            while (h.Count > 1)
            {
                var stone1 = h.ExtractMax();
                var stone2 = h.ExtractMax();
                if (stone1 == stone2)
                {
                    continue;
                }
                h.InsertKey(stone1 - stone2);
            }
            return h.ExtractMax();
        }
    }

    public class BinaryMaxHeap
    {
        private readonly int[] _heapArr;
        private readonly int _capacity;
        public uint _heapSize;

        public BinaryMaxHeap(int capacity)
        {
            _capacity = capacity;
            _heapSize = 0;
            _heapArr = new int[capacity];
        }

        public void InsertKey(int key)
        {
            if (_heapSize == _capacity)
            {
                throw new StackOverflowException("overflow can't insert key");
            }
            //insert the new key at the end
            _heapSize++;
            uint i = _heapSize - 1;
            _heapArr[i] = key;
            //fix the heap as max heap
            // Fix the max heap property if it is violated
            while (i != 0 && _heapArr[Parent(i)] < _heapArr[i]) //bubble is generic specific
            {
                Swap(i, Parent(i));
                i = Parent(i);
            }
        }

        public int ExtractMax()
        {
            if (_heapSize <= 0)
            {
                return 0;
            }
            if (_heapSize == 1)
            {
                _heapSize--;
                return _heapArr[0];
            }
            // Store the minimum value, and remove it from heap
            int root = _heapArr[0];
            _heapArr[0] = _heapArr[_heapSize - 1];
            _heapSize--;
            Heapify(0);
            return root;
        }

        private void Heapify(uint i)
        {
            uint l = Left(i);
            uint r = Right(i);
            uint largest = i;
            if (l < _heapSize && _heapArr[i] < _heapArr[l])
            {
                largest = l;
            }
            if (r < _heapSize && _heapArr[largest] < _heapArr[r])
            {
                largest = r;
            }
            if (largest != i)
            {
                Swap(i, largest);
                Heapify(largest);
            }
        }

        private void Swap(uint key1, uint key2)
        {
            int temp = _heapArr[key2];
            _heapArr[key2] = _heapArr[key1];
            _heapArr[key1] = temp;
        }

        private uint Parent(uint i)
        {
            return (i - 1) / 2;
        }

        private uint Right(uint i)
        {
            return 2 * i + 2;
        }

        private uint Left(uint i)
        {
            return 2 * i + 1;
        }

        public int GetMax()
        {
            return _heapArr[0];
        }

        public uint Count
        {
            get { return _heapSize; }
        }
    }
}
```

Solution

```
[TestMethod]
public void LastStoneWeightTest()
{
    int[] input = { 2, 7, 4, 1, 8, 1 };
    Assert.AreEqual(1, LastStoneWeight(input));
}

[TestMethod]
public void LastStoneWeightCaseTest()
{
    int[] input = { 2, 2 };
    Assert.AreEqual(0, LastStoneWeight(input));
}
```

These tests could be combined using `DataTestMethod` and `DataRow`.
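Roughly as follows (a sketch assuming MSTest v2, which supplies these attributes; the data rows are taken from the two existing tests):

```csharp
// Sketch: the two tests folded into one data-driven test.
[DataTestMethod]
[DataRow(new[] { 2, 7, 4, 1, 8, 1 }, 1)]
[DataRow(new[] { 2, 2 }, 0)]
public void LastStoneWeightTest(int[] stones, int expected)
{
    Assert.AreEqual(expected, LastStoneWeight(stones));
}
```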

```
BinaryMaxHeap h = new BinaryMaxHeap(stones.Length);
foreach (var stone in stones)
{
    h.InsertKey(stone);
}
```

This is inefficient. Inserting like this takes $O(n \lg n)$ time, whereas organising a randomly ordered array so that it satisfies the heap invariant can be done in $O(n)$ time.

Note that this operation on the entire array is commonly referred to as `heapify`, not the restoring of the invariant for a single insertion or deletion. I would rename `Heapify` to `DownHeap`.
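For illustration, a sketch of the $O(n)$ bottom-up build (the `BuildMaxHeap` and `DownHeap` names here are my own, not from the original class): every internal node is sifted down, starting from the parent of the last element.

```csharp
using System;

class HeapBuildSketch
{
    // Bottom-up heap construction: sift down every internal node,
    // starting from the parent of the last element. Runs in O(n).
    static void BuildMaxHeap(int[] a)
    {
        for (int i = a.Length / 2 - 1; i >= 0; i--)
        {
            DownHeap(a, i, a.Length);
        }
    }

    // Restore the max-heap property for the subtree rooted at i.
    static void DownHeap(int[] a, int i, int size)
    {
        while (true)
        {
            int largest = i, l = 2 * i + 1, r = 2 * i + 2;
            if (l < size && a[l] > a[largest]) largest = l;
            if (r < size && a[r] > a[largest]) largest = r;
            if (largest == i) return;
            (a[i], a[largest]) = (a[largest], a[i]);
            i = largest;
        }
    }

    static void Main()
    {
        int[] a = { 2, 7, 4, 1, 8, 1 };
        BuildMaxHeap(a);
        Console.WriteLine(a[0]); // the maximum, 8, is now at the root
    }
}
```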

```
private readonly int[] _heapArr;
private readonly int _capacity;
```

What is the purpose of `_capacity`? Would it not be better to eliminate it in favour of `_heapArr.Length`?

```
_heapSize++;
uint i = _heapSize - 1;
_heapArr[i] = key;
```

This is a matter of opinion, but I think that

```
_heapArr[_heapSize++] = key;
```

is perfectly readable, and certainly I doubt many people would complain about

```
_heapArr[_heapSize] = key;
_heapSize++;
```

```
//fix the heap as max heap
// Fix the max heap property if it is violated
while (i != 0 && _heapArr[Parent(i)] < _heapArr[i]) //bubble is generic specific
{
    Swap(i, Parent(i));
    i = Parent(i);
}
```

I don’t think the double-comment before the loop is necessary. I don’t understand the third comment.

Why is moving down the heap pulled out as a method, but not moving up the heap?
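For symmetry it might look something like this, shown here on a free-standing array for brevity (the `UpHeap` name is hypothetical):

```csharp
using System;

class UpHeapSketch
{
    static int[] heap = new int[16];
    static int size = 0;

    // Hypothetical UpHeap: bubble the element at index i up to its place,
    // mirroring the loop currently inlined in InsertKey.
    static void UpHeap(int i)
    {
        while (i != 0 && heap[(i - 1) / 2] < heap[i])
        {
            int p = (i - 1) / 2;
            (heap[i], heap[p]) = (heap[p], heap[i]);
            i = p;
        }
    }

    static void Insert(int key)
    {
        heap[size] = key;
        UpHeap(size);
        size++;
    }

    static void Main()
    {
        foreach (var x in new[] { 2, 7, 4, 1, 8, 1 }) Insert(x);
        Console.WriteLine(heap[0]); // 8, the maximum, ends up at the root
    }
}
```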

```
public int ExtractMax()
{
    if (_heapSize <= 0)
    {
        return 0;
    }
```

Not an exception?

```
if (_heapSize == 1)
{
    _heapSize--;
    return _heapArr[0];
}
// Store the minimum value, and remove it from heap
int root = _heapArr[0];
_heapArr[0] = _heapArr[_heapSize - 1];
_heapSize--;
Heapify(0);
return root;
```

I would be inclined to combine the two cases:

```
int root = _heapArr[0];
_heapSize--;
if (_heapSize > 0)
{
    _heapArr[0] = _heapArr[_heapSize];
    Heapify(0);
}
return root;
```

```
private void Heapify(uint i)
{
    ...
    if (largest != i)
    {
        Swap(i, largest);
        Heapify(largest);
    }
}
```

Although .Net does support tail recursion, I believe it’s better supported in F# than C#. It might be worth rewriting this to be iterative instead of recursive.
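A sketch of the same method with the recursion turned into a loop, keeping the existing helpers and behaviour:

```csharp
// Iterative Heapify: the tail call on 'largest' becomes a loop.
private void Heapify(uint i)
{
    while (true)
    {
        uint l = Left(i);
        uint r = Right(i);
        uint largest = i;
        if (l < _heapSize && _heapArr[i] < _heapArr[l])
        {
            largest = l;
        }
        if (r < _heapSize && _heapArr[largest] < _heapArr[r])
        {
            largest = r;
        }
        if (largest == i)
        {
            return;
        }
        Swap(i, largest);
        i = largest; // continue sifting down from the swapped position
    }
}
```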

```
private uint Parent(uint i)
{
    return (i - 1) / 2;
}
```

As a matter of style here I would prefer to use `=>`, but this is very subjective.
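I.e. the index helpers as expression-bodied members (wrapped in a small demo class here so the rewrite can be run standalone):

```csharp
using System;

class ExpressionBodiedSketch
{
    // The three index helpers as expression-bodied members.
    static uint Parent(uint i) => (i - 1) / 2;
    static uint Left(uint i) => 2 * i + 1;
    static uint Right(uint i) => 2 * i + 2;

    static void Main()
    {
        Console.WriteLine(Parent(5)); // 2
        Console.WriteLine(Left(2));   // 5
        Console.WriteLine(Right(2));  // 6
    }
}
```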

In terms of performance, the only obvious improvement is the way that the heap is initialised, and that's not actually affecting the overall asymptotic performance. There may be a way to do better than $O(n \lg n)$ time overall, but it certainly isn't easy to see. A more complicated heap might give slight improvements to the average running time (e.g. by exploiting the fact that after the initial insertions all insertions will be in descending order, so you can bucket on the most significant bit), but KISS and YAGNI.

I commend you for including the test framework. It would be good to have some more tests: e.g. in my experience it's easy to have subtle bugs in heaps, so I now try to test them by brute force over all permutations of a small number of insertions and removals. For `LastStoneWeight` I would consider writing a test case which uses a seeded random number generator to produce a thousand test cases, validating against a trivial but slow implementation such as proposed in Justin's answer.
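A sketch of such a harness; the slow reference implementation here is my own (sort, smash the two largest, repeat), and in the real test the loop would also call `LastStoneWeight` on a copy of the same input and compare the two results:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class RandomizedCheckSketch
{
    // Trivial but slow reference: repeatedly sort and smash the two heaviest.
    static int LastStoneWeightSlow(List<int> stones)
    {
        while (stones.Count > 1)
        {
            stones.Sort();
            int y = stones[stones.Count - 1];
            int x = stones[stones.Count - 2];
            stones.RemoveRange(stones.Count - 2, 2);
            if (x != y) stones.Add(y - x);
        }
        return stones.Count == 0 ? 0 : stones[0];
    }

    static void Main()
    {
        // Sanity-check the reference against the known example first.
        if (LastStoneWeightSlow(new List<int> { 2, 7, 4, 1, 8, 1 }) != 1)
            throw new Exception("reference implementation is wrong");

        // Seeded generator => reproducible test cases.
        var rng = new Random(12345);
        for (int t = 0; t < 1000; t++)
        {
            var stones = Enumerable.Range(0, rng.Next(1, 8))
                                   .Select(_ => rng.Next(1, 20))
                                   .ToList();
            // In the real test, compare against the heap-based solution, e.g.:
            // Assert.AreEqual(LastStoneWeightSlow(copy), LastStoneWeight(stones.ToArray()));
            LastStoneWeightSlow(stones);
        }
        Console.WriteLine("ok");
    }
}
```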