Problem
Problem description:
Design and implement a data structure for a Least Recently Used (LRU) cache. It should support the following operations: get and put.

get(key) – Get the value (will always be positive) of the key if the key exists in the cache, otherwise return -1.
put(key, value) – Set or insert the value if the key is not already present. When the cache reaches its capacity, it should invalidate the least recently used item before inserting the new item.

Follow up: Could you do both operations in O(1) time complexity?
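For example, a cache of capacity 2 would behave like this:

    LRUCache cache = new LRUCache(2);
    cache.put(1, 1);
    cache.put(2, 2);
    cache.get(1);       // returns 1; key 1 is now most recently used
    cache.put(3, 3);    // evicts key 2 (least recently used)
    cache.get(2);       // returns -1 (not found)
    cache.put(4, 4);    // evicts key 1
    cache.get(1);       // returns -1 (not found)
    cache.get(3);       // returns 3
    cache.get(4);       // returns 4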
I passed 17/18 test cases with this and failed the last one because it exceeded the time limit. I’m guessing something here isn’t O(1)? I’ve spent hours but can’t identify it.
class LRUCache {
    Map<Integer, Integer> cache;
    Queue<Integer> q;
    int capacity;

    public LRUCache(int capacity) {
        cache = new HashMap<>();
        q = new LinkedList<Integer>();
        this.capacity = capacity;
    }

    public int get(int key) {
        if (cache.get(key) == null || cache.get(key) == -1) return -1;
        int value = cache.get(key);
        q.remove(key);
        q.add(key);
        System.out.println("get() - key: " + key + " value: " + value);
        return value;
    }

    public void put(int key, int value) {
        if (cache.get(key) == null || cache.get(key) == -1) {
            if (q.size() >= capacity) {
                evict();
            }
        } else {
            q.remove(key);
        }
        q.add(key);
        cache.put(key, value);
        System.out.println("put()...key: " + key + " queue size: " + q.size());
    }

    private void evict() {
        int toRemove = q.remove();
        cache.put(toRemove, -1);
        System.out.println("Evict: " + toRemove + " queue size: " + q.size());
    }
}
/**
* Your LRUCache object will be instantiated and called as such:
* LRUCache obj = new LRUCache(capacity);
* int param_1 = obj.get(key);
* obj.put(key,value);
*/
Solution
class LRUCache {
    Map<Integer, Integer> cache;
    Queue<Integer> q;
    int capacity;

Is there any reason for not making these fields private?
    public int get(int key) {
        if (cache.get(key) == null || cache.get(key) == -1) return -1;

Why the special case if cache.get(key) == -1? I don’t see that in the spec.
        int value = cache.get(key);

This method has now called cache.get(key) three times. I think this could be optimised somewhat…
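One way to avoid the repeated lookups, sketched against your existing fields rather than offered as a final implementation, is to store the result of a single lookup in a local variable:

    public int get(int key) {
        // assuming evict() really removes entries from the map (see the note on
        // evict below), a single lookup plus a null check is enough
        Integer value = cache.get(key);
        if (value == null) return -1;
        q.remove(key);                    // still not O(1) – see the next point
        q.add(key);
        return value;
    }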
        q.remove(key);

This is not O(1). I suspect that you will need to implement your own linked list to get O(1) removal from the middle.
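The usual trick is a HashMap from key to a node of a hand-rolled doubly linked list kept in least- to most-recently-used order, so that once you have the node you can unlink it in constant time. A minimal sketch of just that part (the names are mine, not from your code, and sentinel head/tail nodes keep prev/next from ever being null):

    // node of a doubly linked list ordered from least to most recently used
    private static final class Node {
        int key, value;
        Node prev, next;
    }

    // with prev/next pointers, removing a node from the middle is O(1)
    private void unlink(Node node) {
        node.prev.next = node.next;
        node.next.prev = node.prev;
    }

For what it’s worth, java.util.LinkedHashMap constructed with accessOrder = true, together with an overridden removeEldestEntry, gives you this bookkeeping for free, although the exercise presumably expects you to build the structure yourself.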
System.out.println("get() - key: " + key + " value: " + value);
I would recommend removing debug printing before asking for code review.
    public void put(int key, int value) {

My observations are similar to those on get.
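For reference, here is a sketch of put with a single lookup, keeping your queue-based approach and assuming evict() is fixed to really remove entries (see the last point below):

    public void put(int key, int value) {
        if (cache.containsKey(key)) {
            q.remove(key);                // still not O(1), same caveat as in get
        } else if (q.size() >= capacity) {
            evict();
        }
        q.add(key);
        cache.put(key, value);
    }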
    private void evict() {
        int toRemove = q.remove();
        cache.put(toRemove, -1);

Huh? This answers my earlier question about special cases, but raises a more fundamental question. Do you understand what the point of an LRU cache is? You should never have cache.size() > capacity.
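An eviction should genuinely drop the entry rather than leaving a -1 tombstone behind. A minimal fix, keeping your structure (a sketch, not a full solution):

    private void evict() {
        int toRemove = q.remove();
        cache.remove(toRemove);   // actually drop the entry; the map never exceeds capacity
    }

With that change the -1 special cases in get and put disappear as well, which is why the earlier sketches only check for null.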