LeetCode · Mar 2026 · 8 min read

LeetCode #146 — LRU Cache: Every Way to Solve It

Go · Hash Map · Linked List

TL;DR

LRU Cache in O(1) get/put requires a hashmap for key lookup and a doubly-linked list for recency ordering. The hashmap stores node pointers so you can jump directly to any node and reorder it without scanning.

From brute force to the optimal O(1) doubly-linked-list + hashmap solution, with a full Go implementation and complexity analysis. In this post, I'll walk through the key concepts with code examples drawn from real production implementations.

The Problem

Design a data structure with O(1) get and put operations that evicts the least recently used item when capacity is exceeded. This is LC #146 — a classic that tests your ability to combine data structures.

Brute Force: O(N) with Ordered Map

The naive approach is a map from key to value, plus an ordered list tracking recency. On get, move the accessed key to the front of the list. On eviction, remove from the back. The problem: finding the key in the list to move it is O(N).
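To make the O(N) cost concrete, here's a minimal sketch of the brute-force approach. Names like `bruteLRU` and `touch` are illustrative only, not part of the final solution:

```go
package main

import "fmt"

// bruteLRU: a map for values plus a slice ordered from most- to
// least-recently used. Moving a key to the front requires scanning
// the slice, so Get and Put are O(N).
type bruteLRU struct {
	cap    int
	values map[int]int
	order  []int // order[0] is the most recently used key
}

func newBruteLRU(capacity int) *bruteLRU {
	return &bruteLRU{cap: capacity, values: make(map[int]int)}
}

// touch moves key to the front of the recency slice — the O(N) scan.
func (b *bruteLRU) touch(key int) {
	for i, k := range b.order {
		if k == key {
			b.order = append(b.order[:i], b.order[i+1:]...)
			break
		}
	}
	b.order = append([]int{key}, b.order...)
}

func (b *bruteLRU) Get(key int) int {
	v, ok := b.values[key]
	if !ok {
		return -1
	}
	b.touch(key)
	return v
}

func (b *bruteLRU) Put(key, value int) {
	b.values[key] = value
	b.touch(key)
	if len(b.values) > b.cap {
		lru := b.order[len(b.order)-1] // least recently used is at the back
		b.order = b.order[:len(b.order)-1]
		delete(b.values, lru)
	}
}

func main() {
	c := newBruteLRU(2)
	c.Put(1, 1)
	c.Put(2, 2)
	fmt.Println(c.Get(1)) // 1
	c.Put(3, 3)           // evicts key 2
	fmt.Println(c.Get(2)) // -1
}
```

The map gives O(1) value lookup, but the recency bookkeeping is the bottleneck — exactly the scan the linked-list version eliminates.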

Optimal Solution: HashMap + Doubly Linked List

The key insight: if the hashmap stores a pointer to the node in the linked list (not just the value), we can find any node in O(1) and then reorder it in O(1) because doubly linked list insertions/deletions are O(1) given a pointer to the node.

lru_cache.go
```go
type Node struct {
    key, val   int
    prev, next *Node
}

type LRUCache struct {
    cap        int
    cache      map[int]*Node
    head, tail *Node // sentinel nodes
}

func Constructor(capacity int) LRUCache {
    head := &Node{}
    tail := &Node{}
    head.next = tail
    tail.prev = head
    return LRUCache{
        cap:   capacity,
        cache: make(map[int]*Node),
        head:  head,
        tail:  tail,
    }
}

// remove unlinks node from the list in O(1).
func (c *LRUCache) remove(node *Node) {
    node.prev.next = node.next
    node.next.prev = node.prev
}

// insertFront links node right after the head sentinel (most recent slot).
func (c *LRUCache) insertFront(node *Node) {
    node.next = c.head.next
    node.prev = c.head
    c.head.next.prev = node
    c.head.next = node
}

func (c *LRUCache) Get(key int) int {
    if node, ok := c.cache[key]; ok {
        c.remove(node)      // refresh recency:
        c.insertFront(node) // unlink first, then relink at the front
        return node.val
    }
    return -1
}

func (c *LRUCache) Put(key int, value int) {
    if node, ok := c.cache[key]; ok {
        node.val = value // existing key: update value and refresh recency
        c.remove(node)
        c.insertFront(node)
        return
    }
    node := &Node{key: key, val: value}
    c.cache[key] = node
    c.insertFront(node)
    if len(c.cache) > c.cap {
        lru := c.tail.prev // LRU node sits just before the tail sentinel
        c.remove(lru)
        delete(c.cache, lru.key)
    }
}
```

Why Sentinel Nodes?

The head and tail sentinel nodes eliminate edge case checks. Without them, inserting to an empty list or removing the only node requires special handling. With sentinels, head.next is always the most recent node and tail.prev is always the LRU — even when the cache is empty.
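For contrast, here is a sketch of what insert and remove look like without sentinels, where head and tail point at real nodes. Every operation has to branch on the empty and single-node cases; the type and method names here are illustrative, not from the implementation above:

```go
package main

import "fmt"

// A doubly linked list without sentinels: head/tail are real nodes
// (or nil), so each link operation needs explicit nil checks.
type node struct {
	val        int
	prev, next *node
}

type dlist struct {
	head, tail *node
}

func (l *dlist) insertFront(n *node) {
	n.prev, n.next = nil, l.head
	if l.head != nil {
		l.head.prev = n
	} else {
		l.tail = n // empty list: tail must be set too
	}
	l.head = n
}

func (l *dlist) remove(n *node) {
	if n.prev != nil {
		n.prev.next = n.next
	} else {
		l.head = n.next // removing the head node
	}
	if n.next != nil {
		n.next.prev = n.prev
	} else {
		l.tail = n.prev // removing the tail node
	}
}

func main() {
	l := &dlist{}
	n := &node{val: 42}
	l.insertFront(n) // only works on an empty list because of the branch
	l.remove(n)      // removing the only node must reset both head and tail
	fmt.Println(l.head == nil && l.tail == nil) // true
}
```

With sentinels, all four branches disappear: a real node always has live prev and next pointers.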

Complexity Analysis

Operation        Complexity
Get              O(1) time
Put              O(1) time
Overall space    O(capacity)

Common Mistakes

  • Forgetting to remove the node before re-inserting it at the front on Get — if the node is already at the front, it ends up pointing to itself, creating a cycle.
  • On eviction, forgetting to delete the node from the hashmap — causes the map to grow unboundedly.
  • Using the wrong end for LRU: the most-recently-used node should be at head.next; the LRU is at tail.prev.
  • Not handling the case where Put updates an existing key — must move to front, not insert a duplicate.

Go Note

Go's container/list package provides a doubly linked list, but implementing one from scratch in an interview is better for demonstrating understanding. container/list stores values as interface{}, so every read requires a type assertion — friction you don't want under pressure.
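For reference, here is a sketch of the same cache built on container/list. The standard library handles the linking, but note the `el.Value.(*entry)` type assertions on every read — exactly the friction mentioned above. The wrapper names (`entry`, `LRU`, `NewLRU`) are my own, not a standard API:

```go
package main

import (
	"container/list"
	"fmt"
)

// entry is what we store in each list element; the key is kept so
// eviction can delete the right map entry.
type entry struct {
	key, val int
}

type LRU struct {
	cap   int
	ll    *list.List               // front = most recently used
	items map[int]*list.Element    // key -> element, for O(1) lookup
}

func NewLRU(capacity int) *LRU {
	return &LRU{cap: capacity, ll: list.New(), items: make(map[int]*list.Element)}
}

func (c *LRU) Get(key int) int {
	if el, ok := c.items[key]; ok {
		c.ll.MoveToFront(el)
		return el.Value.(*entry).val // type assertion on every read
	}
	return -1
}

func (c *LRU) Put(key, value int) {
	if el, ok := c.items[key]; ok {
		el.Value.(*entry).val = value
		c.ll.MoveToFront(el)
		return
	}
	c.items[key] = c.ll.PushFront(&entry{key, value})
	if c.ll.Len() > c.cap {
		back := c.ll.Back() // least recently used
		c.ll.Remove(back)
		delete(c.items, back.Value.(*entry).key)
	}
}

func main() {
	c := NewLRU(2)
	c.Put(1, 1)
	c.Put(2, 2)
	c.Put(3, 3) // evicts key 1
	fmt.Println(c.Get(1), c.Get(2), c.Get(3)) // -1 2 3
}
```

MoveToFront and Remove replace the hand-rolled remove/insertFront pair, so the logic is shorter — at the cost of losing compile-time type safety on the stored values.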