Kelly Mears

The Signal in a Reducer

Array.prototype.reduce has a discourse problem. Some people use it for everything. Others think you should never use it at all. Both camps treat it as a matter of style. It isn't. It's a signal.

The type-level signal

The three core array methods:

  • map: Array<T> → Array<U>. New values, same shape.
  • filter: Array<T> → Array<T>. Fewer elements, same type.
  • reduce: Array<T> → U. Everything can change.

When I see .reduce(), it tells me something a for loop doesn't: a transformation between types is happening. An array is becoming a number, a lookup table, a tree. Same reason we use map instead of a for loop that pushes into an array — the method communicates intent at the expression level, not the statement level.
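In miniature, with hypothetical data — the output type differs from the element type, which neither map nor filter can express:

```typescript
const words = ['reduce', 'map', 'filter']

// Array<string> → number: a type change only reduce can signal.
const totalLength = words.reduce((sum, w) => sum + w.length, 0)
// → 15
```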

Mutate vs. produce

A colleague of mine uses let instead of const when declaring an object he intends to mutate in scope. The language doesn't require it — but it's a consistent signal: this data is going to change.

A reducer works the same way but at the expression level. Where let obj = {} followed by a loop says "I'm going to mutate something," a reduce says "I'm going to produce something."
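The contrast in miniature (hypothetical values):

```typescript
// "I'm going to mutate something": the let binding advertises change.
let sum = 0
for (const n of [1, 2, 3]) sum += n

// "I'm going to produce something": one assignment, one expression.
const total = [1, 2, 3].reduce((acc, n) => acc + n, 0)
```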

The accumulator

The imperative version always leaks a mutable binding:

const users = [
  { role: 'admin', name: 'Alice' },
  { role: 'editor', name: 'Bob' },
  { role: 'admin', name: 'Carol' },
  { role: 'editor', name: 'Dave' },
]

let grouped: Record<string, string[]> = {}
for (const user of users) {
  if (!grouped[user.role]) {
    grouped[user.role] = []
  }
  grouped[user.role].push(user.name)
}

grouped starts empty and gets mutated across iterations. Every line of the loop body can read from and write to it. If this loop were longer, you'd need to trace mutations to understand the final shape.

Compare:

const grouped = users.reduce<Record<string, string[]>>(
  (acc, user) => ({
    ...acc,
    [user.role]: [...(acc[user.role] ?? []), user.name],
  }),
  {}
)

grouped is const. Assigned once, to the result of the transformation. The accumulator is scoped entirely within the callback. Nothing to trace.

Pipelines

Reduce is a natural terminal operation in a chain:

const data = [1, 2, 3, 4, 5]
const result = data
  .filter((n) => n % 2 === 0)
  .map((n) => n * 2)
  .reduce((acc, n) => acc + n, 0)

Each line does one thing. The for loop equivalent tangles filtering, mapping, and accumulating into a single block — you have to read the whole body to understand what's happening.

const data = [1, 2, 3, 4, 5]
let result = 0
for (const n of data) {
  if (n % 2 === 0) {
    result += n * 2
  }
}

Step by step: filter keeps [2, 4], map doubles them to [4, 8], and reduce folds that to 12.

The readability cliff

Reduce has one, and it's easy to fall off. Nested reduces are almost always wrong. If the callback exceeds three or four lines, extract it. If the accumulator type isn't obvious from the seed value, annotate it — TypeScript's inference on reduce is unreliable.
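One way to pin the accumulator type down is to pass reduce's type parameter explicitly — a sketch with hypothetical data:

```typescript
type Totals = Record<string, number>

const items = [
  { sku: 'a', qty: 2 },
  { sku: 'b', qty: 3 },
  { sku: 'a', qty: 1 },
]

// The explicit <Totals> pins the accumulator's type; otherwise
// TypeScript may infer it from the bare {} seed.
const totals = items.reduce<Totals>(
  (acc, item) => ({ ...acc, [item.sku]: (acc[item.sku] ?? 0) + item.qty }),
  {}
)
// → { a: 3, b: 3 }
```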

Here's a reduce doing too much:

// Don't do this
const result = data.reduce((acc, item) => {
  const key = item.category.toLowerCase().trim()
  const existing = acc.get(key)
  if (existing) {
    existing.count += 1
    existing.total += item.value
    existing.avg = existing.total / existing.count
  } else {
    acc.set(key, { count: 1, total: item.value, avg: item.value })
  }
  return acc
}, new Map<string, { count: number; total: number; avg: number }>())

Key normalization, arithmetic, and complex accumulator management — all in one callback. The reduce isn't adding clarity here; it's hiding complexity. A for...of with named helpers is clearer:

const result = new Map<string, CategoryStats>()

for (const item of data) {
  const key = normalizeKey(item.category)
  const stats = result.get(key) ?? createStats()
  result.set(key, updateStats(stats, item.value))
}

If you can't read the reducer callback in a single glance, it's not earning its keep.
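For completeness, the helpers that loop leans on might look like this — a sketch; CategoryStats and the function bodies are assumptions, not from the original:

```typescript
type CategoryStats = { count: number; total: number; avg: number }

// Each helper does one nameable thing, so the loop body reads as a sentence.
const normalizeKey = (raw: string): string => raw.toLowerCase().trim()

const createStats = (): CategoryStats => ({ count: 0, total: 0, avg: 0 })

const updateStats = (stats: CategoryStats, value: number): CategoryStats => {
  const count = stats.count + 1
  const total = stats.total + value
  return { count, total, avg: total / count }
}
```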


Use reduce when it communicates something a loop can't: a new shape is being built from an old one. When it does that clearly, it's worth it. When it doesn't, use a loop.