How Bun Beats Node.js in File Streaming Performance: The Power of a Shared 250 KB Buffer

When it comes to raw performance, especially in serving static files like package.json, Bun has been making waves with numbers that seem almost too good to be true. A recent benchmark shows:

  • Bun v1.2.16: 219,714 requests per second
  • Node.js v24.1.0: 39,178 requests per second

That’s roughly 5.6x the throughput. But what’s driving this massive difference?

One of the lesser-known but critical optimizations in Bun is this:

"All streaming file reads use the same single 250 KB buffer, so memory usage is O(1) instead of O(N). Reading more files won’t use more memory."

Let’s unpack this statement, why it matters, and how it affects real-world performance.


O(1) Memory Usage: Why It's a Game-Changer

In Bun’s design, there is only one 250 KB buffer allocated for reading files from disk. Regardless of how many files are being streamed concurrently, they all reuse this same buffer.

This results in constant memory usage, or O(1) memory complexity. It doesn't matter whether you're serving 10 files or 10,000 — memory use stays flat, predictable, and efficient.

Contrast this with Node.js, where each file stream typically creates its own buffer. That means:

  • Serving more files = allocating more memory
  • Memory usage scales linearly (O(N)) with the number of concurrent file reads
  • Under heavy load, it can trigger more garbage collection (GC), cache misses, and even memory exhaustion

Does the Fixed Buffer Limit Performance?

A fair question is whether using a fixed 250 KB buffer limits the size of files you can serve.

Short answer: Not really.

The buffer size only controls how much data is read from disk in one I/O operation. Even if you’re serving a large file (say, 10 MB), Bun will read it in 250 KB chunks, reusing the same buffer for each chunk.

While this may mean more read iterations than a runtime like Node.js (which might allocate a larger per-stream buffer), the trade-off is minimal, especially given the benefits:

  • Drastically reduced memory pressure
  • Less GC activity
  • More stable behavior under high concurrency

So unless you're serving gigabyte-scale files at very low concurrency, Bun's approach actually performs better overall.


Why Node.js Uses Per-Stream Buffers

Node.js allocates a new buffer for each stream — 64 KB by default for fs.createReadStream. This design:

  • Can improve throughput when reading a single large file
  • Reduces the number of system read calls

However, this comes at a cost:

  • Every new stream adds memory usage
  • If you’re serving hundreds or thousands of files concurrently, it scales poorly
  • Unbounded buffer growth = higher memory usage = more GC pauses

In contrast, Bun trades a tiny bit of per-stream optimization for massive overall system-level efficiency.


Real-World Analogy

Imagine a print shop with 1,000 print jobs:

  • Node.js gives every job its own paper tray — fast for one job, but clutters the room quickly.
  • Bun shares one ultra-fast tray for all jobs. It takes slightly more coordination, but produces far less waste.

Other Considerations

Here are a few more points to think about:

  • I/O coordination in Bun is likely more sophisticated. Reusing a buffer requires tight management of I/O pointers and concurrency — which Bun seems to handle elegantly.
  • CPU cache locality improves in Bun’s model because memory isn't fragmented across many buffers.
  • This optimization is only one of many: Bun's JavaScriptCore engine, its native code written in Zig, and its I/O stack all contribute to performance. But the buffer reuse is one of the most elegant and impactful tricks.
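To illustrate the coordination point (a minimal sketch, not Bun's actual mechanism): sharing one buffer across concurrent reads requires serializing access to it, which a simple promise chain acting as an async lock can express:

```javascript
// A promise chain used as an async mutex: each caller's task starts only
// after the previous caller's task has finished, so no two tasks touch
// the shared buffer at the same time.
let lock = Promise.resolve();

function withSharedBuffer(task) {
  const result = lock.then(task);
  // The next caller queues behind this one, even if this task throws.
  lock = result.catch(() => {});
  return result;
}
```

Each streaming read would wrap its use of the shared buffer in `withSharedBuffer`, trading a small scheduling cost for O(1) memory.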

Conclusion: Bounded Memory, Unbounded Performance

Bun’s decision to use a single 250 KB buffer for streaming file reads is a textbook case of smart trade-off engineering. By controlling memory usage up front, it avoids the unpredictable behavior that comes with scaling file servers in environments like Node.js.

So while Node.js may offer more raw flexibility in memory usage, Bun wins in the real-world scenario where scalability, predictability, and concurrency matter.

If you're building a high-performance web server or API that needs to serve static files under load, Bun’s buffer-sharing design is one of the reasons it's not just faster — it's smarter.
