Performance
evlog adds ~3µs of overhead per request (0.003ms), orders of magnitude below any HTTP framework or database call. Performance is tracked on every pull request via CodSpeed.
evlog vs alternatives
All benchmarks run with JSON output to no-op destinations. pino writes to /dev/null (sync), winston writes to a no-op stream, consola uses a no-op reporter, evlog uses silent mode.
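For concreteness, the alternatives' no-op setups look roughly like this (a sketch under the conditions above, not the benchmark suite verbatim; the exact code lives in packages/evlog/bench/):

```ts
import { Writable } from 'node:stream'
import pino from 'pino'
import winston from 'winston'
import { createConsola } from 'consola'

// pino: synchronous writes straight to /dev/null
const pinoLogger = pino(pino.destination({ dest: '/dev/null', sync: true }))

// winston: JSON format into a stream that discards every chunk
const devNull = new Writable({ write(_chunk, _enc, cb) { cb() } })
const winstonLogger = winston.createLogger({
  format: winston.format.json(),
  transports: [new winston.transports.Stream({ stream: devNull })],
})

// consola: a reporter that does nothing (note: this skips serialization)
const consolaLogger = createConsola({ level: 4, reporters: [{ log: () => {} }] })
```

The evlog silent-mode setup is omitted here; it lives with the rest of the benchmark code.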
Results
All numbers are operations per second (higher is better).

| Scenario | evlog | pino | consola | winston |
|---|---|---|---|---|
| Simple string log | 1.83M | 1.09M | 2.79M | 1.20M |
| Structured (5 fields) | 1.64M | 716.1K | 1.71M | 431.6K |
| Deep nested log | 1.55M | 464.9K | 1.01M | 164.0K |
| Child / scoped logger | 1.70M | 845.0K | 280.4K | 430.0K |
| Wide event lifecycle | 1.58M | 205.8K | — | 111.9K |
| Burst (100 logs) | 17.8K | 10.3K | 39.4K | 7.5K |
| Logger creation | 16.85M | 7.50M | 310.3K | 5.38M |
evlog wins 4 out of 7 head-to-head comparisons, and the wins that matter most are decisive: 7.7x faster than pino in the wide event pattern, 2.3x faster logger creation, and 3.3x faster deep nested logging. consola edges ahead on simple strings and burst (it uses a no-op reporter with no serialization), but evlog produces a single correlated event per request where traditional loggers emit N separate lines.
What is the "wide event lifecycle"?
This benchmark simulates a real API request. With evlog:

```ts
const log = createLogger({ method: 'POST', path: '/api/checkout', requestId: 'req_abc' })
log.set({ user: { id: 'usr_123', plan: 'pro' } })
log.set({ cart: { items: 3, total: 9999 } })
log.set({ payment: { method: 'card', last4: '4242' } })
log.emit({ status: 200 })
```

The pino equivalent spreads the same data across four separate log lines:

```ts
const child = pinoLogger.child({ method: 'POST', path: '/api/checkout', requestId: 'req_abc' })
child.info({ user: { id: 'usr_123', plan: 'pro' } }, 'user context')
child.info({ cart: { items: 3, total: 9999 } }, 'cart context')
child.info({ payment: { method: 'card', last4: '4242' } }, 'payment context')
child.info({ status: 200 }, 'request complete')
```
Same CPU cost, but evlog gives you everything in one place.
Why is evlog faster?
The numbers above aren't magic; they come from deliberate architectural choices:
- **In-place mutations, not copies.** log.set() writes directly into the context object via a recursive mergeInto function (sketched below). Other loggers clone objects on every call (object spread, Object.assign); evlog never allocates intermediate objects during context accumulation.
- **No serialization until drain.** Context stays as plain JavaScript objects throughout the request lifecycle, and JSON.stringify runs exactly once, at emit time. Traditional loggers serialize on every .info() call; that's 4x serialization for 4 log lines.
- **Lazy allocation.** Timestamps, sampling context, and override objects are only created when actually needed. If tail sampling is disabled (the common case), its context object is never allocated, and the Date instance used for ISO timestamps is reused across calls.
- **One event, not N lines.** For a typical request, pino emits 4+ JSON lines that all need serializing, transporting, and indexing; evlog emits one. That's 75% less work for your log drain, fewer bytes on the wire, and one row to query instead of four.
- **RegExp caching.** Glob patterns (used in sampling and route matching) are compiled once and cached, so repeated evaluations hit the cache instead of recompiling (see the sketch after this list).
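To make the first and last points concrete, here is a minimal sketch of the idea (illustrative, not evlog's actual source; only the mergeInto name comes from the codebase):

```ts
type Ctx = Record<string, unknown>

// In-place recursive merge: plain nested objects merge into the existing
// context; everything else overwrites the slot directly. No intermediate
// copies are allocated during accumulation.
function mergeInto(target: Ctx, source: Ctx): void {
  for (const key of Object.keys(source)) {
    const prev = target[key]
    const next = source[key]
    if (
      prev && typeof prev === 'object' && !Array.isArray(prev) &&
      next && typeof next === 'object' && !Array.isArray(next)
    ) {
      mergeInto(prev as Ctx, next as Ctx)
    } else {
      target[key] = next
    }
  }
}

// Compile-once glob cache: repeated sampling/route checks reuse the RegExp
// instead of recompiling the pattern on every evaluation.
const GLOB_CACHE = new Map<string, RegExp>()
function globToRegExp(pattern: string): RegExp {
  let re = GLOB_CACHE.get(pattern)
  if (!re) {
    const escaped = pattern.replace(/[.+?^${}()|[\]\\]/g, '\\$&')
    re = new RegExp('^' + escaped.replace(/\*/g, '.*') + '$')
    GLOB_CACHE.set(pattern, re)
  }
  return re
}
```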
When evlog might not win
The benchmarks above measure CPU + serialization cost on the main thread, with no real I/O. That's the standard setup pino, winston, and logtape use for their own benchmarks, but it leaves out a few scenarios where another logger can edge ahead. We want to be upfront about these:
- **Fire-and-forget hot paths with pino via a worker-thread transport.** In production, pino is typically configured with a worker-thread transport (pino-pretty, pino-loki, vendor-specific transports), which moves the serialization and I/O off the main thread entirely (see the sketch after this list). For a workload that emits hundreds of thousands of log.info('foo') lines per second with no context accumulation, pino-via-worker can hit ~2-3M ops/s on the main thread because it's just queueing. We can't benchmark that mode fairly inside a single-threaded vitest process, so it's not in our table, but it's a real scenario where pino is faster.
- **CLI / pretty-only output without serialization.** consola's no-op reporter mode in our benchmarks (level: 4, reporters: [{ log: () => {} }]) skips JSON serialization entirely. That's realistic if you're using consola for a CLI with terminal-only output, but it's why consola wins "simple string" and "burst": it's not doing the same work. evlog and pino both serialize to JSON; consola in those benchmarks does not. If your use case is "pretty terminal output, no shipping logs anywhere", consola is genuinely lighter.
- **Single log.info calls, no context accumulation.** On a bare pino.info('hello') vs the evlog equivalent, the gap is modest (1.83M vs 1.09M ops/s in our run, and it closes further if pino runs in async mode). evlog's ~7.7x advantage shows up specifically when you'd otherwise emit N separate lines for one logical operation. If you genuinely log one line per call and don't accumulate context, the speed delta is much smaller; pick evlog for the API ergonomics (log.set + structured errors), not raw throughput.
- **Wall-clock variance is real.** Vitest bench numbers shift ±5-10% between runs on the same machine (thermal throttling, GC, other processes). The numbers above come from a single run on a MacBook; CI tracks regressions via CodSpeed's CPU-instruction counting (deterministic, ±0.5% noise floor), but the absolute hz values on this page are a wall-clock snapshot, not a guaranteed floor.
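For reference, the pino mode described in the first point is configured like this (a sketch; the /dev/null destination is illustrative):

```ts
import pino from 'pino'

// Worker-thread transport: the hot path enqueues records while formatting
// and I/O run in a separate thread, off the main event loop.
const logger = pino(
  pino.transport({
    target: 'pino/file',                   // built-in transport, runs in a worker
    options: { destination: '/dev/null' }, // illustrative sink
  })
)

logger.info('foo') // main-thread cost is close to just queueing
```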
The takeaway: the wins are real for the wide event pattern, but if your stack is "pure fire-and-forget pino with a worker transport", that's the one setup we don't claim to beat.
Real-world overhead
For a typical API request:
| Component | Cost |
|---|---|
| Logger creation | 52ns |
| 3x set() calls | 105ns |
| emit() | 588ns |
| Sampling | 22ns |
| Enricher pipeline | 2.14µs |
| Total | ~2.9µs |
For context, a database query takes 1-50ms and an HTTP call 10-500ms. evlog's overhead is invisible.
Bundle size
Every entry point is tree-shakeable. You only pay for what you import.
| Entry | Gzip |
|---|---|
| core (evlog) | 510 B |
| toolkit (evlog/toolkit) | 720 B |
| utils | 1.58 kB |
| error | 1.46 kB |
| enrichers | 1.99 kB |
| pipeline | 1.35 kB |
| http | 1.22 kB |
| browser | 289 B |
| workers | 1.30 kB |
| client | 128 B |
A typical Node.js bundle (initLogger + createLogger) measures ~6.3 kB gzip end-to-end after tree-shaking; adding createRequestLogger, createError, parseError, and useLogger brings the bundle to ~7.2 kB gzip. Adapters and framework integrations sit on top: Hono is 617 B, Express 734 B, Axiom 1.48 kB. Bundle size is tracked on every PR and compared against the main baseline.
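As an example, a core-only setup pulls in just the 510 B entry. Note that the subpath for the error utilities is assumed here to mirror the entry table:

```ts
// Core only (~510 B gzip per the table above)
import { createLogger } from 'evlog'

// Paid for only when imported; 'evlog/error' is an assumed subpath
// mirroring the 'error' entry in the table.
import { createError, parseError } from 'evlog/error'
```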
Detailed benchmarks
Logger creation
| Operation | ops/sec | Mean |
|---|---|---|
| createLogger() (no context) | 19.20M | 52ns |
| createLogger() (shallow context) | 18.74M | 53ns |
| createLogger() (nested context) | 17.70M | 56ns |
| createRequestLogger() (method + path) | 16.91M | 59ns |
| createRequestLogger() (method + path + requestId) | 12.67M | 79ns |
Context accumulation (log.set())
| Operation | ops/sec | Mean |
|---|---|---|
| Shallow merge (3 fields) | 9.56M | 105ns |
| Shallow merge (10 fields) | 4.79M | 209ns |
| Deep nested merge | 8.04M | 124ns |
| 4 sequential calls | 7.05M | 142ns |
Event emission (log.emit())
| Operation | ops/sec | Mean |
|---|---|---|
| Emit minimal event | 1.93M | 519ns |
| Emit with context | 1.70M | 588ns |
| Full lifecycle (create + 3 sets + emit) | 1.59M | 628ns |
| Emit with error | 65.9K | 15.17µs |
emit with error is slower because Error.captureStackTrace() is an expensive V8 operation (~15µs). This only triggers when errors are thrown.

Payload scaling
| Payload | ops/sec | Mean |
|---|---|---|
| Small (2 fields) | 1.72M | 581ns |
| Medium (50 fields) | 569.8K | 1.76µs |
| Large (200 nested fields) | 131.2K | 7.62µs |
Sampling
| Operation | ops/sec | Mean |
|---|---|---|
| Tail sampling (shouldKeep) | 44.97M | 22ns |
| Full emit with head + tail | 7.01M | 143ns |
Enrichers
| Enricher | ops/sec | Mean |
|---|---|---|
| User Agent (Chrome) | 2.61M | 384ns |
| Geo (Vercel) | 3.88M | 258ns |
| Request Size | 12.37M | 81ns |
| Trace Context | 4.35M | 230ns |
| All combined (all headers) | 466.7K | 2.14µs |
Error handling
| Operation | ops/sec | Mean |
|---|---|---|
| createError() | 232.2K | 4.31µs |
| parseError() | 45.48M | 22ns |
| Round-trip (create + parse) | 231.4K | 4.32µs |
Middleware pipeline
| Operation | ops/sec | Mean |
|---|---|---|
| resolveMiddlewarePluginRunner (no plugins) | 37.70M | 27ns |
| resolveMiddlewarePluginRunner (2 plugins, cached) | 32.26M | 31ns |
| createMiddlewareLogger (no plugins, safe headers) | 4.41M | 227ns |
| createMiddlewareLogger (2 plugins, cached merge) | 4.13M | 242ns |
| Full request lifecycle (no plugins, no drain) | 993.7K | 1.01µs |
| Full request lifecycle (2 plugins, sync drain) | 621.2K | 1.61µs |
Methodology & trust
Can you trust these numbers?
Every benchmark on this page is open source and reproducible. The benchmark files live in packages/evlog/bench/. You can read the exact code, run it on your machine, and verify the results.
All libraries are tested under the same conditions:
- Same output mode: JSON to a no-op destination (no disk or network I/O measured)
- Same warmup: each benchmark runs for 500ms after JIT stabilization
- Same tooling: Vitest bench powered by tinybench (a minimal example follows this list)
- Same machine: when comparing libraries, all benchmarks run in the same process on the same hardware
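For a sense of what those files contain, a minimal Vitest bench looks like this (illustrative, not the repo's actual benchmark code):

```ts
import { bench, describe } from 'vitest'
import { createLogger } from 'evlog'

describe('full lifecycle', () => {
  bench('evlog: create + set + emit', () => {
    const log = createLogger({ method: 'POST', path: '/api/checkout' })
    log.set({ user: { id: 'usr_123' } })
    log.emit({ status: 200 })
  })
})
```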
CI regression tracking
Performance regressions are tracked on every pull request via two systems:
- CodSpeed runs all benchmarks using CPU instruction counting (not wall-clock timing). This eliminates noise from shared CI runners and produces deterministic, reproducible results. Regressions are flagged directly on the PR.
- Bundle size comparison measures all entry points against the main baseline and posts a size delta report as a PR comment.
Run it yourself
```sh
cd packages/evlog
pnpm run bench                            # all benchmarks
pnpm exec vitest bench bench/comparison/  # vs alternatives only
pnpm exec tsx bench/scripts/size.ts       # bundle size
```