The five-minute refresher
REST serves one resource per endpoint; GraphQL takes one query and returns exactly the shape the client asked for. That single difference drives every speed story that follows.
Why the speed debate never dies
Developers swear GraphQL eliminates over-fetching, yet ops teams warn it can tank the database. Both camps are right, because “fast” has three faces: network time, server time, and database time. The winner depends on which face you measure.
The lab setup we will reference
All tests below use the same stack: Node.js 20, Postgres 15, and a 100 Mbps link. Payloads are JSON, gzip is on, and the dataset is 50 k users, each with 10 posts and 50 comments. We hit the endpoint until latency stabilizes, then record the 95th percentile. No magic hardware, just a plain VPS with two cores.
Round one: single-resource fetch
Client needs one user profile. REST asks `GET /users/42`. The server returns 240 bytes in 18 ms. The equivalent GraphQL query, `{user(id:42){name}}`, clocks 22 ms. REST wins by 4 ms because it skips the query parser. The gap is tiny, but at 10 k rps that is 40 extra seconds of CPU per second of wall clock.
Round two: nested collections
Now the client wants the user, their posts, and the last three comments per post. REST demands three hops: `/users/42`, `/users/42/posts`, then `/posts/n/comments`. Even pipelined, this totals 215 ms. GraphQL asks once:
{ user(id:42){ name posts{ title comments(last:3){body} } }}
The server answers in 65 ms. GraphQL wins by 3× because it collapses network round-trips.
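For concreteness, here is what the two fetch patterns look like from the client side. A minimal TypeScript sketch assuming Node 20's global fetch; the base URL and helper names are made up, and the "last three comments" limit is left to the server:

```ts
const API_BASE = "http://localhost:3000"; // assumed lab server address

// Three REST hops. The comment requests can't start until the post IDs arrive,
// so every dependency adds a full network round-trip.
async function restProfile(userId: number) {
  const user = await fetch(`${API_BASE}/users/${userId}`).then((r) => r.json());
  const posts = await fetch(`${API_BASE}/users/${userId}/posts`).then((r) => r.json());
  const comments = await Promise.all(
    posts.map((p: { id: number }) =>
      fetch(`${API_BASE}/posts/${p.id}/comments`).then((r) => r.json())
    )
  );
  return { user, posts, comments };
}

// One GraphQL request: the nesting is resolved server-side.
async function graphqlProfile(userId: number) {
  const query = `{ user(id: ${userId}) { name posts { title comments(last: 3) { body } } } }`;
  const res = await fetch(`${API_BASE}/graphql`, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ query }),
  });
  return res.json();
}
```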
Round three: many entities, one query
News-feed scenario: grab the newest 20 posts with their authors. REST needs 21 requests when the client follows links resource by resource: one for the feed, then one per author. GraphQL still needs one, but beware the N+1 monster: if the resolver fetches each author separately, the query issues 20 extra selects. With naive code the 95th percentile balloons to 890 ms. Add a DataLoader (batch and cache per request) and the time drops to 78 ms. Same single request, two very different numbers. The story is not GraphQL versus REST; it is eager loading versus lazy loading.
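The DataLoader fix is worth seeing in code. A minimal sketch, assuming the dataloader and pg packages and the lab's users table; the resolver wiring is illustrative, not the benchmark code itself:

```ts
import DataLoader from "dataloader";
import { Pool } from "pg";

const db = new Pool(); // connection settings omitted

// Batches every author lookup queued in the same tick into one SELECT.
// In a real server, create one loader per request so its cache is per-request.
const authorLoader = new DataLoader<number, { id: number; name: string } | null>(
  async (ids) => {
    const { rows } = await db.query(
      "SELECT id, name FROM users WHERE id = ANY($1)",
      [ids]
    );
    // DataLoader expects results in the same order as the requested keys.
    const byId = new Map<number, { id: number; name: string }>();
    for (const r of rows) byId.set(r.id, r);
    return ids.map((id) => byId.get(id) ?? null);
  }
);

export const resolvers = {
  Post: {
    // Naive version (the N+1 shape) would run one SELECT per post:
    //   author: (post) => db.query("SELECT ... WHERE id = $1", [post.author_id])
    // Batched version: 20 posts still trigger exactly one author query.
    author: (post: { author_id: number }) => authorLoader.load(post.author_id),
  },
};
```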
CPU and memory on the wire
GraphQL’s JSON is usually smaller; in the feed test it is 8 kB versus 42 kB for REST. Smaller bodies mean fewer packets, so throughput rises. On the flip side, parsing a 400-line query burns 2.3 ms of CPU versus 0.2 ms for a static route. The trade-off favors GraphQL once the response exceeds 15 kB or you need three-plus REST calls.
Caching: where REST still reigns
HTTP caches love URIs. A CDN can sit in front of `/users/42` and serve 304s all day. GraphQL POSTs to `/graphql` with the query in the body, so the URI never changes. You can move the query to a GET parameter, but proxies rarely cache long URLs, and you still need a custom key that covers every requested field. The safest path is application-level caching with DataLoader or Redis, which takes effort REST gets for free.
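If you do go the application-level route, the core of it is small. A sketch assuming ioredis, with the cache key derived from the query text plus variables; invalidation and per-user data are deliberately left out:

```ts
import { createHash } from "node:crypto";
import Redis from "ioredis";

const redis = new Redis(); // connection settings omitted

// Cache whole responses for public data, keyed on query text + variables.
async function cachedExecute(
  query: string,
  variables: Record<string, unknown>,
  execute: () => Promise<unknown>,
  ttlSeconds = 60
): Promise<unknown> {
  const key =
    "gql:" +
    createHash("sha256")
      .update(query + JSON.stringify(variables))
      .digest("hex");

  const hit = await redis.get(key);
  if (hit) return JSON.parse(hit); // served without touching resolvers or Postgres

  const result = await execute();
  await redis.set(key, JSON.stringify(result), "EX", ttlSeconds);
  return result;
}
```

Hashing the full query text is the usual shortcut precisely because the key has to cover every requested field.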
Rate limiting patterns
REST maps neatly to “100 requests per minute.” GraphQL’s single endpoint hides a storm of work: a single query can ask for 1000 nodes. The common fix is complexity analysis: assign points per field and reject the query when the total cost tops a budget, say 1000 points. GitHub’s public API takes this approach, allowing deep queries but capping overall work.
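A hand-rolled version of that check, using graphql-js to walk the parsed query before executing it; the one-point-per-field model and the first/last multipliers are assumptions, and production systems usually weight fields individually:

```ts
import { parse, visit } from "graphql";

const MAX_COST = 1000;

// Crude cost model: every field costs one point, and a paginating argument
// (first/last: N) multiplies the cost of everything nested beneath it.
export function estimateCost(query: string): number {
  let cost = 0;
  const multipliers: number[] = [1];

  visit(parse(query), {
    Field: {
      enter(node) {
        const factor = multipliers[multipliers.length - 1];
        cost += factor;

        const pageArg = node.arguments?.find(
          (a) => a.name.value === "first" || a.name.value === "last"
        );
        const size =
          pageArg && pageArg.value.kind === "IntValue"
            ? parseInt(pageArg.value.value, 10)
            : 1;
        multipliers.push(factor * size);
      },
      leave() {
        multipliers.pop();
      },
    },
  });

  return cost;
}

// Reject before execution when the estimate blows the budget.
export function assertAffordable(query: string): void {
  const cost = estimateCost(query);
  if (cost > MAX_COST) {
    throw new Error(`Query cost ${cost} exceeds the ${MAX_COST}-point budget`);
  }
}
```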
When the database is the bottleneck
Both styles bottom out at the same Postgres row. GraphQL resolvers tempt you to fetch field by field; REST tempts you to over-include “just in case.” Either sin drives disk I/O. The remedy is the same: look at the query log, add composite indexes, and join aggressively. No API lipstick can hide a missing index.
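As one concrete example of “join aggressively,” the round-three feed can come back from a single indexed query. The table and column names below are assumptions, since the article never spells out the schema:

```ts
import { Pool } from "pg";

const db = new Pool(); // connection settings omitted

// Assumed DDL, run once:
//   CREATE INDEX posts_created_author_idx ON posts (created_at DESC, author_id);
// The index serves the ORDER BY ... LIMIT directly and keeps the join column close at hand.

// One joined query for the feed instead of a posts query plus 20 author lookups.
export async function feed(limit = 20) {
  const { rows } = await db.query(
    `SELECT p.id, p.title, p.created_at,
            u.id AS author_id, u.name AS author_name
       FROM posts p
       JOIN users u ON u.id = p.author_id
      ORDER BY p.created_at DESC
      LIMIT $1`,
    [limit]
  );
  return rows;
}
```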
Real-world rule of thumb
Pick REST when you control the client count, need aggressive edge caching, or serve mostly CRUD resources. Pick GraphQL when the client is a rapidly changing UI that wants nested data, or when you must support mobile and web off the same endpoint. Start with REST if you are unsure; you can always layer GraphQL on top later.
Performance checklist for GraphQL
- Use DataLoader or equivalent to batch per-request.
- Limit query depth (a cap of 7 is a common default) and complexity.
- Persisted queries shrink payloads and skip parsing; see the sketch after this list.
- Turn on Apollo response cache for public data.
- Log slow resolvers; add DB indexes until mean time drops under 20 ms.
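A bare-bones take on the persisted-queries item above: hash every known query at build time, parse it once, and let clients send only the hash. Apollo's automatic persisted queries protocol is richer, but this is the core idea:

```ts
import { parse, type DocumentNode } from "graphql";
import { createHash } from "node:crypto";

// Allow-list built at deploy time: hash → pre-parsed document.
// Parsing (the 2.3 ms from the CPU section) happens once here, not per request.
const persisted = new Map<string, DocumentNode>();

export function registerQuery(query: string): string {
  const hash = createHash("sha256").update(query).digest("hex");
  persisted.set(hash, parse(query));
  return hash; // ship this hash to the client instead of the query text
}

// At request time the client sends the hash: a few dozen bytes instead of kilobytes.
export function lookupQuery(hash: string): DocumentNode {
  const doc = persisted.get(hash);
  if (!doc) throw new Error("Unknown persisted query");
  return doc;
}
```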
Performance checklist for REST
- Design endpoints around use-cases, not database tables.
- Serve sparse field sets with `?fields=` to cut bytes; see the sketch after this list.
- Enable ETag and Last-Modified headers for free 304s.
- Use HTTP/2 multiplexing to hide latency (server push is deprecated in major browsers).
- Version via the Accept header, not the URL, so caches stay warm.
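A sketch of the sparse-fields item in Express; the /users/:id route and the loadUser stub are hypothetical, and Express's default weak ETag is what turns repeat requests into 304s:

```ts
import express from "express";

const app = express();

// Trim the payload to the fields the client asked for,
// e.g. GET /users/42?fields=name,email.
app.get("/users/:id", async (req, res) => {
  const user = await loadUser(Number(req.params.id));

  const fields =
    typeof req.query.fields === "string" ? req.query.fields.split(",") : null;

  const body = fields
    ? Object.fromEntries(fields.map((f) => [f, user[f]]))
    : user;

  // Express computes a weak ETag for the JSON body by default, so a client
  // that revalidates gets a 304 when nothing changed.
  res.json(body);
});

app.listen(3000);

// Stub so the sketch runs on its own; swap in the real data layer.
async function loadUser(id: number): Promise<Record<string, unknown>> {
  return { id, name: "Ada", email: "ada@example.com" };
}
```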
Migration without tears
Teams often strap GraphQL in front of existing REST services. This “API gateway” pattern lets you deliver quick wins—single request for a dashboard—without touching legacy code. Measure first, optimize second, rewrite never.
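In code, the gateway can be as thin as resolvers that proxy the legacy routes. A sketch: the base URL, route shapes, and field names are assumptions standing in for whatever the existing REST service exposes:

```ts
// Resolvers that delegate to the existing REST service instead of a database.
const REST_BASE = "http://legacy.internal"; // assumed address of the old API

async function getJSON(path: string) {
  const res = await fetch(`${REST_BASE}${path}`);
  if (!res.ok) throw new Error(`${path} failed with status ${res.status}`);
  return res.json();
}

// One GraphQL request from the dashboard fans out to the old endpoints
// server-side, where the hops are cheap, instead of across the client's link.
export const resolvers = {
  Query: {
    user: (_: unknown, args: { id: number }) => getJSON(`/users/${args.id}`),
  },
  User: {
    posts: (user: { id: number }) => getJSON(`/users/${user.id}/posts`),
  },
  Post: {
    comments: (post: { id: number }) => getJSON(`/posts/${post.id}/comments`),
  },
};
```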
Key takeaways
GraphQL shines when network round-trips dominate. REST shines when caching and simplicity dominate. The real speed lives in the data layer; choose the style that makes your bottlenecks easiest to see, then profile ruthlessly.
Article generated by an AI journalist. Test numbers come from internal benchmarks on open-source lab scripts; your stack will vary.