How to Implement Cursor-Based Pagination in a REST API with Node.js

Apr 6, 2026

Why Cursor-Based Pagination Matters for Your REST API

If you have ever built a REST API that returns large lists of data, you have almost certainly dealt with pagination. The most common approach, offset-based pagination, works fine for small datasets. But once your tables grow into millions of rows, offset pagination starts to show serious cracks: skipped records, duplicated results, and queries that get slower the deeper a user pages.

Cursor-based pagination (sometimes called keyset pagination) solves these problems by using a pointer to a specific item in the dataset rather than a numeric offset. On each request, the client sends back the cursor it received, and the server returns the next batch of results starting right after that pointer.

In this tutorial, you will learn how to implement cursor-based pagination in a REST API built with Node.js. We will cover both SQL (PostgreSQL) and NoSQL (MongoDB) examples, show you how to encode and decode cursors, and walk through the edge cases that trip most developers up.

Offset Pagination vs. Cursor-Based Pagination

Before diving into code, let’s make sure the difference is crystal clear.

Feature | Offset Pagination | Cursor-Based Pagination
Request parameters | ?page=5&limit=20 or ?offset=80&limit=20 | ?cursor=abc123&limit=20
Performance at scale | Degrades as the offset grows (the DB must scan and discard rows) | Constant time, regardless of how deep the page is
Consistency | Insertions or deletions between requests cause skipped or duplicated items | Stable, because the cursor points to a specific record
Random page access | Easy (jump to page N) | Not natively supported (sequential traversal only)
Implementation complexity | Simple | Moderate

The takeaway: if your API consumers need to jump to page 47 directly, offset might still be the pragmatic choice. For everything else, especially feeds, infinite scrolling, and large-scale data exports, cursor-based pagination in a REST API is the better path.

How Cursor-Based Pagination Works

The concept is straightforward:

  1. The client makes an initial request without a cursor: GET /api/posts?limit=20.
  2. The server returns 20 items plus a nextCursor value that uniquely identifies the last item returned.
  3. The client makes the next request with that cursor: GET /api/posts?limit=20&cursor=eyJpZCI6MTAwfQ.
  4. The server decodes the cursor, queries for records after that pointer, and returns the next 20 items with a new nextCursor.
  5. When there are no more results, the server returns an empty list and no nextCursor (or sets it to null).
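To make step 4 concrete, here is what decoding the example cursor from step 3 looks like in Node.js. The cursor is nothing magical: just a Base64url-encoded JSON payload identifying the last item seen.

```javascript
// The cursor from the example request above
const cursorString = 'eyJpZCI6MTAwfQ';

// Decode the Base64url string back into its JSON payload
const decoded = JSON.parse(Buffer.from(cursorString, 'base64url').toString('utf8'));

console.log(decoded); // { id: 100 }
// The server would then query for rows WHERE id > 100
```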

What makes a good cursor?

A cursor should be:

  • Opaque to the client. The client should never need to parse or construct it manually.
  • Based on a unique, sequential column such as a primary key, a timestamp, or a combination of both.
  • URL-safe. Base64 encoding is the most common way to achieve this.

Project Setup

We will use a minimal Express.js application. The examples work with Node.js 20+ and either PostgreSQL (via pg) or MongoDB (via the native driver or Mongoose). Initialize your project like this:

mkdir cursor-pagination-demo
cd cursor-pagination-demo
npm init -y
npm install express
# For PostgreSQL:
npm install pg
# OR for MongoDB:
npm install mongoose

Implementing Cursor-Based Pagination with PostgreSQL

Step 1: Create the helper functions

We need two small utility functions to encode and decode our cursor. The cursor will be a Base64-encoded JSON string containing the values we paginate on.

// cursor.js
function encodeCursor(payload) {
  return Buffer.from(JSON.stringify(payload)).toString('base64url');
}

function decodeCursor(cursorString) {
  try {
    return JSON.parse(Buffer.from(cursorString, 'base64url').toString('utf8'));
  } catch {
    return null;
  }
}

module.exports = { encodeCursor, decodeCursor };

Using base64url instead of plain base64 ensures the cursor is safe for query strings without additional encoding.
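You can see the difference by encoding the same payload both ways:

```javascript
const payload = JSON.stringify({ id: 5 });

// Plain base64 keeps '=' padding (and can emit '+' and '/'),
// all of which would need percent-encoding in a query string
console.log(Buffer.from(payload).toString('base64'));    // eyJpZCI6NX0=

// base64url substitutes '-' and '_' and drops the padding,
// so the cursor can go straight into a URL
console.log(Buffer.from(payload).toString('base64url')); // eyJpZCI6NX0
```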

Step 2: Write the paginated query

Assume we have a posts table with columns id (serial primary key), title, created_at, and we want to paginate by id in ascending order.

// routes/posts.js
const express = require('express');
const router = express.Router();
const pool = require('../db'); // your pg Pool instance
const { encodeCursor, decodeCursor } = require('../cursor');

const DEFAULT_LIMIT = 20;
const MAX_LIMIT = 100;

router.get('/posts', async (req, res) => {
  let limit = parseInt(req.query.limit, 10) || DEFAULT_LIMIT;
  // Clamp to a sane range so clients can't request zero, negative, or huge pages
  limit = Math.min(Math.max(limit, 1), MAX_LIMIT);

  const cursor = req.query.cursor ? decodeCursor(req.query.cursor) : null;

  let query;
  let params;

  if (cursor && cursor.id) {
    // Fetch rows where id is greater than the cursor id
    query = 'SELECT id, title, created_at FROM posts WHERE id > $1 ORDER BY id ASC LIMIT $2';
    params = [cursor.id, limit + 1];
  } else {
    query = 'SELECT id, title, created_at FROM posts ORDER BY id ASC LIMIT $1';
    params = [limit + 1];
  }

  try {
    const { rows } = await pool.query(query, params);

    const hasNextPage = rows.length > limit;
    const results = hasNextPage ? rows.slice(0, limit) : rows;
    const nextCursor = hasNextPage
      ? encodeCursor({ id: results[results.length - 1].id })
      : null;

    res.json({
      data: results,
      pagination: {
        nextCursor,
        hasNextPage,
        limit
      }
    });
  } catch (err) {
    // Express 4 does not catch errors thrown in async handlers automatically
    res.status(500).json({ error: 'Internal server error' });
  }
});

module.exports = router;

Why do we fetch limit + 1?

This is a common trick. By requesting one extra row, we can determine whether there is a next page without running a separate COUNT query. If we get back more rows than the requested limit, we know there are more results. We slice off the extra row before sending the response.
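The slicing logic can be distilled into a small pure helper. This is just a sketch of the same steps the route above performs inline, shown in isolation so the limit + 1 trick is easy to follow:

```javascript
function encodeCursor(payload) {
  return Buffer.from(JSON.stringify(payload)).toString('base64url');
}

// Given up to `limit + 1` rows fetched from the database, split them
// into the page to return and the pagination metadata.
function buildPage(rows, limit) {
  const hasNextPage = rows.length > limit; // the extra row tells us a next page exists
  const results = hasNextPage ? rows.slice(0, limit) : rows;
  const nextCursor = hasNextPage
    ? encodeCursor({ id: results[results.length - 1].id })
    : null;
  return { results, hasNextPage, nextCursor };
}

// We asked for limit = 2 but fetched 3 rows, so there is a next page
const page = buildPage([{ id: 1 }, { id: 2 }, { id: 3 }], 2);
console.log(page.hasNextPage);     // true
console.log(page.results.length);  // 2
```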

Step 3: Wire up the Express app

// app.js
const express = require('express');
const postsRouter = require('./routes/posts');
const app = express();

app.use('/api', postsRouter);

app.listen(3000, () => console.log('Server running on port 3000'));

Start the server and test:

curl "http://localhost:3000/api/posts?limit=5"

The response will look like:

{
  "data": [
    { "id": 1, "title": "First Post", "created_at": "2026-01-10T08:00:00Z" },
    { "id": 2, "title": "Second Post", "created_at": "2026-01-11T09:30:00Z" },
    ...
  ],
  "pagination": {
    "nextCursor": "eyJpZCI6NX0",
    "hasNextPage": true,
    "limit": 5
  }
}

To fetch the next page, pass the nextCursor value:

curl "http://localhost:3000/api/posts?limit=5&cursor=eyJpZCI6NX0"

Implementing Cursor-Based Pagination with MongoDB

The same pattern applies to MongoDB. Instead of a SQL WHERE id > $1, we use a Mongo filter with $gt.

// routes/posts-mongo.js
const express = require('express');
const router = express.Router();
const mongoose = require('mongoose');
const { encodeCursor, decodeCursor } = require('../cursor');
const Post = require('../models/Post'); // Mongoose model

const DEFAULT_LIMIT = 20;
const MAX_LIMIT = 100;

router.get('/posts', async (req, res) => {
  let limit = parseInt(req.query.limit, 10) || DEFAULT_LIMIT;
  // Clamp to a sane range so clients can't request zero, negative, or huge pages
  limit = Math.min(Math.max(limit, 1), MAX_LIMIT);

  const cursor = req.query.cursor ? decodeCursor(req.query.cursor) : null;

  const filter = {};
  // Validate the id first: new ObjectId() throws on malformed input
  if (cursor && cursor._id && mongoose.Types.ObjectId.isValid(cursor._id)) {
    filter._id = { $gt: new mongoose.Types.ObjectId(cursor._id) };
  }

  const rows = await Post.find(filter)
    .sort({ _id: 1 })
    .limit(limit + 1)
    .lean();

  const hasNextPage = rows.length > limit;
  const results = hasNextPage ? rows.slice(0, limit) : rows;
  const nextCursor = hasNextPage
    ? encodeCursor({ _id: results[results.length - 1]._id.toString() })
    : null;

  res.json({
    data: results,
    pagination: {
      nextCursor,
      hasNextPage,
      limit
    }
  });
});

module.exports = router;

Because MongoDB’s default _id field is a time-sortable ObjectId, it works naturally as a cursor key. If you sort by a different field, you will need a compound cursor (covered below).
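You can see the time component directly: the first 4 bytes (8 hex characters) of an ObjectId are a Unix timestamp in seconds, which is what makes sorting by _id roughly chronological. A quick illustration in plain Node.js, with no driver required (the ObjectId string below is a made-up example):

```javascript
// Hypothetical ObjectId string; the first 8 hex chars encode the creation time
const objectId = '6553f1004a1b2c3d4e5f6071';

// Parse the leading 4-byte timestamp (seconds since the Unix epoch)
const seconds = parseInt(objectId.slice(0, 8), 16);

console.log(seconds); // 1700000000
console.log(new Date(seconds * 1000).toISOString()); // 2023-11-14T22:13:20.000Z
```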

Sorting by Non-Unique or Custom Fields (Compound Cursors)

What if you want to sort by created_at instead of id? Timestamps are not always unique, so two records might share the same value. The solution is a compound cursor that includes both the sort field and a tiebreaker (typically the primary key).

Example: Paginating by created_at in PostgreSQL

// Cursor payload: { created_at: '2026-03-15T10:00:00Z', id: 542 }

let query;
let params;

if (cursor) {
  query = `
    SELECT id, title, created_at FROM posts
    WHERE (created_at, id) > ($1, $2)
    ORDER BY created_at ASC, id ASC
    LIMIT $3
  `;
  params = [cursor.created_at, cursor.id, limit + 1];
} else {
  query = `
    SELECT id, title, created_at FROM posts
    ORDER BY created_at ASC, id ASC
    LIMIT $1
  `;
  params = [limit + 1];
}

The (created_at, id) > ($1, $2) syntax is a row value comparison supported by PostgreSQL. It efficiently leverages a composite index on (created_at, id).

Important: Make sure you create the matching index:

CREATE INDEX idx_posts_created_at_id ON posts (created_at ASC, id ASC);

Compound cursor in MongoDB

const filter = {};
if (cursor) {
  filter.$or = [
    { created_at: { $gt: new Date(cursor.created_at) } },
    {
      created_at: new Date(cursor.created_at),
      _id: { $gt: new mongoose.Types.ObjectId(cursor._id) }
    }
  ];
}

const rows = await Post.find(filter)
  .sort({ created_at: 1, _id: 1 })
  .limit(limit + 1)
  .lean();

Handling Edge Cases

Real-world APIs need to handle more than the happy path. Here are the most common edge cases and how to deal with them.

1. Invalid or tampered cursors

Never trust client input. If decodeCursor returns null or the decoded values fail validation, return a 400 Bad Request with a clear error message.

if (req.query.cursor) {
  const cursor = decodeCursor(req.query.cursor);
  if (!cursor || !cursor.id) {
    return res.status(400).json({ error: 'Invalid cursor' });
  }
}

2. Deleted records

One of the great advantages of cursor-based pagination is that it handles deleted records gracefully. Because the query uses a WHERE id > cursor_id condition, it simply returns whatever comes next, even if the original cursor record has been deleted.

3. Empty results

When the client reaches the end of the dataset, return an empty data array and set nextCursor to null. Make this contract explicit in your API documentation.
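For example, the final response of a traversal would look like:

```json
{
  "data": [],
  "pagination": {
    "nextCursor": null,
    "hasNextPage": false,
    "limit": 20
  }
}
```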

4. Backward pagination (previous page)

If you need to support “previous page” navigation, include a previousCursor in your response. This is the cursor of the first item in the current result set. The client can then request items before that cursor using a reversed comparison (< instead of >) and reversed sort order, then reverse the results in your application code before sending the response.

// For backward pagination (sketch): reverse both the comparison and the sort order
if (direction === 'backward' && cursor) {
  query = `
    SELECT id, title, created_at FROM posts
    WHERE id < $1
    ORDER BY id DESC
    LIMIT $2
  `;
  params = [cursor.id, limit + 1];
}

// ...then, after running the query:
if (direction === 'backward') {
  results.reverse(); // restore ascending order before sending the response
}

5. Concurrent inserts

With offset pagination, a new row inserted during pagination can cause the next page to include a duplicate. With cursor-based pagination, as long as the cursor column is monotonically increasing (like an auto-increment ID or a ULID), new inserts will appear at the end and will not disturb the current traversal.

Structuring Your API Response

A clean, predictable response format helps API consumers integrate quickly. Here is a recommended structure:

{
  "data": [ ... ],
  "pagination": {
    "nextCursor": "eyJpZCI6MTIwfQ",
    "previousCursor": "eyJpZCI6MTAxfQ",
    "hasNextPage": true,
    "hasPreviousPage": true,
    "limit": 20
  }
}

You may also include totalCount if your use case requires it, but be aware that computing the total count can be expensive on large tables. Consider caching it or making it an optional field triggered by a query parameter like ?includeTotalCount=true.

Performance Comparison: Offset vs. Cursor

To illustrate the difference, here is a simplified benchmark on a PostgreSQL table with 10 million rows:

Query Type | Page 1 (first 20 rows) | Page 10,000 (rows 200,000+) | Page 500,000 (rows 10,000,000+)
Offset (OFFSET N LIMIT 20) | ~2 ms | ~180 ms | ~3,200 ms
Cursor (WHERE id > X LIMIT 20) | ~2 ms | ~2 ms | ~2 ms

The cursor approach remains constant because the database can seek directly to the relevant row via the index, whereas offset forces the database to scan and skip all preceding rows.

Security Considerations

Even though cursors should be opaque, a determined user can decode a Base64 string. Keep these points in mind:

  • Never include sensitive data in the cursor payload. Stick to IDs and sort-field values.
  • Validate decoded cursor values server-side just like any other user input. Sanitize types and ranges.
  • If you need tamper-proof cursors, sign them with an HMAC. This adds a small overhead but prevents users from crafting arbitrary cursors to probe your data.
const crypto = require('crypto');
const SECRET = process.env.CURSOR_SECRET;

function encodeSignedCursor(payload) {
  const data = Buffer.from(JSON.stringify(payload)).toString('base64url');
  const signature = crypto.createHmac('sha256', SECRET).update(data).digest('base64url');
  return data + '.' + signature;
}

function decodeSignedCursor(cursorString) {
  const [data, signature] = cursorString.split('.');
  if (!data || !signature) return null;
  const expected = crypto.createHmac('sha256', SECRET).update(data).digest('base64url');
  // Compare in constant time so attackers can't recover the signature byte by byte
  const sigBuf = Buffer.from(signature);
  const expBuf = Buffer.from(expected);
  if (sigBuf.length !== expBuf.length || !crypto.timingSafeEqual(sigBuf, expBuf)) {
    return null;
  }
  try {
    return JSON.parse(Buffer.from(data, 'base64url').toString('utf8'));
  } catch {
    return null;
  }
}

Complete Working Example

Here is a full, self-contained Express app that you can clone and run against a PostgreSQL database:

// app.js
const express = require('express');
const { Pool } = require('pg');

const pool = new Pool({ connectionString: process.env.DATABASE_URL });
const app = express();

function encodeCursor(payload) {
  return Buffer.from(JSON.stringify(payload)).toString('base64url');
}

function decodeCursor(str) {
  try {
    return JSON.parse(Buffer.from(str, 'base64url').toString('utf8'));
  } catch {
    return null;
  }
}

app.get('/api/posts', async (req, res) => {
  const MAX_LIMIT = 100;
  // Clamp the limit to a sane range
  let limit = Math.min(Math.max(parseInt(req.query.limit, 10) || 20, 1), MAX_LIMIT);

  let cursor = null;
  if (req.query.cursor) {
    cursor = decodeCursor(req.query.cursor);
    if (!cursor || typeof cursor.id !== 'number') {
      return res.status(400).json({ error: 'Invalid cursor' });
    }
  }

  const query = cursor
    ? 'SELECT id, title, created_at FROM posts WHERE id > $1 ORDER BY id ASC LIMIT $2'
    : 'SELECT id, title, created_at FROM posts ORDER BY id ASC LIMIT $1';

  const params = cursor ? [cursor.id, limit + 1] : [limit + 1];

  try {
    const { rows } = await pool.query(query, params);

    const hasNextPage = rows.length > limit;
    const data = hasNextPage ? rows.slice(0, limit) : rows;
    const nextCursor = hasNextPage
      ? encodeCursor({ id: data[data.length - 1].id })
      : null;

    res.json({
      data,
      pagination: { nextCursor, hasNextPage, limit }
    });
  } catch (err) {
    // Surface DB failures as a 500 instead of an unhandled rejection
    res.status(500).json({ error: 'Internal server error' });
  }
});

app.listen(3000, () => console.log('Listening on port 3000'));

Best Practices Summary

  1. Always use an indexed column for the cursor field. Without an index, cursor pagination offers no advantage.
  2. Use compound cursors when sorting by non-unique fields.
  3. Fetch limit + 1 to determine hasNextPage without a count query.
  4. Keep cursors opaque. Encode them in Base64 and optionally sign them.
  5. Validate cursor input on every request.
  6. Document your pagination contract clearly so API consumers know what to expect.
  7. Set a maximum limit to prevent clients from requesting absurdly large pages.
  8. Consider backward pagination if your UI needs a “previous” button.

When to Use Offset Pagination Instead

Cursor-based pagination is not always the right choice. Stick with offset pagination when:

  • Your dataset is small and unlikely to grow beyond a few thousand rows.
  • Users need to jump to arbitrary page numbers (e.g., “go to page 15”).
  • The total number of pages must be displayed in the UI.

For most modern APIs that power mobile apps, SPAs, or data exports, cursor-based pagination is the stronger default.

Frequently Asked Questions

What is cursor-based pagination in a REST API?

Cursor-based pagination is a method where the server returns a pointer (cursor) to the last item in the current result set. The client sends this cursor back on the next request so the server can return results starting right after that item. It avoids the performance and consistency problems of offset-based pagination.

Why is cursor-based pagination faster than offset pagination?

With offset pagination, the database must scan and discard all rows before the offset. If you request page 10,000, the database processes 200,000 rows just to throw away 199,980 of them. Cursor pagination uses a WHERE clause on an indexed column, so the database seeks directly to the right row in constant time.

Can I use cursor-based pagination with any database?

Yes. The pattern works with SQL databases like PostgreSQL, MySQL, and SQLite, as well as NoSQL databases like MongoDB and DynamoDB. The key requirement is that you have an indexed, sortable field to use as the cursor.

How do I handle sorting with cursor-based pagination?

If you sort by a unique field (like a primary key), a simple single-value cursor works. If you sort by a non-unique field (like a date), use a compound cursor that includes both the sort field and a unique tiebreaker field.

Should I encrypt my cursors?

Encryption is usually unnecessary. Base64 encoding makes cursors opaque enough for most use cases. If you need to prevent users from tampering with cursors to manipulate queries, sign the cursor with HMAC rather than encrypting it. This is lighter and still provides integrity verification.

How do I implement a “previous page” with cursor-based pagination?

Include a previousCursor (typically the ID of the first item in the current page) in your response. When the client requests the previous page, reverse the comparison operator and sort direction in your query, then reverse the returned results before sending the response.

What is the difference between cursor-based pagination and keyset pagination?

They are the same thing. “Keyset pagination” is the term more commonly used in database literature, while “cursor-based pagination” is more common in API design contexts. Both refer to paginating by filtering on the last-seen value of an ordered column.