You don’t need to design every endpoint. Pick the four or five that map to your functional requirements and show you’ve thought about pagination, idempotency, and auth.
The endpoints
POST /v1/tweets
GET /v1/users/{id}/tweets?cursor=...&limit=20
GET /v1/timeline/home?cursor=...&limit=20
POST /v1/users/{id}/follow
DELETE /v1/users/{id}/follow
POST /v1/tweets/{id}/like
Auth is implicit — every request carries a bearer token; the server resolves user_id from it. Don’t take a user_id in the request body for a “post tweet” call — if the server trusts it instead of the token, anyone can post as anyone else by swapping the ID.
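A minimal sketch of that rule — the handler derives the author from the token and never reads an ID from the body. The `TOKENS` dict is a hypothetical stand-in for whatever your auth layer does (JWT verification, session lookup):

```python
# Hypothetical token -> user_id mapping; in reality this is JWT verification
# or a session-store lookup, not a dict.
TOKENS = {"tok_alice": "u_alice"}

def post_tweet(headers: dict, body: dict) -> dict:
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return {"status": 401}
    user_id = TOKENS.get(auth.removeprefix("Bearer "))
    if user_id is None:
        return {"status": 401}
    # author_id comes from the token; any user_id in the body is ignored.
    return {"status": 201, "tweet": {"author_id": user_id, "text": body["text"]}}
```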
POST /v1/tweets
POST /v1/tweets
Idempotency-Key: 7f3a...
{
"text": "hello",
"media_ids": ["m_abc", "m_def"]
}
→ 201 { "id": "t_123", "created_at": "..." }
Two things to call out:
- Media is uploaded separately, to a presigned S3 URL. The tweet endpoint only references media_ids. This keeps the write path small and lets the CDN serve media directly.
- Idempotency-Key prevents duplicate tweets when the client retries on a flaky network. The server stores the key for ~24h and returns the original response on retry.
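The idempotency-key flow can be sketched in a few lines — store the response keyed by the client's key, replay it on retry within the TTL. This is an in-memory illustration; a real server would keep the keys in Redis or the database:

```python
import time

TTL = 24 * 3600  # keep keys for ~24h, as described above
_seen: dict[str, tuple[float, dict]] = {}  # key -> (stored_at, original response)

def create_tweet(idempotency_key: str, text: str) -> dict:
    now = time.time()
    hit = _seen.get(idempotency_key)
    if hit and now - hit[0] < TTL:
        return hit[1]  # retry: replay the original response, create nothing
    resp = {"id": f"t_{len(_seen) + 1}", "text": text}
    _seen[idempotency_key] = (now, resp)
    return resp
```

A retry with the same key gets the same tweet ID back, so a flaky network produces one tweet, not two.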
GET /v1/timeline/home
GET /v1/timeline/home?cursor=eyJ0cyI6...&limit=20
→ {
"tweets": [...],
"next_cursor": "eyJ0cyI6..."
}
Cursor-based pagination, not offset. Offset means “skip N, take the next batch” — ?page=2&size=2 → skip 2, return items 3–4. That works on a static list, but a timeline isn’t static. Say the tweets are [A, B, C, D, E] (newest first) and you fetch page 1 → [A, B]. Before you fetch page 2, a new tweet Z arrives, making the list [Z, A, B, C, D, E]. Page 2 (skip 2) now returns [B, C] — you see B twice. A deletion causes the opposite: you skip a tweet.
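The duplicate is easy to reproduce — a toy offset pager over the example timeline:

```python
def page(items: list[str], page_no: int, size: int = 2) -> list[str]:
    # Offset pagination: skip (page_no - 1) * size, take the next batch.
    return items[(page_no - 1) * size : page_no * size]

timeline = ["A", "B", "C", "D", "E"]  # newest first
page1 = page(timeline, 1)             # ["A", "B"]
timeline = ["Z"] + timeline           # Z arrives before the next fetch
page2 = page(timeline, 2)             # ["B", "C"]: B shows up twice
```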
A cursor pins the page to a specific point in the data, not a shifting index. The server hands back next_cursor with each page, encoding something like (timestamp, tweet_id) — “older than tweet B.” Page 2 asks for tweets older than B and gets [C, D] regardless of what arrived at the top. The tweet_id is a tiebreaker for tweets sharing a timestamp.
The cursor is opaque (base64-encoded) so clients don’t parse it — the server can change what’s inside (add a shard ID, switch sort keys) without breaking anyone.
Follow / unfollow
POST /v1/users/{id}/follow → 204
DELETE /v1/users/{id}/follow → 204
Idempotent by design — following someone you already follow is a no-op, not a 409. Same for unfollow.
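The no-op behavior falls out naturally if the follow relation is treated as a set. A toy in-memory version (a real implementation would be an upsert/delete against a follows table):

```python
# (follower_id, followee_id) pairs; the set absorbs duplicate follows.
follows: set[tuple[str, str]] = set()

def follow(follower: str, followee: str) -> int:
    follows.add((follower, followee))      # already following? still a no-op
    return 204

def unfollow(follower: str, followee: str) -> int:
    follows.discard((follower, followee))  # discard, not remove: absent is fine
    return 204
```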
Rate limiting
Mention it briefly. Per-user token bucket: e.g. 300 tweets/3h, 1000 follows/day. Enforced at the API gateway, not in the application — you don’t want a runaway script taking down the write path.
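For reference, a per-user token bucket is only a few lines — this sketch uses illustrative numbers and lives in-process, whereas the gateway would keep buckets in shared state like Redis:

```python
import time

class TokenBucket:
    def __init__(self, capacity: int, refill_period_s: float):
        # e.g. TokenBucket(300, 3 * 3600) for 300 tweets per 3 hours
        self.capacity = capacity
        self.tokens = float(capacity)
        self.rate = capacity / refill_period_s  # tokens refilled per second
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over the limit -> the gateway returns 429
```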
Things not to do
- Don’t return raw DB rows. Use a stable response schema you control.
- Don’t expose internal IDs as auto-increment integers. Use opaque strings (Snowflake IDs work — more on that in the storage post).
The API is the contract. Everything behind it can change; the shape of these requests can’t.