Response Caching

The examples below use Vercel’s edge caching to serve data to your users as fast as possible.

⚠️ A word of caution ⚠️

Always be careful with caching - especially if you handle personal information.

Since batching is enabled by default, it’s recommended that you set your cache headers in the `responseMeta` function and make sure that no concurrent calls in a batch may include personal data - or omit cache headers completely if the request carries auth headers or cookies.
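One way to apply this rule is a small predicate you call from `responseMeta` before returning any `cache-control` header. This is a minimal sketch; `isCacheable` is a hypothetical helper name, not part of tRPC’s API:

```typescript
// Hypothetical helper: only allow caching when the request carries
// no credentials that could tie the response to a specific user.
const isCacheable = (
  headers: Record<string, string | undefined>,
): boolean => !headers['authorization'] && !headers['cookie'];
```

Inside `responseMeta` you would check `isCacheable(ctx.req.headers)` and return an empty object (no cache headers) whenever it is `false`.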

You can also use a `splitLink` to separate requests that are public from those that should be private and uncached.
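A sketch of such a split is shown below. It assumes the tRPC v9 client links API (`splitLink`, `httpLink`, `httpBatchLink` and their import paths); the `'public'` path check matches the router naming convention used later on this page, and `./server` is a hypothetical path to your router:

```typescript
import { createTRPCClient } from '@trpc/client';
import { splitLink } from '@trpc/client/links/splitLink';
import { httpLink } from '@trpc/client/links/httpLink';
import { httpBatchLink } from '@trpc/client/links/httpBatchLink';
// hypothetical path - point this at wherever you export your AppRouter type
import type { AppRouter } from './server';

const url = '/api/trpc';

const client = createTRPCClient<AppRouter>({
  links: [
    splitLink({
      // public procedures are batched together, so a batch never
      // mixes cacheable and personal responses
      condition: (op) => op.path.includes('public'),
      true: httpBatchLink({ url }),
      // everything else goes out as individual, uncached requests
      false: httpLink({ url }),
    }),
  ],
});
```

Keeping public and private procedures in separate batches is what makes it safe for the server to attach cache headers to the public ones.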

App Caching

If you turn on SSR in your app, you might discover that it loads slowly on, for instance, Vercel - but you can actually statically render your whole app without using SSG; read this Twitter thread for more insights.

Example code

```tsx
// in _app.tsx
import { withTRPC } from '@trpc/next';

export default withTRPC({
  config({ ctx }) {
    if (process.browser) {
      return {
        url: '/api/trpc',
      };
    }
    const url = process.env.VERCEL_URL
      ? `https://${process.env.VERCEL_URL}/api/trpc`
      : 'http://localhost:3000/api/trpc';
    return {
      url,
    };
  },
  ssr: true,
  responseMeta({ ctx, clientErrors }) {
    if (clientErrors.length) {
      // propagate first http error from API calls
      return {
        status: clientErrors[0].data?.httpStatus ?? 500,
      };
    }
    // cache request for 1 day + revalidate once every second
    const ONE_DAY_IN_SECONDS = 60 * 60 * 24;
    return {
      headers: {
        'cache-control': `s-maxage=1, stale-while-revalidate=${ONE_DAY_IN_SECONDS}`,
      },
    };
  },
})(MyApp);
```

API Response caching

Since all queries are normal HTTP GETs, we can use normal HTTP headers to cache responses, make the responses snappy, give your database a rest, and scale your API more easily to gazillions of users.

Using responseMeta to cache responses

This assumes you’re deploying your API somewhere that can handle `stale-while-revalidate` cache headers, like Vercel.

```ts
import * as trpc from '@trpc/server';
import { inferAsyncReturnType } from '@trpc/server';
import * as trpcNext from '@trpc/server/adapters/next';
// `prisma` is assumed to be your Prisma client instance,
// imported from wherever you instantiate it

export const createContext = async ({
  req,
  res,
}: trpcNext.CreateNextContextOptions) => {
  return {
    req,
    res,
    prisma,
  };
};
type Context = inferAsyncReturnType<typeof createContext>;

export function createRouter() {
  return trpc.router<Context>();
}

const waitFor = async (ms: number) =>
  new Promise((resolve) => setTimeout(resolve, ms));

export const appRouter = createRouter().query('public.slow-query-cached', {
  async resolve({ ctx }) {
    await waitFor(5000); // wait for 5s
    return {
      lastUpdated: new Date().toJSON(),
    };
  },
});

// Exporting only the _type_ of AppRouter exposes types that can be used for inference
// https://www.typescriptlang.org/docs/handbook/release-notes/typescript-3-8.html#type-only-imports-and-export
export type AppRouter = typeof appRouter;

// export API handler
export default trpcNext.createNextApiHandler({
  router: appRouter,
  createContext,
  responseMeta({ ctx, paths, type, errors }) {
    // assuming you have all your public routes with the keyword `public` in them
    const allPublic =
      paths && paths.every((path) => path.includes('public'));
    // checking that no procedures errored
    const allOk = errors.length === 0;
    // checking we're doing a query request
    const isQuery = type === 'query';
    if (ctx?.res && allPublic && allOk && isQuery) {
      // cache request for 1 day + revalidate once every second
      const ONE_DAY_IN_SECONDS = 60 * 60 * 24;
      return {
        headers: {
          'cache-control': `s-maxage=1, stale-while-revalidate=${ONE_DAY_IN_SECONDS}`,
        },
      };
    }
    return {};
  },
});
```