Alexander Garcia
A complete guide to hand-writing your own OAuth 2.0 PKCE library in JavaScript — from crypto utilities to the authorize request to the token exchange.
Read time is about 17 minutes
Alexander Garcia is an effective JavaScript Engineer who crafts stunning web experiences.
Most developers reach for a library when implementing OAuth 2.0. That's usually the right call. But when I was building the authentication system for VA.gov, we couldn't use an off-the-shelf solution — we needed a custom OAuth 2.0 PKCE SDK that worked with our own authorization server (the Sign-in Service), handled edge cases specific to government authentication, and gave us full control over the security layer.
So I hand-wrote the entire PKCE flow in JavaScript: the cryptographic utilities, the /authorize request builder, and the /token exchange. This post walks through all three steps with the actual production patterns we used. No libraries, no abstractions — just the Web Crypto API, fetch, and URL.
Should you hand-write your own OAuth client? For most projects, you shouldn't. Libraries like oidc-client-ts handle the complexities of OAuth and OpenID Connect well. But there are legitimate reasons to go custom: a first-party authorization server with its own conventions, domain-specific edge cases an off-the-shelf library can't express, strict security requirements that demand full control over the crypto layer, and the need to share one auth SDK across platforms.
At VA.gov, all four reasons applied. The result was a lightweight PKCE SDK that powered 200M+ authentications across web and mobile.
Every PKCE flow starts with three cryptographic values: a code verifier (random secret), a code challenge (SHA-256 hash of the verifier), and a state (CSRF protection). Let's build each one.
The code verifier is a cryptographically random string between 43 and 128 characters. It's the secret that only the client knows — it gets stored locally and sent to the token endpoint later to prove the client that started the flow is the same one finishing it.
```javascript
const generateCodeVerifier = () => {
  const PREFERRED_BYTE_LENGTH = 48;
  const webCrypto = getWebCrypto();

  if (webCrypto?.subtle) {
    const arr = new Uint8Array(PREFERRED_BYTE_LENGTH);
    webCrypto.getRandomValues(arr);
    return base64UrlEncode(arr);
  } else {
    // Node fallback
    const nodeCrypto = require("crypto");
    return nodeCrypto.randomBytes(PREFERRED_BYTE_LENGTH).toString("base64url");
  }
};
```
Note the dual-path approach: crypto.getRandomValues() for browsers, crypto.randomBytes() for Node.js. This was important for us because the same crypto library ran in the browser (VA.gov) and in the test suite (Node/Jest).
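The getWebCrypto() helper isn't shown in this post. Here's a minimal sketch of what it might look like — the name comes from the snippets above, but the detection order is my assumption:

```javascript
// Hypothetical sketch of the getWebCrypto() helper used above.
// Returns a Web Crypto implementation, or undefined so callers
// can fall back to require("crypto").
const getWebCrypto = () => {
  // Browsers and Node 19+ expose Web Crypto on globalThis.crypto
  if (typeof globalThis !== "undefined" && globalThis.crypto?.subtle) {
    return globalThis.crypto;
  }
  try {
    // Node 15+ ships a Web Crypto implementation at crypto.webcrypto
    return require("crypto").webcrypto;
  } catch {
    return undefined;
  }
};
```

Returning undefined (rather than throwing) is what lets every caller use the same `if (webCrypto?.subtle) { ... } else { ... }` shape.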
```javascript
// Unit test — minimum 43 characters & maximum of 128 characters
describe("generateCodeVerifier", () => {
  it("should generate a 48-byte, 64-character base64url encoded string", () => {
    const codeVerifier = cryptoLib.generateCodeVerifier();
    expect(codeVerifier).toMatch(/^[A-Za-z0-9-_.~]{64}$/);
    expect(codeVerifier.length >= 43).toBeTruthy();
    expect(codeVerifier.length <= 128).toBeTruthy();
  });
});
```
The code challenge is a SHA-256 hash of the code verifier, base64url-encoded. This is what gets sent to the authorization server during /authorize. The server stores it, and later verifies it against the original code verifier during the /token exchange.
```javascript
const generateCodeChallenge = async (codeVerifier) => {
  if (!codeVerifier) return null;
  const webCrypto = getWebCrypto();

  if (webCrypto?.subtle) {
    return base64UrlEncode(
      await webCrypto.subtle.digest("SHA-256", stringToBuffer(codeVerifier))
    );
  } else {
    // Node fallback
    const nodeCrypto = require("crypto");
    const shaHash = nodeCrypto.createHash("sha256");
    shaHash.update(stringToBuffer(codeVerifier));
    return shaHash.digest("base64url");
  }
};
```
The test uses the exact code verifier and code challenge pair from RFC 7636 — the PKCE specification itself. If your implementation produces the same output for this input, it's correct.
```javascript
// Code verifier + code challenge pair taken directly from
// https://datatracker.ietf.org/doc/html/rfc7636
const codeVerifier = "dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk";
const codeChallenge = "E9Melhoa2OwvFrEMTJguCHaoeK1t8URWbuGJSstw-cM";

describe("generateCodeChallenge", () => {
  it("should generate the matching code challenge for a given code verifier", async () => {
    expect(await cryptoLib.generateCodeChallenge(codeVerifier)).toEqual(
      codeChallenge
    );
  });

  it("should not generate a code challenge if no code verifier parameter passed", async () => {
    expect(await cryptoLib.generateCodeChallenge()).toBeNull();
  });
});
```
The state parameter prevents CSRF attacks. It's a random string generated before /authorize and validated on the callback. If the state doesn't match, someone tampered with the flow.
```javascript
const generateRandomString = (length = 24) => {
  if (length === 0) return null;
  const webCrypto = getWebCrypto();

  if (webCrypto?.subtle) {
    const buffer = new Uint8Array(Math.ceil(length / 2));
    webCrypto.getRandomValues(buffer);
    return Array.from(buffer, (byte) =>
      byte.toString(16).padStart(2, "0")
    )
      .join("")
      .slice(0, length); // trim the extra character when length is odd
  } else {
    // Node fallback
    const nodeCrypto = require("crypto");
    return nodeCrypto
      .randomBytes(Math.ceil(length / 2))
      .toString("hex")
      .slice(0, length);
  }
};
```
```javascript
describe("generateRandomString", () => {
  it("should generate a secure random string", () => {
    // The output is hex-encoded, so only [0-9a-f] can appear
    expect(cryptoLib.generateRandomString()).toMatch(/^[0-9a-f]{24}$/);
  });

  it("should return null if length is 0", () => {
    expect(cryptoLib.generateRandomString(0)).toBe(null);
  });
});
```
Both the code verifier and code challenge rely on two small utilities for encoding:
```javascript
function stringToBuffer(string) {
  if (!string || string.length === 0) return null;
  const buffer = new Uint8Array(string.length);
  for (let i = 0; i < string.length; i++) {
    buffer[i] = string.charCodeAt(i) & 0xff;
  }
  return buffer;
}

function base64UrlEncode(input) {
  if (!input || input.length === 0) return null;
  const inputType =
    typeof input === "string"
      ? input
      : String.fromCharCode(...new Uint8Array(input));
  return btoa(inputType)
    .replace(/\+/g, "-")
    .replace(/\//g, "_")
    .replace(/=+$/, "");
}
```
The base64UrlEncode function is worth calling out — standard Base64 uses +, /, and =, which aren't URL-safe. Base64url replaces + with -, replaces / with _, and strips the = padding. This is required by the PKCE spec.
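To see the difference concretely, here's a quick Node sketch using Buffer's built-in encoders, with bytes deliberately chosen to hit the +, /, and padding cases:

```javascript
// Bytes 0xfb 0xef 0xbe split into four 6-bit groups of value 62,
// which is '+' in standard Base64 and '-' in base64url
const buf = Buffer.from([0xfb, 0xef, 0xbe]);
console.log(buf.toString("base64"));    // "++++"
console.log(buf.toString("base64url")); // "----"

// A single byte needs '=' padding in standard Base64;
// base64url drops the padding entirely
const one = Buffer.from([0xff]);
console.log(one.toString("base64"));    // "/w=="
console.log(one.toString("base64url")); // "_w"
```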
With our crypto utilities in place, we can build the /authorize URL. This is the request that kicks off the entire OAuth flow — it redirects the user to the authorization server where they authenticate (via Login.gov, ID.me, etc.), and the server redirects back with an authorization code and the original state.
```javascript
export const login = async () => {
  // Generate state & codeVerifier, save them in localStorage
  const { state, codeVerifier } = generateHashes();
  const codeChallenge = await crypto.generateCodeChallenge(codeVerifier);

  const url = new URL(OAUTH_API_URL.AUTHORIZE);
  const oAuthParams = {
    [OAUTH_KEYS.CLIENT_ID]: encodeURIComponent(CLIENT_ID),
    [OAUTH_KEYS.RESPONSE_TYPE]: OAUTH_VALUES.CODE,
    [OAUTH_KEYS.STATE]: state,
    [OAUTH_KEYS.CODE_CHALLENGE]: codeChallenge,
    [OAUTH_KEYS.CODE_CHALLENGE_METHOD]: OAUTH_VALUES.S256,
    [OAUTH_KEYS.REDIRECT_URI]: OAUTH_VALUES.REDIRECT_URI,
  };

  Object.keys(oAuthParams).forEach((param) =>
    url.searchParams.append(param, oAuthParams[param])
  );

  window.location.assign(url.href);
};
```
A few things to note about this implementation:
- generateHashes() wraps the crypto functions from Part 1 and handles saving state and codeVerifier to localStorage in one call
- encodeURIComponent on the client_id prevents injection through malformed client identifiers
- window.location.assign triggers a full-page navigation to the authorization server — this is intentional. OAuth requires leaving your app entirely so the user authenticates on the identity provider's domain
- searchParams.append rather than string concatenation, which avoids encoding bugs
- window.location.assign is notoriously hard to mock in unit tests. As an alternative, you can split the function into two parts — one that builds and returns the URL, and one that navigates. Test the URL builder directly, then test the navigation separately by watching for a route change. This is the pattern we used at VA.gov and it made the test suite significantly cleaner.
If your /authorize URL gets too long (which can happen with many parameters or long code_challenge values), look into Pushed Authorization Requests (PAR). PAR lets you POST the authorization parameters to the server first, get back a request_uri, and pass only that URI in the /authorize redirect. It's out of scope for this post, but worth knowing about.
After the user authenticates, the authorization server redirects back to your app with two query parameters: code (the authorization code) and state (which must match the original). Now we exchange the code for an access token.
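Reading those two parameters off the callback URL is one searchParams.get call each (the URL below is illustrative — in the browser you'd pass window.location.href):

```javascript
// Simulated callback URL from the authorization server
const callbackUrl = new URL(
  "https://app.example.com/auth/callback?code=abc123&state=xyz789"
);
const code = callbackUrl.searchParams.get("code");   // "abc123"
const state = callbackUrl.searchParams.get("state"); // "xyz789"
console.log(code, state);
```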
First, we build the /token URL. This is where PKCE comes full circle — we send the original code_verifier (stored in localStorage during Part 2) alongside the authorization code. The server hashes the verifier, compares it to the stored challenge, and if they match, issues the tokens.
```javascript
export function buildTokenRequest({
  code,
  redirectUri = `${environment.BASE_URL}`,
} = {}) {
  const codeVerifier = localStorage.getItem(OAUTH_KEYS.CODE_VERIFIER);

  // Don't build the token URL if we're missing required values
  if (!code || !codeVerifier) return null;

  const clientId = sessionStorage.getItem(OAUTH_KEYS.CLIENT_ID);
  const oAuthParams = {
    [OAUTH_KEYS.GRANT_TYPE]: "authorization_code",
    // Pass raw values here — searchParams.append percent-encodes them,
    // so wrapping these in encodeURIComponent would double-encode
    [OAUTH_KEYS.CLIENT_ID]: clientId,
    [OAUTH_KEYS.REDIRECT_URI]: redirectUri,
    [OAUTH_KEYS.CODE]: code,
    [OAUTH_KEYS.CODE_VERIFIER]: codeVerifier,
  };

  const url = new URL(`${OAUTH_API_URL.TOKEN}`);
  Object.keys(oAuthParams).forEach((param) =>
    url.searchParams.append(param, oAuthParams[param])
  );
  return url;
}
```
The guard clause at the top is important — if either the code or codeVerifier is missing, something went wrong in the flow and we shouldn't attempt the exchange. At VA.gov this saved us from sending malformed requests that would pollute our error logs.
With the URL built, we make the POST request. Notice credentials: 'include' — this ensures cookies (like the session cookie set by the authorization server) are sent with the request, which is critical for cross-origin token exchanges.
```javascript
export const requestToken = async ({ code, redirectUri, csp }) => {
  const url = buildTokenRequest({ code, redirectUri });
  if (!url) return null;

  const response = await fetch(url.toString(), {
    method: "POST",
    credentials: "include",
  });

  // Clean up localStorage after a successful exchange
  if (response.ok) {
    removeStateAndVerifier();
  }

  return response;
};
```
After a successful exchange, we call removeStateAndVerifier() to clean up localStorage. The state and code_verifier are single-use values — leaving them around is both a security risk (replay attacks) and a bug waiting to happen if the user triggers another login flow.
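removeStateAndVerifier() isn't shown above; it likely amounts to two removeItem calls. A sketch — the injectable storage parameter and literal key names are my additions for testability; the real version would use the OAUTH_KEYS constants:

```javascript
// Hypothetical sketch of removeStateAndVerifier()
const removeStateAndVerifier = (storage = globalThis.localStorage) => {
  // Both values are single-use; clear them so a stale verifier
  // can't leak into a future login attempt
  storage.removeItem("state");
  storage.removeItem("code_verifier");
};
```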
The callback handler ties everything together. When the user lands on the redirect URI, we validate the state, execute the token exchange, and handle errors:
```javascript
const handleTokenRequest = async ({ code, state, csp, generateOAuthError }) => {
  // Validate state matches the original request
  if (
    !localStorage.getItem(OAUTH_KEYS.STATE) ||
    localStorage.getItem(OAUTH_KEYS.STATE) !== state
  ) {
    generateOAuthError({
      oauthErrorCode: AUTH_ERRORS.OAUTH_STATE_MISMATCH.errorCode,
      event: OAUTH_ERRORS.OAUTH_STATE_MISMATCH,
    });
  } else {
    // State matches — proceed with token exchange
    const response = await requestToken({ code, csp });

    // Handle token exchange failure (requestToken can also return null)
    if (!response?.ok) {
      const data = await response?.json();
      const oauthErrorCode = OAUTH_ERROR_RESPONSES[data?.errors];
      const event = OAUTH_EVENTS[data?.errors] ?? OAUTH_EVENTS.ERROR_DEFAULT;
      generateOAuthError({ oauthErrorCode, event });
    }
  }
};
```
State validation is the first thing that happens — before any network request. If the state doesn't match, we generate an OAuth error immediately and never call the token endpoint. This blocks authorization code injection attacks: an attacker can craft a callback URL with a stolen code, but can't replicate the original state stored in the victim's browser.
The error handling maps server error responses to specific error codes and events. In production at VA.gov, these fed into Datadog monitoring so we could track exactly which step in the OAuth flow was failing and for which credential service provider (Login.gov, ID.me, etc.).
Here's how all three parts connect:
1. login() generates state + code_verifier, derives the code_challenge, saves the first two to localStorage, and redirects to /authorize with the challenge
2. The authorization server redirects back to the app with ?code=xxx&state=yyy
3. handleTokenRequest() validates the state against localStorage
4. buildTokenRequest() constructs the /token URL with the original code_verifier
5. requestToken() POSTs to the token endpoint — the server hashes the verifier, compares it to the stored challenge, and issues tokens
6. removeStateAndVerifier() clears the single-use values from localStorage

The beauty of PKCE is that even if an attacker intercepts the authorization code in step 2, they can't exchange it for tokens without the code_verifier — which never left the client.
Building OAuth PKCE from scratch taught me more about authentication security than any documentation or tutorial ever could. You understand why each parameter exists, what each cryptographic step protects against, and where the real attack surface lives.
That said, don't build this yourself unless you have a good reason to. For most applications, a well-maintained library is the safer choice. But if you're operating at scale, working in a high-security environment, or just want to deeply understand the protocol — this is how you do it.
If you want to understand the conceptual differences between PKCE and Private Key JWT (the other authentication method we used at VA.gov), check out my post on OAuth 2.0 PKCE vs Private Key JWT. And for the full story of how this SDK fit into VA.gov's authentication architecture, see Five Years to Launch: The Sign-in Service Story.