auth · webauthn · architecture · 9 min read

Unified Auth with Passkeys Across Multiple Apps

I had a common problem: multiple web applications under the same organization, each with its own authentication. One app used Supabase, another had its own Cognito pool, a third was halfway through migrating. Users signed in separately to each. Credentials were scattered. When someone left the org, revoking access meant visiting three different dashboards and hoping you didn't miss one.

I wanted one auth service. One user pool. One sign-in. And I wanted passkeys that work across every app without users having to register per-subdomain.

This is how it came together.

Starting from Zero with CDK

The first decision was infrastructure-as-code from day one. I set up an AWS CDK project in TypeScript with separate stacks split by concern. A VPC stack handles networking. A database stack provisions Aurora Serverless v2 PostgreSQL. The auth service stack wires up the Lambda, API Gateway, custom domain, and secrets. Each stack is independent and reusable. The whole thing runs on minimal resources: no NAT Gateway, Aurora Serverless scaling to zero when idle, Lambda on-demand only.

The key design was an environment loop in CDK's entry point that reads context from cdk.json. Every environment (dev, staging, prod) lives in its own AWS account, but the stacks are identical. You change the context, not the code. This means deploying to production is the same cdk deploy command with a different profile. No special scripts, no manual steps, no drift between environments.

```jsonc
// cdk.json drives everything
{
  "dev": {
    "account": "797098543009",
    "authDomain": "dev.auth.example.org",
    "trustedOrigins": ["http://localhost:3000"]
  },
  "prod": {
    "account": "123456789012",
    "authDomain": "auth.example.org",
    "trustedOrigins": ["https://app.example.org"]
  }
}
```

CDK also handles custom domains. The auth service gets its own subdomain with an ACM certificate (DNS-validated) and API Gateway mapping. When authDomain is set in context, CDK creates all of it. When it's not, it skips it. The same code works for a fresh dev environment with no domain and a production deployment with full DNS.
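The environment selection itself boils down to a lookup plus a conditional. Here's a minimal sketch of that logic; the names (`EnvConfig`, `selectEnv`) are illustrative, not from the actual project, and in real CDK code the context record would come from `app.node.tryGetContext` rather than a literal:

```typescript
// Illustrative sketch of the env loop; in the real entry point the
// context record is read from cdk.json via app.node.tryGetContext.
interface EnvConfig {
  account: string;
  authDomain?: string; // optional: when absent, domain resources are skipped
  trustedOrigins: string[];
}

function selectEnv(
  context: Record<string, EnvConfig>,
  name: string
): EnvConfig {
  const cfg = context[name];
  if (!cfg) throw new Error(`No context block for environment "${name}"`);
  return cfg;
}

const context: Record<string, EnvConfig> = {
  dev: {
    account: '797098543009',
    authDomain: 'dev.auth.example.org',
    trustedOrigins: ['http://localhost:3000'],
  },
  prod: {
    account: '123456789012',
    authDomain: 'auth.example.org',
    trustedOrigins: ['https://app.example.org'],
  },
};

const env = selectEnv(context, 'dev');
// if (env.authDomain) { ...create certificate, domain mapping, DNS... }
```

Unknown environment names fail fast at synth time instead of deploying a half-configured stack.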

The Auth Service

The auth service itself is surprisingly lean. A single Lambda function running Hono as the HTTP framework and Better Auth for the authentication logic. That's it. No Express, no Fastify, no framework bloat. Hono compiles to a tiny bundle, cold starts are fast, and the routing is clean.

Better Auth turned out to be a great fit. It handles Google OAuth, session management, organizations with roles and permissions, and passkeys through a plugin system. The plugin architecture means I only ship the code I actually use. No kitchen-sink auth framework with features I'll never touch.
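As a rough sketch, the plugin wiring looks something like this. The plugin names follow Better Auth's plugin packages; the option values and the database handle are illustrative, not the project's actual config:

```typescript
// Illustrative Better Auth configuration, not the project's real file
export const auth = betterAuth({
  database: pool, // the Aurora PostgreSQL connection
  socialProviders: {
    google: { clientId: googleClientId, clientSecret: googleClientSecret },
  },
  plugins: [
    organization(), // orgs, roles, permissions per app
    passkey({ rpName: 'Acme Auth' }),
    jwt(), // short-lived tokens for backend authorization
  ],
});
```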

[Interactive diagram: each app's frontend proxies auth requests to the Hono + Better Auth Lambda behind auth.example.org, which reads and writes PostgreSQL]

Every frontend app proxies its /api/auth/* routes to this Lambda. The proxy pattern keeps cookies first-party (critical for OAuth) and makes the auth service invisible to users. They never see auth.example.org in their browser.

Passkeys Across Subdomains

The part I was most excited about: passkeys. WebAuthn lets you replace passwords with device-bound credentials. Touch ID, Face ID, security key, whatever the device supports. The credentials sync via iCloud Keychain or Google Password Manager, so losing a device isn't a lockout.

The trick is the Relying Party ID. This is the domain WebAuthn binds credentials to. Set it to the parent domain (example.org) and a passkey registered on app-one.example.org works on app-two.example.org automatically. One registration, every app.

But there's a tension with local development. example.org isn't a valid rpID for localhost. Hardcoding it breaks dev. Omitting it entirely makes passkeys per-subdomain (not unified). After two failed attempts, I landed on a conditional approach:

```typescript
passkey({
  rpName: 'Acme Auth',
  ...(process.env.PASSKEY_RP_ID && {
    rpID: process.env.PASSKEY_RP_ID
  }),
})
```

In production, the Lambda gets PASSKEY_RP_ID=example.org from CDK. In local dev, the env var isn't set, so Better Auth derives the rpID from the request. Clean separation. The same code runs everywhere.

One thing I learned the hard way: changing the rpID after users have registered passkeys invalidates every existing credential. There's no migration path. This is a WebAuthn protocol constraint. Decide early.

Security Without Complexity

Since the auth service sits behind API Gateway, anyone can technically call it directly and spoof request headers. I added a small piece of Hono middleware that validates the origin against an allowlist before any passkey endpoint is hit:

```typescript
const ALLOWED_RP_IDS = (
  process.env.ALLOWED_RP_IDS || 'localhost,example.org'
).split(',');

// validateOrigin (defined elsewhere) rejects any request whose
// Origin header doesn't resolve to one of the allowed rpIDs
app.on(
  ['POST', 'GET'],
  '/api/auth/passkey/*',
  validateOrigin
);
```

In dev, the allowlist includes localhost. In prod, it doesn't. CDK manages this through environment context, same as everything else.

WebAuthn itself prevents cross-domain credential abuse at the browser level. The middleware is defense-in-depth, not the primary security boundary.
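The heart of that middleware is a pure function over the Origin header. Here's a sketch of the check, with `isAllowedOrigin` as an illustrative name; the real middleware also has to decide policy for requests that carry no Origin header at all:

```typescript
// Does this Origin's hostname match an allowed rpID, either exactly
// (localhost, example.org) or as a subdomain (app.example.org)?
function isAllowedOrigin(origin: string, allowedRpIds: string[]): boolean {
  let host: string;
  try {
    host = new URL(origin).hostname;
  } catch {
    return false; // missing or malformed Origin header
  }
  return allowedRpIds.some(
    (rpId) => host === rpId || host.endsWith(`.${rpId}`)
  );
}
```

The suffix check uses `.${rpId}` rather than the bare rpID so that `notexample.org` can't sneak past an `example.org` allowlist.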

Backend Authorization

Authentication is only half the problem. Backend services need to know who's making the request and what they're allowed to do.

Better Auth has a JWT plugin that issues short-lived tokens with custom claims. I embed the user's organization, role, and permissions directly in the JWT. A shared Lambda Authorizer in the CDK project verifies these tokens locally using JWKS. The verification is pure crypto, about 1ms, no HTTP call to the auth service required.

JWT Authorization Flow:

1. User signs in (Google or passkey)
2. Session created (HttpOnly cookie set)
3. Frontend requests a JWT (authClient.token())
4. JWT issued with userId, orgId, role, and permissions claims
5. Lambda Authorizer verifies it via JWKS (~1ms, zero network calls)
6. Backend reads the claims (zero auth code needed)

JWTs expire after 15 minutes. If access is revoked, existing tokens still work until expiry, but the session is killed server-side so no new tokens can be issued. I considered a token blacklist for instant revocation but decided the 15-minute window was an acceptable tradeoff against the complexity.

The authorizer Lambda ARN is shared across stacks via SSM Parameter Store. Any new app stack just reads the parameter and attaches the authorizer to its API Gateway routes. Zero auth code in backends.
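In CDK terms, that sharing looks roughly like this; the parameter name `/auth/authorizer-arn` is illustrative:

```typescript
import * as ssm from 'aws-cdk-lib/aws-ssm';

// Auth stack: publish the authorizer's ARN
new ssm.StringParameter(this, 'AuthorizerArn', {
  parameterName: '/auth/authorizer-arn',
  stringValue: authorizerFn.functionArn,
});

// Any app stack: read it back and attach the authorizer to its routes
const authorizerArn = ssm.StringParameter.valueForStringParameter(
  this,
  '/auth/authorizer-arn'
);
```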

Organizations as App Boundaries

One design choice that paid off: I repurposed Better Auth's organizations plugin as the separation layer between apps. The plugin was designed for multi-tenant SaaS where one user belongs to multiple companies. But it works just as well when each "organization" represents an application in your ecosystem.

Each app gets its own organization with its own role definitions. The hiring tool has roles like Hiring Admin and Recruiter. The time tracking app has Super Admin, Manager, and User. A single person can be a Recruiter in one app and a Manager in the other, because roles are scoped to the organization, not the user.

When a frontend calls setActive({ organizationSlug: "hiring" }), the session switches context. The next JWT includes that organization's roles and permissions. The backend doesn't need to know about any other app's permission model. It just reads the claims.
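The scoping rule reduces to a lookup keyed on the active organization. The types and slugs below are illustrative, not the real schema:

```typescript
// One person, different roles in different apps (illustrative data)
type Membership = { orgSlug: string; role: string };

const memberships: Membership[] = [
  { orgSlug: 'hiring', role: 'recruiter' },
  { orgSlug: 'time-tracking', role: 'manager' },
];

// The role embedded in the next JWT depends only on the active org
function roleFor(
  all: Membership[],
  activeOrg: string
): string | undefined {
  return all.find((m) => m.orgSlug === activeOrg)?.role;
}
```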

This also makes access control straightforward. When someone leaves the company, you remove them from all organizations in one script. When a new app launches, you create a new organization, define its roles, and seed the membership. No changes to the auth service code. No new database tables. Just data.

The alternative would've been building a custom RBAC layer on top of the auth service. The organizations plugin gave me multi-app role isolation, team grouping, and dynamic permissions out of the box. Sometimes the best architecture is the one where you bend an existing abstraction instead of building a new one.

The Cookie Problem

The hardest part of the entire project wasn't passkeys or JWTs. It was cookies.

OAuth requires a state cookie during the sign-in flow. When the auth service lives on a different domain, SameSite policies make cookie handling miserable. I tried SameSite=None; Secure. Inconsistent across browsers. I tried setting cookies on the API Gateway domain. Callback redirects broke.

The fix was the proxy pattern I mentioned earlier. Every frontend has a catch-all route at /api/auth/[...all] that forwards requests to the Lambda. Because the browser only ever talks to the frontend's own domain, all cookies stay first-party. OAuth callbacks go through the proxy. The proxy sets x-forwarded-host and x-forwarded-proto so Better Auth can reconstruct the correct origin for CORS and rpID derivation.
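The forwarding step can be sketched as a pure function. `buildForward` is an illustrative name; the real route also streams the request body through and relays the response:

```typescript
// Where the proxy forwards to: the auth service's own domain
const AUTH_ORIGIN = 'https://auth.example.org';

function buildForward(incomingUrl: string): {
  url: string;
  headers: Headers;
} {
  const incoming = new URL(incomingUrl);
  // Same path and query, rebased onto the auth service
  const target = new URL(incoming.pathname + incoming.search, AUTH_ORIGIN);
  const headers = new Headers();
  // Better Auth reconstructs the first-party origin from these
  headers.set('x-forwarded-host', incoming.host);
  headers.set('x-forwarded-proto', incoming.protocol.slice(0, -1));
  return { url: target.toString(), headers };
}
```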

This was a full day of debugging that would've been five minutes if I'd started with the proxy pattern.

Secrets and Cold Starts

Secrets management is another area where getting it right early pays off. Initially I used CDK's unsafeUnwrap() to resolve secrets at deploy time and pass them as Lambda environment variables. This puts credentials in plaintext in the CloudFormation template and the Lambda console.

I switched to passing only secret ARNs as env vars. The Lambda fetches actual values from Secrets Manager at cold start via parallel Promise.all() calls. The auth config is cached module-level so it's only initialized once per Lambda instance.

```typescript
// Cold start: fetch all secrets in parallel
const [dbCreds, authSecret, oauthCreds] =
  await Promise.all([
    getJsonSecret(requireEnv('DB_CREDS_ARN')),
    getSecret(requireEnv('AUTH_SECRET_ARN')),
    getJsonSecret(requireEnv('OAUTH_SECRET_ARN')),
  ]);
```

Adds 100-200ms to cold start. Worth it to never have credentials exposed in CloudFormation.
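The module-level cache is what keeps that cost to one-time-per-instance. A minimal sketch, where `loadSecrets` stands in for the Promise.all above and the config shape is illustrative:

```typescript
type AuthConfig = { dbUrl: string; authSecret: string };

let initCount = 0; // only here to demonstrate the caching

// Stands in for the parallel Secrets Manager fetches at cold start
async function loadSecrets(): Promise<AuthConfig> {
  initCount++;
  return { dbUrl: 'postgres://user:pass@host/db', authSecret: 's3cret' };
}

// Module scope survives across invocations of a warm Lambda instance,
// so every request after the first reuses the same promise
let cached: Promise<AuthConfig> | undefined;
function getAuthConfig(): Promise<AuthConfig> {
  if (!cached) cached = loadSecrets();
  return cached;
}
```

Caching the promise (rather than the resolved value) also means concurrent requests during a cold start share one fetch instead of racing.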

Migration

I had 100+ users in a Supabase project. The migration script parses the pg_dump, extracts user records and OAuth identity links, and inserts into Better Auth's schema. The main gotcha: different auth systems assign different UUIDs to the same user. Always map by email, never by provider ID.

The migration script is idempotent (ON CONFLICT DO NOTHING), so running it twice is harmless.
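The email-keyed mapping can be sketched like this; the record shapes and the case-insensitive match are illustrative details, not the script's exact code:

```typescript
type SupabaseUser = { id: string; email: string };
type BetterAuthUser = { id: string; email: string };

// Map old user IDs to new ones by email, never by provider UUID,
// since each auth system mints its own IDs for the same person
function buildIdMap(
  oldUsers: SupabaseUser[],
  newUsers: BetterAuthUser[]
): Map<string, string> {
  const byEmail = new Map<string, string>(
    newUsers.map((u) => [u.email.toLowerCase(), u.id])
  );
  const map = new Map<string, string>();
  for (const u of oldUsers) {
    const newId = byEmail.get(u.email.toLowerCase());
    if (newId) map.set(u.id, newId);
  }
  return map;
}
```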

Dev vs. Prod

The same CDK code deploys both environments. The differences live entirely in cdk.json context:

| Config | Dev | Prod |
| --- | --- | --- |
| rpID | example.org | example.org |
| Allowed origins | includes localhost | only *.example.org |
| Trusted origins | includes localhost:3000 | only production frontends |
| Auth domain | dev.auth.example.org | auth.example.org |

Deploying a new environment is: add a context block, run cdk deploy. The VPC, database, auth service, custom domain, certificates, and DNS records all come up from the same stack definitions. No snowflakes.

What I Ended Up With

One auth service. 100+ users migrated. Three apps sharing a single user pool. Passkeys that work across every subdomain. Backend authorization with zero auth code in application Lambdas. Dev and prod deployed from the same CDK stacks.

The total infrastructure cost is near-zero during development. Aurora Serverless v2 scales to 0 ACUs when idle, so I only pay for the seconds the database is actually handling queries. Lambda is pay-per-invocation. API Gateway charges per request. During active development it runs a few dollars a month. During quiet weeks it's essentially free. No always-on servers. No auth SaaS subscription.

The whole thing took about a week, and most of that time was the cookie/CORS debugging. If I did it again, I'd start with the proxy pattern and save myself a day.