
Designing a Multi-Tenant SaaS Architecture in Laravel: Lessons from Production

May 20, 2025
Laravel · SaaS · Architecture · Multi-tenancy · PHP

When we built the multi-tenant version of our marketplace platform, we had to choose a tenancy strategy early — and that choice shapes everything downstream.

The Three Options We Evaluated

**1. Shared database, shared schema** — All tenants in one table, filtered by `tenant_id`. Cheapest to operate, hardest to isolate. One bad query can leak data across tenants. We ruled this out for a B2B product where clients have contractual data isolation requirements.
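For context, the shared-schema filtering in option 1 usually hangs off an Eloquent global scope, and a missed scope is exactly how cross-tenant leaks happen. A minimal sketch, where the `TenantScope` class and the `currentTenant` container binding are hypothetical names rather than anything from our codebase:

```php
<?php

namespace App\Models\Scopes;

use Illuminate\Database\Eloquent\Builder;
use Illuminate\Database\Eloquent\Model;
use Illuminate\Database\Eloquent\Scope;

// Applied to every tenant-owned model via Model::addGlobalScope().
// Any query path that bypasses Eloquent (raw SQL, a stray
// withoutGlobalScope() call) silently loses this filter -- which is
// the isolation risk described above.
class TenantScope implements Scope
{
    public function apply(Builder $builder, Model $model): void
    {
        $builder->where(
            $model->qualifyColumn('tenant_id'),
            app('currentTenant')->id
        );
    }
}
```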

**2. Shared database, separate schemas** — One schema per tenant in the same Postgres instance. Good middle ground, but Laravel's Eloquent doesn't handle schema-switching elegantly without a package like `stancl/tenancy`.

**3. Database per tenant** — Separate MySQL database per tenant. Maximum isolation, straightforward backup/restore per client, easy to migrate a single tenant. The cost: connection pool management and migration orchestration.

What We Chose and Why

We went with database-per-tenant using `stancl/tenancy` v3. The deciding factor was a client requirement for GDPR data residency — they needed their data hosted on a specific server. Either shared-database option makes that impossible, because every tenant lives in the same instance.
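What makes the residency requirement tractable with `stancl/tenancy` is that per-tenant connection details can live on the tenant record itself, via attributes with the `tenancy_db_` prefix that the package merges into the tenant connection config. A sketch, with an invented tenant ID and hostname:

```php
<?php

use Stancl\Tenancy\Database\Models\Tenant;

// Hypothetical example: pin one tenant's database to a specific host.
// stancl/tenancy v3 reads `tenancy_db_*` attributes from the tenant
// model and overrides the tenant connection template with them.
$tenant = Tenant::create([
    'id' => 'acme',
    'tenancy_db_name' => 'tenant_acme',
    'tenancy_db_host' => 'eu-db-1.internal', // EU-resident server
]);
```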

The Operational Reality

Running migrations across 200+ tenant databases requires a reliable orchestration script. We built a Laravel command that runs migrations in batches of 20 with retry logic and Slack alerts on failure. It takes ~4 minutes to migrate all tenants.
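A condensed sketch of that command (the class name, signature, and Slack wiring are illustrative; the real version carries more logging). It chunks tenants into batches, retries each failed migration, and reports stragglers through a Slack-backed log channel. `tenants:migrate` with the `--tenants` option is the tenant-aware migration command `stancl/tenancy` ships:

```php
<?php

namespace App\Console\Commands;

use App\Models\Tenant;
use Illuminate\Console\Command;
use Illuminate\Support\Facades\Artisan;
use Illuminate\Support\Facades\Log;

class MigrateTenants extends Command
{
    protected $signature = 'tenants:migrate-all {--batch=20} {--retries=2}';
    protected $description = 'Run migrations across all tenant databases in batches';

    public function handle(): int
    {
        $failed = [];

        // Process tenants in batches so a single bad migration
        // doesn't stall the whole fleet.
        Tenant::query()->chunkById((int) $this->option('batch'), function ($tenants) use (&$failed) {
            foreach ($tenants as $tenant) {
                if (! $this->migrateWithRetry($tenant, (int) $this->option('retries'))) {
                    $failed[] = $tenant->id;
                }
            }
        });

        if ($failed !== []) {
            // A log channel backed by a Slack webhook is the simplest
            // way to get the alert described above.
            Log::channel('slack')->error('Tenant migrations failed', ['tenants' => $failed]);
            return self::FAILURE;
        }

        return self::SUCCESS;
    }

    private function migrateWithRetry(Tenant $tenant, int $retries): bool
    {
        for ($attempt = 0; $attempt <= $retries; $attempt++) {
            try {
                Artisan::call('tenants:migrate', ['--tenants' => [$tenant->id]]);
                return true;
            } catch (\Throwable $e) {
                $this->warn("Tenant {$tenant->id} attempt {$attempt} failed: {$e->getMessage()}");
            }
        }

        return false;
    }
}
```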

Connection pooling is the other challenge. With 200 tenants, you can't keep 200 persistent connections open. We use lazy connections and a 30-second idle timeout.
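In practice that means opening the tenant connection only when the first query runs (Laravel connections are lazy by default) and tearing it down as soon as the tenant context ends; the 30-second idle cutoff itself is enforced server-side with MySQL's `wait_timeout`. A sketch:

```php
<?php

use Illuminate\Support\Facades\DB;

// The tenant connection is configured on initialize() but only
// actually opened when the first query runs.
tenancy()->initialize($tenant);

// ... run tenant-scoped work ...

// Switch back to the central context, then close the tenant
// connection explicitly so it doesn't sit idle in the worker.
// 'tenant' is the default connection name used by stancl/tenancy.
tenancy()->end();
DB::purge('tenant');

// Server side, the safety net: `SET GLOBAL wait_timeout = 30` lets
// MySQL reap any connection a worker forgot to purge.
```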

What I'd Do Differently

Start with `stancl/tenancy` from day one rather than retrofitting it. The package's central/tenant context switching is elegant, but adding it to an existing codebase with 300+ models is painful.