TenantHawk — Complete guide

From first bootstrap to go-live, without friction.

Complete operational guide: understand the architecture, execute the steps in the right order, and reach production with full control over security, performance and scalability. No fluff, only what is truly useful.

MIT open source template created by Alessio Quagliara

Ready in 30-45 min · 9 operational sections · Zero fluff

00

The stack at a glance

Every piece has a precise role. Nothing is there to impress: everything is there to work.

Technology                      Role
FastAPI (async)                 Backend + Admin SSR
SQLAlchemy 2 async + asyncpg    ORM + connection pool
PostgreSQL 16                   Data persistence
Redis 7                         Sliding-window session store
Next.js                         Marketing site + SEO
Jinja2 + HTMX                   Admin UI without SPA overhead
Traefik v3                      Reverse proxy + automatic TLS
Stripe                          Checkout + billing webhook
n8n                             Workflow automation + AI agent
LiteLLM                         Multi-provider LLM proxy
Alembic                         DB schema migrations
k6                              Load testing
The FastAPI backend serves both JSON APIs and the SSR admin with Jinja2. There is no separate SPA for admin: this reduces attack surface, removes CORS complexity and simplifies session handling.
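Redis's role as a sliding-window session store deserves a concrete picture: each authenticated request re-arms the session TTL, so a session only expires after a period of inactivity, not at a fixed deadline. A minimal in-memory sketch of the idea (Redis holds this state in the real stack; the class and TTL value here are illustrative):

```python
import time

IDLE_TTL = 1800  # 30-minute idle window, hypothetical value

class SlidingSessionStore:
    def __init__(self):
        self._store = {}  # session_id -> (data, expires_at)

    def set(self, session_id, data, now=None):
        now = time.time() if now is None else now
        self._store[session_id] = (data, now + IDLE_TTL)

    def get(self, session_id, now=None):
        """Return session data and slide the expiry window forward."""
        now = time.time() if now is None else now
        entry = self._store.get(session_id)
        if entry is None or entry[1] < now:
            self._store.pop(session_id, None)
            return None
        data, _ = entry
        self._store[session_id] = (data, now + IDLE_TTL)  # re-arm the TTL
        return data

store = SlidingSessionStore()
store.set("abc", {"user_id": 1}, now=0)
assert store.get("abc", now=1700) == {"user_id": 1}    # window slid forward at t=1700
assert store.get("abc", now=3400) == {"user_id": 1}    # still alive thanks to the slide
assert store.get("abc", now=3400 + 1801) is None       # 30 min idle -> expired
```

With Redis the same effect comes from calling `EXPIRE` (or `SETEX`) on every read, which is what a session middleware typically does.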

01

Where TenantHawk performs best

Template built for real B2B multi-tenancy: each customer is an isolated account with users, roles and subscription.

Ideal scenarios

Scenario                      Why it fits
B2B Management SaaS           Each customer is a tenant with roles, plan and billing already wired.
AI automations for agencies   n8n + LiteLLM with isolated workflows per customer.
Multi-client CMS              Editorial teams per tenant with granular RBAC.
B2B marketplace MVP           SUPPLIER and CUSTOMER roles already mapped in the domain.
Multi-organization LMS        Schools and companies as isolated tenants with subscriptions.

Less fitting scenarios

Scenario                      Why not
Pure B2C consumer app         Multi-tenancy becomes unnecessary overhead.
Realtime editor like Figma    You need a collaboration-first architecture, not plain REST.
Massive realtime product      REST is not enough; dedicated WebSockets are required.
Mobile-only app               The Next.js marketing layer becomes secondary.

02

Local development: from zero to running

Minimum path to start quickly. Every command has a purpose in the correct order.

1. .env file for development

bash
# Local PostgreSQL
POSTGRES_USER=dev_user
POSTGRES_PASSWORD=dev_password
POSTGRES_DB=dev_db
APP_DATABASE_URL=postgresql+asyncpg://dev_user:dev_password@db:5432/dev_db

# Redis
APP_REDIS_URL=redis://redis:6379

# Dev keys (not secure, development only)
APP_SECRET_KEY=dev_secret_key_qualsiasi
APP_N8N_ENCRYPTION_KEY=dev_n8n_key_qualsiasi
APP_LITELLM_MASTER_KEY=dev_litellm_key

# Stripe TEST
APP_STRIPE_SECRET_KEY=sk_test_xxxxxxxxxx
APP_STRIPE_PUBLISHABLE_KEY=pk_test_xxxxxxxxxx
APP_STRIPE_WEBHOOK_SECRET=whsec_test_xxxxxxxxxx
APP_STRIPE_PRICE_BASE=price_test_xxx
APP_STRIPE_PRICE_PRO=price_test_xxx
APP_STRIPE_PRICE_COMPANY=price_test_xxx

# Backend dev
APP_HOST=0.0.0.0
APP_PORT=8000
APP_RELOAD=true    # ← hot reload enabled
APP_WORKERS=1

2. Full bootstrap

bash
git clone https://github.com/AlessioQuagliara/SaaS_Template.git
cd SaaS_Template
cp .env.example .env
# Edit .env with the dev values above

# Start the full stack
docker compose up --build

# Apply DB migrations (first start only)
docker compose exec backend alembic revision --autogenerate -m "Initialize"
docker compose exec backend alembic upgrade head

# Create the first tenant + admin
docker compose exec backend python -m app.cli seed tenant-admin \
  --slug dev \
  --nome-tenant "Tenant Dev" \
  --admin-email dev@dev.it \
  --admin-password "Dev123!"

Local endpoints

Service                 URL
Admin backend           http://admin.localhost
Landing frontend        http://www.localhost
Traefik dashboard       http://localhost:8080
LiteLLM UI              http://litellm.localhost
n8n                     http://n8n.localhost
API Docs (Swagger)      http://admin.localhost/docs

Daily workflow

bash
# Backend code changes reload automatically (APP_RELOAD=true)

# New migration after changing a model
docker compose exec backend alembic revision --autogenerate -m "Add field X"
docker compose exec backend alembic upgrade head

# Real-time logs
docker compose logs -f backend

# Shell into the backend container
docker compose exec backend bash

# Direct DB connection
docker compose exec db psql -U dev_user -d dev_db

# Restart a single service
docker compose restart backend

If your browser does not resolve .localhost, add this line to /etc/hosts:

# /etc/hosts — add if the browser does not resolve .localhost
127.0.0.1 admin.localhost www.localhost litellm.localhost n8n.localhost

03

Production: Docker, TLS, hardening

The difference between a demo and a real product is here. No shortcuts: every point has security implications.

1. Production .env file

bash
# Generate each of these with: openssl rand -hex 32
APP_SECRET_KEY=<openssl rand -hex 32>
APP_N8N_ENCRYPTION_KEY=<openssl rand -hex 32>

# PostgreSQL
POSTGRES_USER=saas_user
POSTGRES_PASSWORD=<strong_password>
POSTGRES_DB=saas_db
APP_DATABASE_URL=postgresql+asyncpg://saas_user:<password>@db:5432/saas_db

# Redis
APP_REDIS_URL=redis://redis:6379

# Email (Resend)
APP_RESEND_API_KEY=re_live_xxxxxxxxxx
APP_RESET_EMAIL_FROM=TuoSaaS <no-reply@tuodominio.com>
APP_BASE_URL=https://admin.tuodominio.com
APP_FRONTEND_BASE_URL=https://www.tuodominio.com

# Stripe LIVE
APP_STRIPE_SECRET_KEY=sk_live_xxxxxxxxxx
APP_STRIPE_PUBLISHABLE_KEY=pk_live_xxxxxxxxxx
APP_STRIPE_WEBHOOK_SECRET=whsec_xxxxxxxxxx
APP_STRIPE_PRICE_BASE=price_xxxxxxxxxx
APP_STRIPE_PRICE_PRO=price_xxxxxxxxxx
APP_STRIPE_PRICE_COMPANY=price_xxxxxxxxxx

# LiteLLM
APP_DEEPSEEK_API_KEY=sk-xxxxxxxxxx
APP_LITELLM_MASTER_KEY=<secure_random_string>


# Frontend
NEXT_PUBLIC_API_BASE_URL=https://admin.tuodominio.com

# Production backend
APP_HOST=0.0.0.0
APP_PORT=8000
APP_RELOAD=false
APP_WORKERS=2
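The difference between the dev and production .env files boils down to a handful of rules worth checking mechanically before deploy. A hypothetical pre-flight validator (the function and its rules mirror the settings above; it is not part of the template):

```python
# Sketch of a fail-fast check that dev defaults have not leaked into production.
def check_production_env(env: dict) -> list[str]:
    problems = []
    if env.get("APP_RELOAD", "false").lower() == "true":
        problems.append("APP_RELOAD must be false in production")
    if int(env.get("APP_WORKERS", "1")) < 2:
        problems.append("APP_WORKERS should be >= 2")
    secret = env.get("APP_SECRET_KEY", "")
    if len(secret) < 64 or secret.startswith("dev_"):
        problems.append("APP_SECRET_KEY looks weak; generate it with openssl rand -hex 32")
    if env.get("APP_STRIPE_SECRET_KEY", "").startswith("sk_test_"):
        problems.append("APP_STRIPE_SECRET_KEY is a Stripe TEST key")
    return problems

bad = {"APP_RELOAD": "true", "APP_WORKERS": "1",
       "APP_SECRET_KEY": "dev_secret_key_qualsiasi",
       "APP_STRIPE_SECRET_KEY": "sk_test_xxx"}
assert len(check_production_env(bad)) == 4

good = {"APP_RELOAD": "false", "APP_WORKERS": "2",
        "APP_SECRET_KEY": "a" * 64,
        "APP_STRIPE_SECRET_KEY": "sk_live_xxx"}
assert check_production_env(good) == []
```

The 64-character minimum comes from `openssl rand -hex 32`, which emits 64 hex characters.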

2. Traefik with HTTPS and Let's Encrypt

The TLS block is already present in compose.yaml but commented out. For production, uncomment it and complete these steps:

yaml
command:
  - "--api.dashboard=true"
  - "--providers.docker=true"
  - "--providers.docker.exposedbydefault=false"
  - "--entrypoints.web.address=:80"
  - "--entrypoints.web.http.redirections.entryPoint.to=websecure"
  - "--entrypoints.web.http.redirections.entryPoint.scheme=https"
  - "--entrypoints.websecure.address=:443"
  - "--certificatesresolvers.le.acme.httpchallenge=true"
  - "--certificatesresolvers.le.acme.httpchallenge.entrypoints=web"
  - "--certificatesresolvers.le.acme.email=tua@email.com"
  - "--certificatesresolvers.le.acme.storage=/acme.json"

ports:
  - "80:80"
  - "443:443"

volumes:
  - /var/run/docker.sock:/var/run/docker.sock:ro
  - ./traefik/acme.json:/acme.json
bash
mkdir -p traefik
touch traefik/acme.json
chmod 600 traefik/acme.json   # Traefik rejects the file if its permissions are too open

# Basic auth for the dashboard (never expose it unprotected)
echo $(htpasswd -nb admin <your_password>) | sed -e s/\$/\$\$/g
# Output: admin:$$apr1$$xxxxx  ← paste into the Traefik labels
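The `sed` step doubles every `$` because Docker Compose treats `$` inside labels as variable interpolation, so the apr1/bcrypt hash must be escaped. The transformation itself is trivial, which a pure-Python sketch makes explicit:

```python
# Same transformation the sed command performs on the htpasswd output:
# every "$" becomes "$$" so Compose does not try to interpolate it.
def escape_for_compose(htpasswd_line: str) -> str:
    return htpasswd_line.replace("$", "$$")

line = "admin:$apr1$xyz$hashedhash"
assert escape_for_compose(line) == "admin:$$apr1$$xyz$$hashedhash"
```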

3. Labels for TLS and dashboard auth

yaml
# Backend labels in compose.yaml — TLS + dashboard auth
- "traefik.http.routers.backend-admin.entrypoints=websecure"
- "traefik.http.routers.backend-admin.tls.certresolver=le"
- "traefik.http.routers.dashboard.rule=Host(`traefik.tuodominio.com`)"
- "traefik.http.routers.dashboard.middlewares=auth"
- "traefik.http.middlewares.auth.basicauth.users=admin:$$apr1$$xxxxx"
Never expose the Traefik dashboard without authentication. In development it can stay open on :8080 for convenience, but in production it must be protected or disabled.

04

CLI: admin modules without boilerplate

The CLI generates the full structure of a tenant-aware module. You focus on domain logic, not scaffolding.

What a single command generates

text
backend/app/
├── routes/admin/<nome>.py          ← FastAPI + Jinja2 routes
├── templates/admin/<nome>/
│   └── index.html                   ← Base Tailwind template
├── models/<nome>.py                 ← (optional) SQLAlchemy model
├── schemas/<nome>.py                ← (optional) Pydantic schema
└── routes/admin/__init__.py         ← Updated automatically

Real commands

bash
# Base module (all authenticated roles in the tenant)
docker compose exec backend python -m app.cli admin create-module clienti

# Module restricted to SUPERUTENTE + model + schema
docker compose exec backend python -m app.cli admin create-module ordini-vendite \
  --label "Ordini e Vendite" \
  --superuser-only \
  --with-model \
  --with-schema

# List existing modules
docker compose exec backend python -m app.cli admin list-modules
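Notice that `ordini-vendite` on the command line becomes `ordini_vendite` in the generated files: Python modules and URL paths need a snake_case identifier. A hypothetical sketch of the normalization a CLI like this typically applies (the real `app.cli` implementation may differ):

```python
import re

# Illustrative name normalization: kebab-case / spaced input -> safe snake_case.
def normalize_module_name(raw: str) -> str:
    name = raw.strip().lower()
    name = re.sub(r"[\s-]+", "_", name)      # spaces and dashes -> underscore
    name = re.sub(r"[^a-z0-9_]", "", name)   # drop anything unsafe for an identifier
    if not name or name[0].isdigit():
        raise ValueError(f"invalid module name: {raw!r}")
    return name

assert normalize_module_name("ordini-vendite") == "ordini_vendite"
assert normalize_module_name("Clienti") == "clienti"
```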

Generated route (example with --superuser-only)

python
# routes/admin/ordini_vendite.py — generated by the CLI
@router.get("/ordini_vendite", response_class=HTMLResponse)
async def ordini_vendite_page(
    request: Request,
    tenant_obj: Tenant = Depends(prendi_tenant_con_accesso),
    utente_corrente: Utente = Depends(prendi_utente_corrente),
    ruolo_corrente: str = Depends(prendi_ruolo_corrente),
    _: None = Depends(richiede_ruolo([UtenteRuolo.SUPERUTENTE])),  # guard RBAC
):
    return templates.TemplateResponse(
        request,
        "admin/ordini_vendite/index.html",
        {"tenant": tenant_obj, "utente": utente_corrente, "ruolo_corrente": ruolo_corrente},
    )
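The `richiede_ruolo` guard above is a dependency factory: it captures the allowed roles and returns a callable that FastAPI resolves on each request. A framework-free sketch of that pattern (names mirror the template but the exception and roles here are illustrative):

```python
# Dependency-factory pattern behind the RBAC guard: a closure over the
# allowed roles, raising when the current role is not in the set.
class Forbidden(Exception):
    pass

def richiede_ruolo_sketch(allowed_roles: set[str]):
    def guard(current_role: str) -> None:
        if current_role not in allowed_roles:
            raise Forbidden(f"role {current_role!r} not allowed")
    return guard

only_super = richiede_ruolo_sketch({"SUPERUTENTE"})
only_super("SUPERUTENTE")  # passes silently

denied = False
try:
    only_super("MEMBRO")
except Forbidden:
    denied = True
assert denied
```

In the real route, FastAPI calls the returned guard with the resolved `prendi_ruolo_corrente` value and converts the failure into an HTTP 403.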

How to enrich with real DB logic

After generation, the pattern to add real queries is always the same: import the model, run a filtered select by tenant_id, then pass data to the template.

python
# After generation, add the real DB logic
from sqlalchemy import select
from sqlalchemy.ext.asyncio import AsyncSession  # needed for the db dependency below
from app.core.database import get_db
from app.models.ordini_vendite import OrdiniVendite

@router.get("/ordini_vendite", response_class=HTMLResponse)
async def ordini_vendite_page(
    request: Request,
    tenant_obj: Tenant = Depends(prendi_tenant_con_accesso),
    utente_corrente: Utente = Depends(prendi_utente_corrente),
    ruolo_corrente: str = Depends(prendi_ruolo_corrente),
    _: None = Depends(richiede_ruolo([UtenteRuolo.SUPERUTENTE])),
    db: AsyncSession = Depends(get_db),
):
    result = await db.execute(
        select(OrdiniVendite).where(OrdiniVendite.tenant_id == tenant_obj.id)
    )
    ordini = result.scalars().all()
    return templates.TemplateResponse(
        request,
        "admin/ordini_vendite/index.html",
        {"tenant": tenant_obj, "utente": utente_corrente,
         "ruolo_corrente": ruolo_corrente, "ordini": ordini},
    )
The filter tenant_id == tenant_obj.id is the isolation boundary between tenants. Never skip it when querying tenant-aware tables.

05

Performance: measure before optimizing

The included login test simulates 700 concurrent users on the full CSRF flow. Use it as a baseline.

Install and first test

bash
# macOS
brew install k6

# Linux (Debian/Ubuntu) — k6 ships from Grafana's own apt repository, not the default one
sudo apt-get install k6

# Docker (no installation)
docker run --rm -i grafana/k6 run - <test/test_login.js
bash
# Standard test: 700 VUs, 30 seconds (update the credentials first)
k6 run test/test_login.js

# Lightweight test for development
k6 run --vus 10 --duration 10s test/test_login.js

# JSON output for analysis
k6 run --out json=result.json test/test_login.js
Before running the test, update credentials in test/test_login.js with a real user from your dev tenant. The test also simulates CSRF token extraction from the login page HTML.
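The `--out json` option emits newline-delimited JSON where each sample is a `"Point"` entry. A sketch of computing the p95 latency from such a file, so you can post-process `result.json` without extra tooling (the nearest-rank percentile here is a simplification; k6's own summary uses its internal aggregation):

```python
import json
import math

# Extract http_req_duration samples from k6's NDJSON output and compute p95.
def p95_from_k6_json(lines) -> float:
    values = []
    for line in lines:
        entry = json.loads(line)
        if entry.get("type") == "Point" and entry.get("metric") == "http_req_duration":
            values.append(entry["data"]["value"])
    values.sort()
    idx = max(0, math.ceil(0.95 * len(values)) - 1)  # nearest-rank percentile
    return values[idx]

samples = [
    json.dumps({"type": "Point", "metric": "http_req_duration", "data": {"value": v}})
    for v in [100, 120, 150, 180, 200, 220, 250, 300, 400, 900]
]
assert p95_from_k6_json(samples) == 900
```

In practice you would iterate over `open("result.json")` instead of an in-memory list.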

Template to test any route

javascript
// test/test_custom.js — template for testing any route
import http from 'k6/http'
import { check, sleep } from 'k6'

const BASE_URL = 'http://admin.localhost:8000'
const SESSION = 'id_sessione_utente=<real_cookie>'

export const options = {
  stages: [
    { duration: '10s', target: 50 },    // ramp up
    { duration: '30s', target: 200 },   // sustained load
    { duration: '10s', target: 0 },     // ramp down
  ],
  thresholds: {
    http_req_duration: ['p(95)<500'],   // 95% of requests under 500ms
    http_req_failed: ['rate<0.01'],     // less than 1% errors
  },
}

export default function () {
  const res = http.get(`${BASE_URL}/demo/dashboard`, {
    headers: { Cookie: SESSION },
  })
  check(res, {
    'status 200': r => r.status === 200,
    'under 200ms': r => r.timings.duration < 200,
    'html present': r => r.headers['Content-Type'].includes('text/html'),
  })
  sleep(0.5)
}

How to read results

Metric             Good threshold    What it measures
p(95) latency      < 300ms           95% of requests served fast
p(99) latency      < 1000ms          Slow tail under control
http_req_failed    < 1%              Near-zero HTTP errors
checks             > 99%             Application logic correct
http_reqs/s        case-dependent    Overall throughput

06

LiteLLM: policies, costs and GDPR

LiteLLM is the single control point for all LLM providers. Switch providers without touching n8n or backend.

Access the UI at http://litellm.localhost using APP_LITELLM_MASTER_KEY from your .env.

1. Provider configuration (litellm_config.yaml)

yaml
# litellm_config.yaml
model_list:
  # Main provider: DeepSeek (cheap, fast)
  - model_name: deepseek-chat
    litellm_params:
      model: deepseek/deepseek-chat
      api_key: os.environ/APP_DEEPSEEK_API_KEY

  - model_name: deepseek-reasoner
    litellm_params:
      model: deepseek/deepseek-reasoner
      api_key: os.environ/APP_DEEPSEEK_API_KEY

  # EU provider for GDPR (Mistral AI — data centers in France)
  - model_name: mistral-large
    litellm_params:
      model: mistral/mistral-large-latest
      api_key: os.environ/APP_MISTRAL_API_KEY

  # Optional OpenAI fallback
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/APP_OPENAI_API_KEY

general_settings:
  master_key: os.environ/APP_LITELLM_MASTER_KEY

2. Tenant virtual keys with budget limits and GDPR

For tenants requiring EU data residency (e.g. GDPR), create a virtual key allowing only EU-hosted models:

bash
# Create an EU-only virtual key for a tenant with a data-residency requirement
curl -X POST http://litellm.localhost/key/generate \
  -H "Authorization: Bearer $APP_LITELLM_MASTER_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "models": ["mistral-large"],
    "metadata": {"tenant_id": "azienda-eu"},
    "max_budget": 10.0,
    "budget_duration": "30d"
  }'
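If you issue these keys from the backend (for example at tenant activation), the payload construction is worth centralizing so plan limits stay consistent. A sketch of building the `/key/generate` body per tenant — the plan-to-budget mapping here is illustrative, not part of the template:

```python
import json

# Hypothetical plan-to-budget mapping; adjust to your pricing.
PLAN_BUDGETS = {"base": 5.0, "pro": 10.0, "company": 50.0}

def virtual_key_payload(tenant_id: str, plan: str, eu_only: bool) -> str:
    """Build the JSON body for LiteLLM's /key/generate endpoint."""
    payload = {
        "models": ["mistral-large"] if eu_only else ["deepseek-chat", "mistral-large"],
        "metadata": {"tenant_id": tenant_id},
        "max_budget": PLAN_BUDGETS[plan],
        "budget_duration": "30d",
    }
    return json.dumps(payload)

body = json.loads(virtual_key_payload("azienda-eu", "pro", eu_only=True))
assert body["models"] == ["mistral-large"]
assert body["max_budget"] == 10.0
```

POST the result to `/key/generate` with the master key in the `Authorization` header, exactly as in the curl example above.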

3. Spend monitoring

bash
# View usage and spend per key
curl http://litellm.localhost/spend/keys \
  -H "Authorization: Bearer $APP_LITELLM_MASTER_KEY"

# Restart LiteLLM after changing the YAML (no need to touch the rest of the stack)
docker compose restart litellm
After every change to litellm_config.yaml, run docker compose restart litellm. You do not need to restart the full stack.

07

n8n: setup and integration

n8n orchestrates automated workflows. TenantHawk remains the system of record for tenants, users and billing.

1. First access

Open http://n8n.localhost. On first start, n8n guides you through owner account creation and free community license activation at community.n8n.io.

Critical: the APP_N8N_ENCRYPTION_KEY in .env must be set before first startup. If you change it later, n8n cannot decrypt saved credentials and you must reset by deleting the volume.

2. Connect n8n to LiteLLM

text
# In n8n: Credentials → New → OpenAI API
API Key:  <APP_LITELLM_MASTER_KEY>
Base URL: http://litellm:4000
# Use the Docker service name, NOT litellm.localhost
# (n8n runs inside the Docker network and reaches services by name)

3. Typical workflow: automatic tenant onboarding

text
Webhook trigger (e.g. new tenant registered)
  → HTTP Request → backend: GET /{slug}/utenti
  → OpenAI node (via LiteLLM) → generates the welcome email
  → HTTP Request → backend: POST /api/email/send
  → Wait 3 days
  → HTTP Request → backend: GET /{slug}/sottoscrizione
  → If: trial expires in 2 days?
      → Yes: OpenAI → upgrade reminder → Resend → send email
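The "trial expires in 2 days?" branch is the only real logic in that workflow, and it is worth pinning down precisely: remind only when the trial is still alive and within the reminder window. A pure-function sketch of what the If node evaluates (names are illustrative):

```python
from datetime import date

# Decide whether the workflow should send the upgrade reminder today.
def should_send_trial_reminder(trial_end: date, today: date) -> bool:
    days_left = (trial_end - today).days
    return 0 < days_left <= 2   # alive, and at most 2 days from expiry

today = date(2024, 5, 1)
assert should_send_trial_reminder(date(2024, 5, 3), today) is True    # 2 days left
assert should_send_trial_reminder(date(2024, 5, 10), today) is False  # too early
assert should_send_trial_reminder(date(2024, 4, 30), today) is False  # already expired
```

Keeping the condition strict (`> 0`) avoids emailing upgrade reminders to tenants whose trial has already lapsed into the grace-period flow described in section 08.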

4. Workflow backup

bash
# Export all workflows
docker compose exec n8n n8n export:workflow \
  --all \
  --output=/home/node/.n8n/backup.json

# Copy it out of the container
docker cp $(docker compose ps -q n8n):/home/node/.n8n/backup.json ./backup/

Full reset (only if required)

bash
# Full n8n reset (WARNING: this deletes all workflows)
docker compose down
docker volume rm saas_template_n8n_data
docker compose up -d

08

Billing: grace period and cascade delete

The tenant lifecycle is automated. No tenant is deleted without a recovery window.

The function applica_policy_disattivazione_tenant() is called at every tenant load and automatically handles state transitions:

python
# The tenant deactivation policy works like this:
# 1. ATTIVO / PROVA → no action
# 2. SCADUTO / CANCELLATO → enters SOSPESO for 14 days (grace period)
# 3. SOSPESO → after 14 days, a live check against Stripe, then cascade delete
#
# Cascade delete removes: roles, reset tokens, users with no other tenant, the subscription, the tenant
# If a user is shared with other tenants, their account is simply moved
Status                      Automatic action
ACTIVE / TRIAL              No action, access guaranteed
EXPIRED / CANCELED          Moves to SUSPENDED with a 14-day grace period
SUSPENDED (within grace)    Access blocked, no deletion yet
SUSPENDED (grace expired)   Live Stripe check → cascade delete if confirmed
Before any destructive action, the system always performs a live Stripe verification. If the Stripe connection fails (verifica_live_ok = False), the policy switches to fail-safe mode and deletes nothing.

09

Web3: when and how to integrate it

TenantHawk is intentionally Web2-first for B2B reliability. Web3 is used only where it creates real competitive advantage.

Today identity, sessions and billing are centralized: cookies httpOnly, Redis session store and Stripe as payment authority. This reduces complexity and speeds time-to-market. Web3 makes sense in four concrete cases: tokenized access, on-chain billing, wallet-first login and immutable on-chain audit trail.

1. Dependencies

bash
# requirements.txt — add these
siwe>=2.1.0
web3>=6.0.0

# frontend
npm install wagmi viem @rainbow-me/rainbowkit

2. Add wallet field to User model

python
# backend/app/models/utente.py — add the wallet field
wallet_address: Mapped[str | None] = mapped_column(
    String(42),   # fixed-length Ethereum address: 0x + 40 hex chars
    unique=True,
    index=True,
    nullable=True,
)
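Why `String(42)`: an Ethereum address is `0x` followed by exactly 40 hex characters. A format check worth running before ever writing `wallet_address` (this validates the shape only; EIP-55 checksum validation is a separate, stricter step):

```python
import re

# Shape check for an Ethereum address: "0x" + 40 hex chars = 42 characters total.
ETH_ADDRESS_RE = re.compile(r"^0x[0-9a-fA-F]{40}$")

def is_valid_eth_address(addr: str) -> bool:
    return bool(ETH_ADDRESS_RE.fullmatch(addr))

assert is_valid_eth_address("0x" + "ab" * 20) is True
assert is_valid_eth_address("0x1234") is False     # too short
assert is_valid_eth_address("ab" * 21) is False    # missing 0x prefix
```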

3. SIWE backend routes

python
# backend/app/routes/auth/web3.py
import secrets
from datetime import timedelta

from fastapi import Depends, HTTPException, Response
from siwe import SiweMessage
# ...plus project imports: router, get_db, gestore_sessioni, trova_o_crea_utente_wallet

@router.post("/auth/web3/nonce")
async def genera_nonce(wallet_address: str):
    nonce = secrets.token_hex(16)
    await gestore_sessioni.redis.setex(
        f"web3_nonce:{wallet_address.lower()}",
        timedelta(minutes=5),
        nonce,
    )
    return {"nonce": nonce}

@router.post("/auth/web3/verify")
async def verifica_firma_web3(payload: dict, response: Response, db=Depends(get_db)):
    message = SiweMessage(message=payload["message"])
    message.verify(payload["signature"])
    wallet = message.address.lower()

    nonce_salvato = await gestore_sessioni.redis.get(f"web3_nonce:{wallet}")
    # NB: decode bytes first if the Redis client is not created with decode_responses=True
    if nonce_salvato != message.nonce:
        raise HTTPException(status_code=401, detail="Invalid or expired nonce")

    utente = await trova_o_crea_utente_wallet(db, wallet_address=wallet)
    id_sessione = await gestore_sessioni.crea_sessione(
        id_utente=utente.id,
        id_tenant=utente.tenant_id,
        auth_method="web3",
        wallet=wallet,
    )
    # Session identical to the classic one: RBAC and tenancy work without refactoring
    response.set_cookie(
        key="id_sessione_utente", value=id_sessione,
        httponly=True, samesite="lax",
    )
    return {"ok": True}
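The security of this flow hinges on the nonce lifecycle: 5-minute expiry and single use. An in-memory sketch of what the two routes do with Redis (in this sketch any verification attempt consumes the nonce, a defensible anti-replay choice; the store class and behavior are illustrative):

```python
import secrets
import time

NONCE_TTL = 300  # 5 minutes, matching the setex above

class NonceStore:
    def __init__(self):
        self._nonces = {}  # lowercased wallet -> (nonce, expires_at)

    def issue(self, wallet: str, now=None) -> str:
        now = time.time() if now is None else now
        nonce = secrets.token_hex(16)
        self._nonces[wallet.lower()] = (nonce, now + NONCE_TTL)
        return nonce

    def consume(self, wallet: str, nonce: str, now=None) -> bool:
        now = time.time() if now is None else now
        # Single-use: any attempt removes the nonce, burning it on failure too.
        entry = self._nonces.pop(wallet.lower(), None)
        return entry is not None and entry[1] >= now and entry[0] == nonce

store = NonceStore()
n = store.issue("0xABC", now=0)
assert store.consume("0xabc", n, now=100) is True    # case-insensitive wallet match
assert store.consume("0xabc", n, now=100) is False   # replay rejected
n2 = store.issue("0xabc", now=0)
assert store.consume("0xabc", n2, now=400) is False  # expired after 5 minutes
```

Redis gives you the expiry for free via `SETEX`; the single-use property comes from deleting the key as part of verification.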

4. Frontend component with wagmi

typescript
// frontend — Next.js + wagmi
import { useSignMessage } from "wagmi"

export function Web3Login({ address }: { address: string }) {
  const { signMessageAsync } = useSignMessage()

  const handleLogin = async () => {
    const { nonce } = await fetch(
      "/api/auth/web3/nonce?wallet_address=" + address,
      { method: "POST" }  // the backend route above is a POST expecting wallet_address
    ).then(r => r.json())

    // buildSiweMessage: project helper that assembles the EIP-4361 message string
    const message = buildSiweMessage({ address, nonce, domain: "admin.localhost" })
    const signature = await signMessageAsync({ message })

    await fetch("/api/auth/web3/verify", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ message, signature }),
    })
  }

  return <button onClick={handleLogin}>Sign in with Wallet</button>
}
Key point: the session created by the Web3 flow is structurally identical to classic login. All middleware around prendi_utente_corrente, RBAC and tenancy works without refactors because it uses only id_utente from Redis session data.

GO LIVE

Final launch checklist

Complete every item before going live. No step is optional.

  • APP_RELOAD=false and APP_WORKERS >= 2 in production
  • APP_SECRET_KEY generated with openssl rand -hex 32 (never use defaults)
  • APP_N8N_ENCRYPTION_KEY fixed before first n8n boot (immutable afterwards)
  • HTTPS enabled in compose.yaml and traefik/acme.json with chmod 600
  • Traefik dashboard protected with basic auth (never exposed in clear)
  • Stripe live keys and production webhook URL verified
  • LiteLLM virtual keys with tenant budget limits and EU models for GDPR
  • pg_data backups scheduled and restore procedure tested
  • Never run docker compose down -v in production (it wipes data)
  • Add /etc/hosts entries if .localhost does not resolve natively

You now have a complete operational baseline: architecture, development, production, automations, testing and go-live. The next step is turning this technical strength into positioning and commercial roadmap.