Scritorio’s managed AI path runs through a Cloudflare Worker instead of calling OpenAI directly from the desktop app. The desktop app still uses Supabase for account creation and login. Once the user is signed in, the desktop app sends the Supabase access token to Scritorio’s AI gateway. The gateway verifies that token, checks the user’s plan and usage in Supabase, calls OpenAI with Scritorio’s server-side key, logs usage back to Supabase, and returns a stable Scritorio response shape.

System Roles

Supabase is the backend database and account system:
  • account creation and login
  • Supabase Auth user identity
  • user profiles
  • billing customer, subscription, license, and entitlement records
  • managed AI plan and quota records
  • managed AI usage ledger
Cloudflare Workers are the public AI API layer:
  • HTTP endpoints for managed AI
  • Supabase JWT verification through the project JWKS endpoint
  • request validation
  • feature registry and prompt/model selection
  • quota enforcement
  • OpenAI Responses API calls
  • normalized app-facing responses
OpenAI is the first managed model provider. Its API key lives only in Worker secrets. Cloudflare AI Gateway may sit between the Worker and OpenAI when configured. Scritorio’s Worker remains the product gateway either way because it owns authentication, quotas, feature policy, and usage logging.

Editorial Board chat can request Flex execution first and retry with Fast when Flex is unavailable. The desktop UI can show "Reading with Flex...", a Flex success message, or a Flex-to-Fast retry message when the Appearance setting for routing detail is enabled. Hiding routing detail changes only the UI wording; it does not change the execution tier request or retry behavior.
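The Flex-first retry policy described above can be sketched as a small helper. This is an illustrative sketch, not the Worker's actual code; `runModelCall` stands in for the real OpenAI call, and the exact failure signal for "Flex unavailable" is an assumption.

```typescript
// Hypothetical sketch of the Flex-first, Fast-fallback policy.
type Tier = "flex" | "fast";

interface TierResult {
  tier: Tier;            // tier that actually produced the output
  retriedFromFlex: boolean; // true when Flex failed and Fast was used
  output: string;
}

async function callWithFlexFallback(
  runModelCall: (tier: Tier) => Promise<string>,
): Promise<TierResult> {
  try {
    // First attempt: request Flex execution.
    const output = await runModelCall("flex");
    return { tier: "flex", retriedFromFlex: false, output };
  } catch {
    // Flex unavailable: retry once with Fast.
    const output = await runModelCall("fast");
    return { tier: "fast", retriedFromFlex: true, output };
  }
}
```

Under this sketch, the UI messages (Flex success vs. Flex-to-Fast retry) would be derived from `retriedFromFlex`; hiding routing detail only changes which wording is shown, matching the behavior described above.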

Runtime Flow

Endpoints

The first Worker endpoints are:
POST /v1/ai/responses
GET /v1/ai/usage/me
The intended production base URL is:
https://api.scritorio.studio
POST /v1/ai/responses accepts a Scritorio feature request:
{
  "feature": "chapter_summary",
  "projectId": "local-project-id",
  "documentId": "chapter-1",
  "input": {
    "chapterTitle": "Chapter 1",
    "chapterText": "..."
  }
}
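The request body above can be typed and validated on the Worker side. The field names come from the example; the validation rules themselves are an assumption about what "request validation" covers.

```typescript
// Shape of the POST /v1/ai/responses body shown above.
interface FeatureRequest {
  feature: string;
  projectId: string;
  documentId: string;
  input: Record<string, unknown>;
}

const SUPPORTED_FEATURES = new Set(["chapter_summary", "act_summary"]);

// Hypothetical validator; the real Worker's checks may differ.
function parseFeatureRequest(body: unknown): FeatureRequest {
  const b = body as Partial<FeatureRequest> | null;
  if (!b || typeof b !== "object") throw new Error("body must be an object");
  if (typeof b.feature !== "string" || !SUPPORTED_FEATURES.has(b.feature)) {
    throw new Error("unsupported feature");
  }
  if (typeof b.projectId !== "string" || typeof b.documentId !== "string") {
    throw new Error("projectId and documentId are required");
  }
  if (typeof b.input !== "object" || b.input === null) {
    throw new Error("input must be an object");
  }
  return b as FeatureRequest;
}
```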
The app does not send raw OpenAI model names, tools, temperature, or provider request bodies; the Worker chooses those details from the server-side feature registry. For advisor chat, the app also keeps local debug and evidence surfaces separate from provider payloads: prompt previews and waiting-bubble routing status are author-facing diagnostics, not additional provider instructions.

The first supported features are:
  • chapter_summary
  • act_summary
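A server-side feature registry of this kind can be sketched as a simple map. The model ids and prompt text below are placeholders, not the actual registry contents; `resolveFeature` is a hypothetical name.

```typescript
// Illustrative server-side feature registry. Model ids and prompts are
// placeholders; the real registry's contents are not shown in this doc.
interface FeatureConfig {
  model: string;
  systemPrompt: string;
}

const FEATURE_REGISTRY: Record<string, FeatureConfig> = {
  chapter_summary: {
    model: "placeholder-model-id",
    systemPrompt:
      "Summarize the chapter as markdown under a '## Chapter Summary' heading.",
  },
  act_summary: {
    model: "placeholder-model-id",
    systemPrompt: "Summarize the act as markdown.",
  },
};

// The Worker, never the client, resolves feature -> provider settings.
function resolveFeature(feature: string): FeatureConfig {
  const config = FEATURE_REGISTRY[feature];
  if (!config) throw new Error(`unknown feature: ${feature}`);
  return config;
}
```

Keeping this map server-side is what lets the client request Scritorio features without ever naming models, tools, or temperatures.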
The response is normalized:
{
  "id": "usage-event-or-request-id",
  "feature": "chapter_summary",
  "output": {
    "markdown": "## Chapter Summary\n..."
  },
  "usage": {
    "inputTokens": 100,
    "outputTokens": 20,
    "cachedInputTokens": 0,
    "estimatedCostUsd": 0
  }
}
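The normalized shape above maps directly to a TypeScript type. The cost estimator below is purely illustrative: the per-million-token rates are invented for the example, and real pricing depends on the model the registry selects.

```typescript
// Types mirroring the normalized response shown above.
interface AiUsage {
  inputTokens: number;
  outputTokens: number;
  cachedInputTokens: number;
  estimatedCostUsd: number;
}

interface AiResponse {
  id: string;
  feature: string;
  output: { markdown: string };
  usage: AiUsage;
}

// Hypothetical per-million-token rates, for illustration only.
const RATES = { inputPerM: 0.15, cachedInputPerM: 0.075, outputPerM: 0.6 };

function estimateCostUsd(u: Omit<AiUsage, "estimatedCostUsd">): number {
  return (
    (u.inputTokens * RATES.inputPerM +
      u.cachedInputTokens * RATES.cachedInputPerM +
      u.outputTokens * RATES.outputPerM) / 1_000_000
  );
}
```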
GET /v1/ai/usage/me returns the signed-in user’s current managed AI period, cap, hard limit, estimated used amount, remaining amount, and token totals.
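A possible shape for that usage payload, with the remaining amount derived from the hard limit, is sketched below. The JSON field names are assumptions based on the description above, not the endpoint's actual contract.

```typescript
// Hypothetical GET /v1/ai/usage/me payload; exact keys are assumptions.
interface UsageSummary {
  periodStart: string;
  periodEnd: string;
  includedCapUsd: number;
  hardLimitUsd: number;
  estimatedUsedUsd: number;
  remainingUsd: number;
  inputTokens: number;
  outputTokens: number;
}

// Remaining spend against the hard limit, clamped so it never goes negative.
function remainingUsd(hardLimitUsd: number, estimatedUsedUsd: number): number {
  return Math.max(0, hardLimitUsd - estimatedUsedUsd);
}
```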

Database Tables

Managed AI uses two Supabase tables:
  • ai_user_plans
  • ai_usage_events
ai_user_plans stores the user’s current managed AI plan, status, included monthly cap, hard limit, and usage period. ai_usage_events is an append-only usage ledger for managed AI calls; each row stores the user id, local project/document identifiers, feature id, provider, model, the OpenAI request id when available, token counts, estimated cost, status, and error code.

Both tables have row-level security enabled: authenticated users can read only their own rows, and the Worker writes rows with the Supabase service role key, which must never be shipped to the desktop app. For development, the Worker creates a dev_managed_ai plan for an authenticated user on that user’s first managed AI call or usage request.
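A ledger row built from a request can be sketched as below. The column names are assumptions modeled on the fields listed above; the point of the sketch is that the request `input` (the manuscript text) is never copied into the row.

```typescript
// Hypothetical ai_usage_events row shape; exact column names are assumptions.
interface AiUsageEvent {
  user_id: string;               // derived from the verified JWT sub
  project_id: string;
  document_id: string;
  feature: string;
  provider: string;
  model: string;
  openai_request_id: string | null;
  input_tokens: number;
  output_tokens: number;
  estimated_cost_usd: number;
  status: "ok" | "error";
  error_code: string | null;
}

// Builds a ledger row from a request and its usage. req.input (manuscript
// text) is deliberately never copied into the row.
function buildUsageEvent(
  userId: string,
  req: { feature: string; projectId: string; documentId: string; input: unknown },
  model: string,
  usage: { inputTokens: number; outputTokens: number; estimatedCostUsd: number },
  requestId: string | null,
): AiUsageEvent {
  return {
    user_id: userId,
    project_id: req.projectId,
    document_id: req.documentId,
    feature: req.feature,
    provider: "openai",
    model,
    openai_request_id: requestId,
    input_tokens: usage.inputTokens,
    output_tokens: usage.outputTokens,
    estimated_cost_usd: usage.estimatedCostUsd,
    status: "ok",
    error_code: null,
  };
}
```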

Worker Configuration

The desktop app needs only public values:
PUBLIC_SUPABASE_URL="https://pyglmeehohtpjimgiuyz.supabase.co"
PUBLIC_SUPABASE_PUBLISHABLE_KEY="..."
PUBLIC_SCRITORIO_AI_GATEWAY_URL="https://..."
The Worker needs secrets:
OPENAI_API_KEY="..."
SUPABASE_URL="https://pyglmeehohtpjimgiuyz.supabase.co"
SUPABASE_SERVICE_ROLE_KEY="..."
SUPABASE_JWKS_URL="https://pyglmeehohtpjimgiuyz.supabase.co/auth/v1/.well-known/jwks.json"
Optional Cloudflare AI Gateway routing:
CLOUDFLARE_AI_GATEWAY_BASE_URL="https://gateway.ai.cloudflare.com/v1/<account-id>/<gateway-id>/openai"
CLOUDFLARE_AI_GATEWAY_TOKEN="..."
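Because the AI Gateway variables are optional, the Worker's provider routing can reduce to a base-URL choice. This is a sketch; `openAiBaseUrl` is a hypothetical helper name.

```typescript
// Hypothetical routing helper: when a Cloudflare AI Gateway base URL is
// configured, OpenAI calls go through it; otherwise straight to the
// OpenAI API. Either way the Scritorio Worker stays the product gateway.
function openAiBaseUrl(env: { CLOUDFLARE_AI_GATEWAY_BASE_URL?: string }): string {
  return env.CLOUDFLARE_AI_GATEWAY_BASE_URL ?? "https://api.openai.com/v1";
}
```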

Security Rules

  • The desktop app never receives OPENAI_API_KEY.
  • The desktop app never receives SUPABASE_SERVICE_ROLE_KEY.
  • The Worker never trusts a client-provided user_id.
  • User identity is derived from the verified Supabase JWT sub.
  • The client requests Scritorio features, not arbitrary provider settings.
  • Manuscript text is not stored in the usage ledger.
  • Usage rows store metadata and token/cost accounting, not full prompts.
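The "identity from the verified JWT sub" rule can be illustrated with a minimal payload decoder. Important hedge: decoding is not verification; the real Worker must verify the token's signature against the Supabase JWKS endpoint (for example with a JOSE library) before trusting any claim. The function names here are illustrative.

```typescript
// Minimal sketch: read the payload of a JWT. Decoding alone is NOT
// verification -- the Worker verifies the signature via the Supabase
// JWKS endpoint before trusting any claim in the payload.
function decodeJwtPayload(token: string): Record<string, unknown> {
  const parts = token.split(".");
  if (parts.length !== 3) throw new Error("malformed JWT");
  // Convert base64url to base64, then decode the JSON payload.
  const b64 = parts[1].replace(/-/g, "+").replace(/_/g, "/");
  return JSON.parse(Buffer.from(b64, "base64").toString("utf8"));
}

// Identity comes from the verified token's sub claim, never from a
// client-provided user_id field.
function userIdFromPayload(payload: Record<string, unknown>): string {
  const sub = payload.sub;
  if (typeof sub !== "string" || sub.length === 0) {
    throw new Error("token has no sub claim");
  }
  return sub;
}
```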

Current Implementation

The first implementation lives in:
apps/ai-gateway/
apps/desktop/src/lib/account/managed-ai.ts
supabase/migrations/20260504143000_managed_ai_gateway.sql
The root scripts include:
bun run check:ai-gateway
bun run test:ai-gateway