This is the full developer documentation for Zeroback
# Zeroback
> Open-source real-time backend for Cloudflare. Type-safe functions, reactive queries, your infrastructure.

## Developer Experience: Schema to Screen in Three Steps

1. **Define your schema.** Declare your tables, fields, and indexes in TypeScript. Zeroback generates types and SQL from your schema automatically.
2. **Write functions.** Queries, mutations, and actions are plain TypeScript functions. They run on Cloudflare with full type safety and transactional guarantees.
3. **Use from React.** Bind your UI to live data with a single hook. Queries re-render automatically when the underlying data changes — no polling or refetching.
zeroback/schema.ts
```typescript
import { defineSchema, defineTable, v } from "@zeroback/server"

export const schema = defineSchema({
  messages: defineTable({
    channel: v.string(),
    author: v.string(),
    body: v.string(),
  }).index("by_channel", ["channel"]),
})
```
zeroback/messages.ts
```typescript
import { query, v } from "./_generated/server"

export const list = query({
  args: { channel: v.string() },
  handler: async (ctx, args) => {
    return await ctx.db
      .query("messages")
      .withIndex("by_channel", (q) => q.eq("channel", args.channel))
      .collect()
  },
})
```
src/Chat.tsx
```tsx
import { useQuery, useMutation } from "@zeroback/react"
import { api } from "../zeroback/_generated/api"

function Chat({ channel }: { channel: string }) {
  const messages = useQuery(api.messages.list, { channel })
  const send = useMutation(api.messages.send)
  // Auto-updates when anyone sends a message
  // No polling. No refetch. No boilerplate.
}
```
## Why Zeroback: Everything You Need, Nothing in the Way
### ⚡ Real-Time Queries
Queries auto-subscribe over WebSocket. When data changes, affected clients update instantly.

### 🔒 Type-Safe End-to-End
Schema and functions generate a typed API object. From database to React hook — zero gaps, zero guesswork.

### 🏗️ Your Infrastructure
Runs on your Cloudflare account. Workers + Durable Objects + SQLite. Your data never leaves your control.

### 📡 Offline-First
IndexedDB persistence. Your app renders from cache instantly, even before the WebSocket connects.

### 🧩 Batteries Included
File storage, scheduled jobs, cron, full-text search, pagination. Everything you need, nothing you don't.

### ⚙️ One Command Setup
`npx @zeroback/cli init` scaffolds your project. `zeroback dev` starts the server. You're building in under two minutes.
## Architecture: One Durable Object, Everything You Need

A single Durable Object gives you SQLite for persistence, WebSockets for real-time, and strong consistency by default. No distributed coordination. No cache invalidation.

Request flow: React client (`useQuery` · `useMutation`) → WebSocket → Worker (auth · CORS · routing) → internal binding → Durable Object (SQLite · subscriptions · OCC).

* **< 50 ms** query latency
* **\~10 GB** SQLite storage
* **1,000** concurrent connections
* **MIT** licensed, open source
## Start building in two minutes
`npx @zeroback/cli init my-app`
[Read the docs](/getting-started/) [View on GitHub](https://github.com/zerodeploy-dev/zeroback)
# Authentication
> Bring your own authentication to Zeroback. Verify users in your Worker before they reach the Durable Object.
Zeroback does not (yet) include a built-in auth system. Instead, you **bring your own authentication** — verify users in your Worker’s `fetch()` handler before forwarding requests to the Durable Object.
**Built-in auth is coming.** We’re working on adding built-in authentication to Zeroback — session management, OAuth, and magic links, all running inside your Durable Object with zero external dependencies. Until then, the BYOA (“bring your own auth”) pattern described here is the recommended approach and will continue to be supported.
This keeps auth decoupled from the framework: use any provider (Clerk, Auth0, Lucia, WorkOS, cookie sessions, JWTs) and any verification strategy. Zeroback doesn’t care how you authenticate — it only needs the Worker to gate access.
## How It Works
[Section titled “How It Works”](#how-it-works)
```plaintext
┌────────────┐                ┌───────────────────┐                ┌────────────────┐
│            │   cookie/JWT   │                   │    forward     │                │
│  Browser   │ ─────────────→ │  Worker fetch()   │ ─────────────→ │  ZerobackDO    │
│            │                │   verify auth     │    request     │   (trusted)    │
│            │                │ reject or forward │                │                │
└────────────┘                └───────────────────┘                └────────────────┘
```
The Worker acts as a gateway. Unauthenticated requests get a `401` and never reach the Durable Object. Authenticated requests are forwarded as-is.
## Setup
[Section titled “Setup”](#setup)
### 1. Write a verify function
[Section titled “1. Write a verify function”](#1-write-a-verify-function)
Create an `auth.ts` that verifies the incoming request. This example validates a session cookie against an external API via a [service binding](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/):
auth.ts
```typescript
import type { Env } from "./index"

export type AuthResult =
  | { ok: true; userId: string }
  | { ok: false; error: string }

export async function verifyAuth(
  request: Request,
  env: Env
): Promise<AuthResult> {
  const cookie = request.headers.get("Cookie")
  if (!cookie) {
    return { ok: false, error: "Not authenticated" }
  }
  try {
    // Call your auth service — service binding, JWT verify, etc.
    const res = await env.AUTH_API.fetch(
      new Request("https://auth.example.com/me", {
        headers: { Cookie: cookie },
      })
    )
    if (!res.ok) {
      return { ok: false, error: "Not authenticated" }
    }
    const data = (await res.json()) as { id: string }
    return { ok: true, userId: data.id }
  } catch {
    return { ok: false, error: "Auth service unavailable" }
  }
}
```
**Other verification strategies:**
* **JWT**: Use `jose` or `@clerk/backend` to verify a JWT from the `Authorization` header. No external call needed.
* **Session cookie**: Look up a session token in KV or D1.
* **Service binding**: Forward the cookie to another Worker that owns auth (zero network roundtrip — shown above).
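For the session-cookie strategy, here is a minimal sketch of a KV-backed lookup. The `SESSIONS` binding, the `session:<token>` key layout, and the cookie name are assumptions for illustration, not Zeroback APIs:

```typescript
// Sketch: validate a session cookie against a Workers KV namespace.
// Assumes your login flow writes `session:<token>` → userId into KV;
// adjust the binding and key layout to your setup.
interface KVLike {
  get(key: string): Promise<string | null>;
}

export type AuthResult =
  | { ok: true; userId: string }
  | { ok: false; error: string };

// Extract one cookie value from a Cookie header.
export function getCookie(header: string | null, name: string): string | null {
  if (!header) return null;
  for (const part of header.split(";")) {
    const [key, ...rest] = part.trim().split("=");
    if (key === name) return rest.join("=");
  }
  return null;
}

export async function verifyAuth(
  request: Request,
  sessions: KVLike
): Promise<AuthResult> {
  const token = getCookie(request.headers.get("Cookie"), "session");
  if (!token) return { ok: false, error: "Not authenticated" };
  const userId = await sessions.get(`session:${token}`);
  if (!userId) return { ok: false, error: "Session expired" };
  return { ok: true, userId };
}
```

KV reads are fast and cached at the edge, which keeps this check cheap on every request.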
### 2. Gate requests in your Worker
[Section titled “2. Gate requests in your Worker”](#2-gate-requests-in-your-worker)
Your Worker’s `fetch()` handler verifies auth before forwarding to the Durable Object:
index.ts
```typescript
import { createZerobackDO } from "@zeroback/server/runtime"
import { functions, schema, httpRouter, cronJobsDef } from "./zeroback/_generated/manifest"
import { verifyAuth } from "./auth"

export const ZerobackDO = createZerobackDO({ functions, schema, httpRouter, cronJobsDef })

export interface Env {
  ZEROBACK_DO: DurableObjectNamespace
  AUTH_API: Fetcher // service binding to your auth worker
}

function getDOStub(env: Env): DurableObjectStub {
  const doId = env.ZEROBACK_DO.idFromName("default")
  return env.ZEROBACK_DO.get(doId)
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url)

    // Public routes (no auth)
    if (url.pathname === "/health") {
      return new Response("ok")
    }

    // Verify auth
    const auth = await verifyAuth(request, env)
    if (!auth.ok) {
      return new Response(
        JSON.stringify({ error: auth.error }),
        { status: 401, headers: { "Content-Type": "application/json" } }
      )
    }

    // Forward to Zeroback DO
    const doStub = getDOStub(env)
    return doStub.fetch(request)
  },
}
```
This applies to **all** Zeroback traffic — WebSocket connections (`/ws`), the SSR query endpoint (`POST /query`), HTTP actions, and function calls. If the auth check fails, the request never reaches the Durable Object.
SSR and `preloadQuery`
`preloadQuery` calls `POST /query` from your server-side loader. Because the request originates on the server (not from a browser), it won’t carry the user’s cookies or tokens automatically. To authenticate SSR queries, forward a token from the loader context:
```ts
export const loader = createServerFn().handler(async ({ context }) => {
  const preloaded = await preloadQuery(
    ZEROBACK_URL,
    api.tasks.list,
    {},
    { headers: { Authorization: `Bearer ${context.token}` } } // forward auth
  )
  return { preloaded }
})
```
Your Worker’s `verifyAuth` function will then receive and validate that token before forwarding to the DO.
Note: the optional `headers` parameter for `preloadQuery` is not yet implemented — this is a preview of the planned API. For now, `preloadQuery` is unauthenticated, matching the current WebSocket transport.
### 3. Add CORS (if your frontend is on a different origin)
[Section titled “3. Add CORS (if your frontend is on a different origin)”](#3-add-cors-if-your-frontend-is-on-a-different-origin)
If your frontend and backend are on different origins, add CORS headers:
```typescript
function corsHeaders(origin: string): Record<string, string> {
  return {
    "Access-Control-Allow-Origin": origin,
    "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
    "Access-Control-Allow-Headers": "Content-Type, Authorization",
    "Access-Control-Allow-Credentials": "true",
  }
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const origin = request.headers.get("Origin") ?? ""

    // CORS preflight
    if (request.method === "OPTIONS") {
      return new Response(null, { status: 204, headers: corsHeaders(origin) })
    }

    // ... auth check ...

    const response = await doStub.fetch(request)

    // Add CORS headers to the response
    const newResponse = new Response(response.body, response)
    for (const [key, value] of Object.entries(corsHeaders(origin))) {
      newResponse.headers.set(key, value)
    }
    return newResponse
  },
}
```
**Important:** Set `Access-Control-Allow-Credentials: "true"` so cookies are sent with cross-origin requests. Note that when credentials are allowed, `Access-Control-Allow-Origin` must echo the specific requesting origin; browsers reject the wildcard `*` for credentialed requests. Your frontend must also set `credentials: "include"` on fetch calls (the Zeroback client does this automatically for WebSocket connections).
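On the frontend side, a credentialed request looks like the sketch below. The origin and path are placeholders, not part of Zeroback's API — substitute your Worker URL and whichever authenticated endpoint you expose:

```typescript
// Sketch: cross-origin requests only carry the session cookie when the
// caller opts in with credentials: "include". ZEROBACK_URL and the path
// argument are placeholders for your own deployment.
const ZEROBACK_URL = "https://my-worker.example.com";

export async function fetchWithSession(path: string): Promise<Response> {
  return fetch(`${ZEROBACK_URL}${path}`, {
    credentials: "include", // send cookies across the origin boundary
  });
}
```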
## JWT Verification Example
[Section titled “JWT Verification Example”](#jwt-verification-example)
If your auth provider issues JWTs (Clerk, Auth0, Supabase), you can verify them directly in the Worker without an external call:
auth.ts
```typescript
import { jwtVerify, createRemoteJWKSet } from "jose"

// Same AuthResult union as in step 1
export type AuthResult =
  | { ok: true; userId: string }
  | { ok: false; error: string }

const JWKS = createRemoteJWKSet(
  new URL("https://your-app.clerk.accounts.dev/.well-known/jwks.json")
)

export async function verifyAuth(request: Request): Promise<AuthResult> {
  const token = request.headers.get("Authorization")?.replace("Bearer ", "")
  if (!token) {
    return { ok: false, error: "No token" }
  }
  try {
    const { payload } = await jwtVerify(token, JWKS, {
      issuer: "https://your-app.clerk.accounts.dev",
      audience: "your-app",
    })
    return { ok: true, userId: payload.sub! }
  } catch {
    return { ok: false, error: "Invalid token" }
  }
}
```
## Multi-Tenant Routing
[Section titled “Multi-Tenant Routing”](#multi-tenant-routing)
For multi-tenant apps, you can use the authenticated user to route to different Durable Objects:
```typescript
function getDOStub(env: Env, tenantId: string): DurableObjectStub {
  const doId = env.ZEROBACK_DO.idFromName(tenantId)
  return env.ZEROBACK_DO.get(doId)
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Here verifyAuth is assumed to return a tenantId alongside userId
    const auth = await verifyAuth(request, env)
    if (!auth.ok) {
      return new Response("Unauthorized", { status: 401 })
    }

    // Route to tenant-specific DO
    const doStub = getDOStub(env, auth.tenantId)
    return doStub.fetch(request)
  },
}
```
Each tenant gets its own Durable Object with isolated SQLite storage. Auth determines which DO handles the request.
## Summary
[Section titled “Summary”](#summary)
| Concern | Where it lives |
| -------------------------------- | ----------------------------------- |
| Authentication (who are you?) | Worker `fetch()` — your code |
| Transport (WebSocket, HTTP) | Zeroback runtime |
| Authorization (can you do this?) | Your Zeroback functions |
| Data isolation | DO routing (single or multi-tenant) |
The Worker is the security boundary. Everything behind it — queries, mutations, subscriptions — is trusted internal traffic.
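One pattern that builds on this boundary (a sketch of a common gateway technique, not a documented Zeroback API): have the Worker stamp the verified identity onto the forwarded request, so everything behind the boundary can trust a header the client could never set itself. The `X-User-Id` header name is an assumption — any name works, as long as the Worker strips client-supplied values first.

```typescript
// Sketch: attach the verified user id as a trusted header when forwarding.
export function withIdentity(request: Request, userId: string): Request {
  const headers = new Headers(request.headers);
  headers.delete("X-User-Id"); // never trust a client-supplied value
  headers.set("X-User-Id", userId);
  return new Request(request, { headers });
}
```

You would then forward with `doStub.fetch(withIdentity(request, auth.userId))` instead of passing the request through unchanged.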
# Why Zeroback
> I built Zeroback — an open-source, self-hosted Convex alternative running on Cloudflare Durable Objects. Real-time queries, type-safe codegen, your infrastructure.
*March 27, 2026* · By [Ran Yefet](https://x.com/ranyefet)
Cloudflare has everything you need to build a backend. Workers for compute. D1 for SQL. R2 for storage. Durable Objects for stateful coordination. I built an entire [hosting platform](https://zerodeploy.dev) on Cloudflare without a single external service.
But when I needed a real-time backend — database, subscriptions, type-safe functions — there was nothing that tied it all together. The infrastructure exists. The developer experience doesn’t.
If you want a backend today, you leave the ecosystem. Bolt on Supabase. Add Firebase. Your frontend runs on Cloudflare, your backend runs somewhere else. Two vendors, two bills, your data in someone else’s cloud.
I wanted an open-source, self-hosted real-time backend for Cloudflare. So I built one.
## The first attempt
[Section titled “The first attempt”](#the-first-attempt)
I knew [Convex](https://convex.dev). I loved the developer experience — reactive queries, type-safe functions, real-time by default. But my first attempt at a Cloudflare backend wasn’t Convex-inspired at all. It was Supabase-inspired.
I built a PostgREST-style API layer for D1 — Cloudflare’s SQLite database. REST endpoints, query parameters for filtering, the whole thing. It worked. You could CRUD data, do joins, run aggregates. Similar goals to what Zeroback is today: database, real-time, storage, auth — all on Cloudflare.
But it didn’t feel right.
The REST API was limited compared to writing actual query functions. Type safety was a constant concern — query params aren’t typed, and the gap between what you write and what the database returns was too wide. And real-time was the real problem. D1 is a serverless database — there’s no persistent connection, no way to push changes to clients. Bolting real-time onto D1 felt like fighting the architecture.
I shelved it. But the idea didn’t go away.
## The Durable Objects discovery
[Section titled “The Durable Objects discovery”](#the-durable-objects-discovery)
Then I found something I’d overlooked. Durable Objects come with built-in SQLite storage — the same SQLite that D1 is built on. And they support WebSockets natively.
That’s when it clicked.
A single Durable Object gives you a long-lived, single-threaded process with SQLite for persistence and WebSocket connections to every client. Queries, mutations, and subscriptions — all in one place. No distributed coordination. No cache invalidation across nodes. Strong consistency by default. And real-time isn’t bolted on — it’s the native model. The data and the connections live together.
I wasn’t sure it was possible at first. Could you build a real-time database with Convex’s developer experience on top of a single Durable Object? Would the performance hold up? Would the programming model work?
I started experimenting. And it worked.
## What Zeroback is
[Section titled “What Zeroback is”](#what-zeroback-is)
Zeroback is an open-source Convex alternative that runs on Cloudflare Durable Objects. You define a schema:
```typescript
import { defineSchema, defineTable, v } from "@zeroback/server";

export const schema = defineSchema({
  messages: defineTable({
    channel: v.string(),
    author: v.string(),
    body: v.string(),
  }).index("by_channel", ["channel"]),
});
```
You write functions — plain TypeScript, fully typed:
```typescript
export const list = query({
  args: { channel: v.string() },
  handler: async (ctx, args) => {
    return await ctx.db
      .query("messages")
      .withIndex("by_channel", (q) => q.eq("channel", args.channel))
      .collect();
  },
});

export const send = mutation({
  args: { channel: v.string(), author: v.string(), body: v.string() },
  handler: async (ctx, args) => {
    await ctx.db.insert("messages", args);
  },
});
```
You call them from React:
```tsx
function Chat({ channel }) {
  const messages = useQuery(api.messages.list, { channel });
  const sendMessage = useMutation(api.messages.send);
  // messages auto-update when anyone sends a new message.
  // no refetch, no polling, no WebSocket boilerplate.
}
```
The client connects over WebSocket. Queries auto-subscribe. When a mutation changes data, Zeroback figures out which queries are affected and pushes updated results to the right clients.
Not table-level invalidation — query-level. Posting to `#random` doesn’t trigger re-execution of subscriptions watching `#general`. That was the hardest problem to solve, and the one I’m most proud of.
Optimistic concurrency control handles conflicts automatically. If two mutations touch the same data, the second one retries. The client never sees a conflict. IndexedDB persistence means your app renders instantly from cache, even before the WebSocket connects.
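The invalidation mechanism can be illustrated with a toy model (a sketch for intuition, not Zeroback's actual implementation): record each subscription's read set and re-run only the queries whose reads overlap a mutation's writes.

```typescript
// Sketch of query-level invalidation: each subscription records which keys
// it read (e.g. index ranges like "messages/by_channel/general"); a
// committed write re-runs only the subscriptions that touched those keys.
type ReadKey = string;

interface Subscription {
  id: string;
  reads: Set<ReadKey>;
  rerun: () => void; // re-execute the query and push the new result
}

export class Invalidator {
  private subs = new Map<string, Subscription>();

  register(sub: Subscription): void {
    this.subs.set(sub.id, sub);
  }

  // Called after a mutation commits, with the keys it wrote.
  notifyWrite(written: ReadKey[]): string[] {
    const affected: string[] = [];
    for (const sub of this.subs.values()) {
      if (written.some((key) => sub.reads.has(key))) {
        sub.rerun();
        affected.push(sub.id);
      }
    }
    return affected;
  }
}
```

In this model a write to `messages/by_channel/random` never re-runs a subscription that only read `messages/by_channel/general` — table-level schemes re-run both.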
## What’s real today
[Section titled “What’s real today”](#whats-real-today)
This isn’t a prototype. It’s 7 packages, a CLI, React and Solid.js bindings, and over 300 tests:
* **Real-time subscriptions** with query-level invalidation and diff suppression
* **Type-safe codegen** — schema and functions generate a typed API object, end-to-end from database to UI
* **Indexed queries**, compound indexes, full-text search, cursor-based pagination
* **File storage**, scheduled jobs, cron — backed by R2 and Durable Object alarms
* **Offline support** — IndexedDB persistence for instant renders and offline reads
I’m using it in production. I built an email client on Zeroback — real-time inbox with threads, labels, attachments, AI-powered classification. When an email arrives, every connected client sees it instantly. The entire backend is a single Durable Object.
## What’s honest
[Section titled “What’s honest”](#whats-honest)
Zeroback runs on a single Durable Object. That means real limits:
* **\~10 GB storage** (SQLite in a DO)
* **\~1,000 concurrent connections** (self-imposed, to keep latency predictable)
* **Single-threaded execution**
This is not a database for the next billion-user app. It’s a backend for apps where a team, a project, or a tenant needs real-time data with great DX and full control. Side projects, internal tools, SaaS per-tenant backends, collaborative apps, real-time dashboards.
If you need Postgres features, multi-region, or unlimited scale — use Supabase or Convex. They’re more mature and feature-complete. I’m not pretending otherwise.
But if you’re on Cloudflare and you’ve been waiting for a backend that feels right — or if you want a self-hosted Convex alternative on your own infrastructure — that’s what Zeroback is for.
## What’s next
[Section titled “What’s next”](#whats-next)
Authentication is the biggest missing piece. Right now you bring your own (Clerk, Auth0, etc.). Built-in auth — email/password, OAuth, magic links — is the top priority. I’ll build it when users confirm they need it, not before.
The code is open source, MIT licensed, and on [GitHub](https://github.com/zerodeploy-dev/zeroback).
## How I built it
[Section titled “How I built it”](#how-i-built-it)
I built Zeroback in a week. Entirely with Claude Code.
I know how that sounds. A week for 7 packages, a CLI, real-time subscriptions, codegen, optimistic concurrency, offline support, React hooks, Solid bindings, 300+ tests.
It’s true, and it’s the thing that made me take the leap from engineering manager back to builder. I’ve been managing for four years. Good at it. But I missed building. The kind of building where you close your laptop at midnight and can’t wait to open it again the next day.
LLMs didn’t give me the vision for Zeroback — I’d been thinking about real-time backends and Cloudflare’s potential for a long time. What they gave me is leverage. The kind of leverage that lets a solo founder with a full-time job and two kids build at a pace that used to require a team.
I’m not going to pretend Claude Code wrote perfect code on the first try. It didn’t. I designed the architecture, made every trade-off decision, and debugged the hard problems. I tested everything — over 300 tests across the codebase. But the velocity is real. And it changes what’s possible for one person working after hours.
`npx @zeroback/cli init` and you’ll have a working app in under two minutes.
I’m building this in public. If you’re on Cloudflare and you’ve been waiting for a backend that feels right — [give it a try](https://github.com/zerodeploy-dev/zeroback). And if you want to follow the journey: [@ranyefet on X](https://x.com/ranyefet).
# CLI
> Scaffold, develop, deploy, and test your Zeroback backend from the terminal.
The `zeroback` CLI manages development, code generation, and deployment of your Zeroback application.
## Commands
[Section titled “Commands”](#commands)
### `zeroback init [dir]`
[Section titled “zeroback init \[dir\]”](#zeroback-init-dir)
Scaffold a new Zeroback project.
```plaintext
zeroback init [dir]
```
| Argument | Default | Description |
| -------- | ------- | ----------------- |
| `dir` | `"."` | Project directory |
**Creates:**
| File | Description |
| ------------------------------- | ------------------------------------------------------------ |
| `zeroback/schema.ts` | Starter schema with a `tasks` table |
| `zeroback/tasks.ts` | Example query and mutation functions |
| `zeroback/_generated/server.ts` | Stub file so imports resolve before first codegen |
| `wrangler.toml` | Cloudflare Workers configuration (if not present) |
| `.zeroback/entry.ts` | Worker entry point — imports manifest and wires to runtime |
| `.gitignore` | Ignores `.zeroback/*` except `entry.ts` (creates or appends) |
Skips scaffolding if the `zeroback/` directory already exists.
**Example:**
```bash
mkdir my-app && cd my-app
npm init -y
npx @zeroback/cli init
```
### `zeroback dev [functionsDir]`
[Section titled “zeroback dev \[functionsDir\]”](#zeroback-dev-functionsdir)
Start the development server with hot reload.
```plaintext
zeroback dev [functionsDir]
```
| Argument | Default | Description |
| -------------- | -------------- | -------------------------------- |
| `functionsDir` | `"./zeroback"` | Path to your functions directory |
**Behavior:**
1. Scaffolds `.zeroback/entry.ts` if missing (worker entry point)
2. Analyzes schema and functions, generates types and `_generated/manifest.ts`
3. Starts Wrangler dev server on **port 8788**
4. Watches `zeroback/` for changes (ignoring `_generated/` and `node_modules/`)
5. On file changes: re-analyzes, re-generates manifest
**Generated files:**
| File | Description |
| ---------------------------------- | ------------------------------------------------------ |
| `zeroback/_generated/api.ts` | Typed function references (`api.tasks.create`, etc.) |
| `zeroback/_generated/server.ts` | Typed function factories bound to your `DataModel` |
| `zeroback/_generated/dataModel.ts` | Standalone `DataModel` type |
| `zeroback/_generated/manifest.ts` | Function registrations, schema, HTTP router, cron jobs |
**Example:**
```bash
npx @zeroback/cli dev
# or with a custom functions directory
npx @zeroback/cli dev ./src/zeroback
```
### `zeroback deploy [functionsDir] [--dry-run] [-- wranglerArgs...]`
[Section titled “zeroback deploy \[functionsDir\] \[--dry-run\] \[-- wranglerArgs...\]”](#zeroback-deploy-functionsdir---dry-run----wranglerargs)
Build and deploy to Cloudflare.
```plaintext
zeroback deploy [functionsDir] [--dry-run] [-- wranglerArgs...]
```
| Argument | Default | Description |
| -------------- | -------------- | --------------------------------------------------- |
| `functionsDir` | `"./zeroback"` | Path to your functions directory |
| `--dry-run` | `false` | Run codegen only, skip wrangler deploy |
| `-- args...` | — | Extra arguments passed through to `wrangler deploy` |
**Prerequisites:**
You must be authenticated with Cloudflare before deploying. Either:
* Run `npx wrangler login` to log in via your browser (recommended for local development)
* Set the `CLOUDFLARE_API_TOKEN` environment variable (recommended for CI/CD)
The deploy command will check authentication before deploying and provide guidance if you’re not logged in.
**Behavior:**
1. Runs codegen (same as `zeroback dev` build step)
2. If `--dry-run`: stops after codegen
3. Verifies Cloudflare authentication
4. Runs `wrangler deploy` with any extra arguments
Requires `wrangler.toml` at the project root.
**Examples:**
```bash
# First-time setup: log in to Cloudflare
npx wrangler login
# Deploy
npx @zeroback/cli deploy
# Dry run (codegen only)
npx @zeroback/cli deploy --dry-run
# Pass args to wrangler
npx @zeroback/cli deploy -- --env production
# CI/CD: use an API token instead of interactive login
CLOUDFLARE_API_TOKEN=your-token npx @zeroback/cli deploy
```
### `zeroback codegen [functionsDir]`
[Section titled “zeroback codegen \[functionsDir\]”](#zeroback-codegen-functionsdir)
Run code generation without starting a dev server.
```plaintext
zeroback codegen [functionsDir]
```
| Argument | Default | Description |
| -------------- | -------------- | -------------------------------- |
| `functionsDir` | `"./zeroback"` | Path to your functions directory |
Runs the same build step as `zeroback dev` (analyze, codegen, bundle) but exits immediately. Useful for CI or pre-commit hooks.
```bash
npx @zeroback/cli codegen
```
### `zeroback reset`
[Section titled “zeroback reset”](#zeroback-reset)
Reset the local development database.
```plaintext
zeroback reset
```
Deletes the `.wrangler/state` directory, which contains all local Durable Object and SQLite data. Restart `zeroback dev` afterwards to start with a fresh database.
```bash
npx @zeroback/cli reset
```
### `zeroback run <functionName> [jsonArgs] [--url <url>]`
[Section titled “zeroback run <functionName> \[jsonArgs\] \[--url <url>\]”](#zeroback-run-functionname-jsonargs---url-url)
Invoke a function (query, mutation, or action) on the running dev server.
```plaintext
zeroback run <functionName> [jsonArgs] [--url <url>]
```
| Argument | Default | Description |
| -------------- | ----------------------- | ----------------------------------- |
| `functionName` | *(required)* | Function to call, e.g. `tasks:list` |
| `jsonArgs` | `{}` | JSON object of arguments |
| `--url` | `http://localhost:8788` | URL of the Zeroback server |
**Behavior:**
1. Sends a POST request to the server’s `/__admin/run` endpoint
2. Executes the function and prints the JSON result to stdout
3. Both public and internal functions can be called (useful for debugging)
**Examples:**
```bash
# Run a query
npx @zeroback/cli run tasks:list
# Run a mutation with arguments
npx @zeroback/cli run tasks:create '{"title": "Buy groceries", "projectId": "proj:abc", "status": "todo"}'
# Run an internal function
npx @zeroback/cli run tasks:countInternal '{"projectId": "proj:abc"}'
# Target a deployed server
npx @zeroback/cli run tasks:list --url https://my-worker.example.com
```
## Project Structure
[Section titled “Project Structure”](#project-structure)
After running `zeroback init` and `zeroback dev`, your project looks like:
```plaintext
my-app/
  zeroback/
    schema.ts        # Your schema definition
    tasks.ts         # Your function files
    _generated/
      api.ts         # Generated: typed function references
      server.ts      # Generated: typed factories + DataModel
      dataModel.ts   # Generated: DataModel type
  .zeroback/
    entry.ts         # Scaffolded by init, user-owned — imports manifest + wires to runtime
  wrangler.toml      # Cloudflare Workers configuration
```
**Key conventions:**
* Function files go in `zeroback/` (any `.ts` file except `schema.ts` and files starting with `_`)
* Nested directories are supported: `zeroback/utils/stats.ts` produces function names like `"utils/stats:functionName"`
* Schema is always `zeroback/schema.ts`
* Never edit files in `zeroback/_generated/` — they are overwritten on every build
* `.zeroback/entry.ts` is user-owned and can be customized (e.g. to add middleware or env bindings)
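The path-to-name convention can be sketched as a pure function (an illustration of the rule above, not the CLI's actual code; `utils/stats.ts` and `summary` are hypothetical names):

```typescript
// Sketch: map a file path under zeroback/ plus an export name to the
// function name you'd pass to `zeroback run`.
export function toFunctionName(relPath: string, exportName: string): string {
  const modulePath = relPath.replace(/\.ts$/, ""); // drop the extension
  return `${modulePath}:${exportName}`;
}
```

So an export `summary` in `zeroback/utils/stats.ts` would be addressed as `utils/stats:summary`.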
# Client
> Type-safe WebSocket client with auto-reconnect, optimistic updates, and offline persistence.
The `@zeroback/client` package provides `ZerobackClient` — a WebSocket-based client for connecting to your Zeroback backend from the browser or any JavaScript environment.
## Installation
[Section titled “Installation”](#installation)
```bash
npm install @zeroback/client
```
## ZerobackClient
[Section titled “ZerobackClient”](#zerobackclient)
### Constructor
[Section titled “Constructor”](#constructor)
```ts
import { ZerobackClient } from "@zeroback/client";

const client = new ZerobackClient(url, options);
```
| Parameter | Type | Description |
| --------- | ----------------------- | -------------------------------------- |
| `url` | `string` | WebSocket URL of your Zeroback backend |
| `options` | `ZerobackClientOptions` | Optional configuration |
### `ZerobackClientOptions`
[Section titled “ZerobackClientOptions”](#zerobackclientoptions)
```ts
interface ZerobackClientOptions {
persistence?: boolean | PersistenceAdapter;
maxCacheAge?: number;
schemaVersion?: string;
backoff?: BackoffOptions;
heartbeatIntervalMs?: number;
requestTimeoutMs?: number;
}
```
| Field | Type | Default | Description |
| --------------------- | ------------------------------- | ---------------------- | -------------------------------------------------------------------------------------------------- |
| `persistence` | `boolean \| PersistenceAdapter` | `undefined` (disabled) | Enable IndexedDB caching. Pass `true` for the built-in adapter, or a custom `PersistenceAdapter`. |
| `maxCacheAge` | `number` | `604800000` (7 days) | Maximum cache age in milliseconds. Entries older than this are discarded on hydration. |
| `schemaVersion` | `string` | `undefined` | When changed, the entire cache is cleared. Use this to invalidate stale data after schema changes. |
| `backoff` | `BackoffOptions` | See below | Configure reconnection backoff behavior. |
| `heartbeatIntervalMs` | `number` | `30000` (30s) | How often the client sends a ping to keep the connection alive. |
| `requestTimeoutMs` | `number` | `60000` (60s) | How long to wait for a mutation/action response before timing out. |
#### `BackoffOptions`
[Section titled “BackoffOptions”](#backoffoptions)
```ts
interface BackoffOptions {
baseMs?: number; // Default: 1000
maxMs?: number; // Default: 30000
maxAttempts?: number; // Default: 5
}
```
When persistence is **disabled** (default), the client connects immediately on construction. When persistence is **enabled**, you must call `client.init()` before using the client.
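Putting the options together, a sketch of a fully configured client — the values are illustrative, and only the field names come from the tables above:

```typescript
// Sketch: every ZerobackClientOptions field with example values.
const options = {
  persistence: true,                // cache query results in IndexedDB
  maxCacheAge: 24 * 60 * 60 * 1000, // discard cache entries older than a day
  schemaVersion: "3",               // bump after schema changes to clear the cache
  backoff: { baseMs: 500, maxMs: 10_000, maxAttempts: 10 },
  heartbeatIntervalMs: 15_000,      // ping every 15 seconds
  requestTimeoutMs: 30_000,         // fail mutations/actions after 30 seconds
};
// const client = new ZerobackClient("wss://my-worker.example.com/ws", options);
// await client.init(); // required here, since persistence is enabled
```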
## Methods
[Section titled “Methods”](#methods)
### `client.init()`
[Section titled “client.init()”](#clientinit)
Initialize the client with persistence. Hydrates cached data from IndexedDB, connects the WebSocket, and replays any offline mutations.
```ts
client.init(): Promise<void>
```
**Only needed when `persistence` is enabled.** Without persistence, the client connects automatically on construction.
```ts
const client = new ZerobackClient(url, { persistence: true });
await client.init(); // hydrate cache, connect, replay offline mutations
```
### `client.subscribe(fnName, args, callback?)`
[Section titled “client.subscribe(fnName, args, callback?)”](#clientsubscribefnname-args-callback)
Subscribe to a query. The server pushes updates whenever the query result changes.
```ts
client.subscribe(
  fnName: string,
  args: unknown,
  callback?: (data: unknown) => void
): () => void
```
| Parameter | Type | Description |
| ---------- | ------------------------- | --------------------------------------------- |
| `fnName` | `string` | Function name (e.g., `"tasks:listByProject"`) |
| `args` | `unknown` | Arguments to pass to the query |
| `callback` | `(data: unknown) => void` | Optional callback invoked on every update |
**Returns:** An unsubscribe function. Call it to stop the subscription.
```ts
const unsubscribe = client.subscribe("tasks:listByProject", { projectId: "proj123" });
// ... later
unsubscribe();
```
### `client.watchQuery(key, listener)`
[Section titled “client.watchQuery(key, listener)”](#clientwatchquerykey-listener)
Watch for changes to a specific query key in the centralized store. Used internally by React hooks via `useSyncExternalStore`.
```ts
client.watchQuery(key: QueryKey, listener: () => void): () => void
```
| Parameter | Type | Description |
| ---------- | ------------ | ------------------------------------- |
| `key` | `QueryKey` | Query key from `QueryStore.makeKey()` |
| `listener` | `() => void` | Called when the query result changes |
**Returns:** An unsubscribe function.
### `client.getQueryResult(key)`
[Section titled “client.getQueryResult(key)”](#clientgetqueryresultkey)
Get the current result for a query key (merged base + optimistic update layers).
```ts
client.getQueryResult(key: QueryKey): unknown | undefined
```
Returns `undefined` if no result is available yet.
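The "merged base + optimistic" layering means a server-confirmed base result with zero or more optimistic layers applied on top; when the server responds to a mutation, its layer is dropped and the base becomes authoritative again. A simplified sketch of that idea (illustrative, not Zeroback's actual internals):

```ts
type Layer = Map<string, unknown>;

// Server-confirmed results, plus optimistic layers applied in order.
const base: Layer = new Map();
const optimisticLayers: Layer[] = [];

function getQueryResult(key: string): unknown | undefined {
  // Later layers win; fall back to the server-confirmed base.
  for (let i = optimisticLayers.length - 1; i >= 0; i--) {
    if (optimisticLayers[i].has(key)) return optimisticLayers[i].get(key);
  }
  return base.get(key);
}

base.set("tasks:list|{}", ["a"]);
optimisticLayers.push(new Map([["tasks:list|{}", ["a", "b (pending)"]]]));
console.log(getQueryResult("tasks:list|{}")); // [ 'a', 'b (pending)' ]
optimisticLayers.pop(); // server confirmed: optimistic layer removed
console.log(getQueryResult("tasks:list|{}")); // [ 'a' ]
```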
### `client.hasServerResult(key)`
[Section titled “client.hasServerResult(key)”](#clienthasserverresultkey)
Returns `true` once this query key has a result confirmed by the server (not just hydrated from the persistence cache).
```ts
client.hasServerResult(key: QueryKey): boolean
```
### `client.mutation(fnName, args, opts?)`
[Section titled “client.mutation(fnName, args, opts?)”](#clientmutationfnname-args-opts)
Execute a mutation. Mutations are queued and run sequentially, in the order they were submitted.
```ts
client.mutation(
  fnName: string,
  args: unknown,
  opts?: { optimisticUpdate?: (store: LocalStore) => void }
): Promise<unknown>
```
| Parameter | Type | Description |
| ----------------------- | ----------------------------- | ----------------------------------------------------------- |
| `fnName` | `string` | Mutation function name |
| `args` | `unknown` | Arguments to pass |
| `opts.optimisticUpdate` | `(store: LocalStore) => void` | Optional callback to modify local query results immediately |
**Returns:** The mutation’s return value.
#### Optimistic Updates
[Section titled “Optimistic Updates”](#optimistic-updates)
Optimistic updates modify local query results immediately, before the server confirms the mutation. The optimistic layer is removed once the server responds.
```ts
await client.mutation("tasks:create", { title: "New task", ... }, {
optimisticUpdate: (store) => {
const current = store.getQuery("tasks:listByProject", { projectId: "proj123" });
if (Array.isArray(current)) {
store.setQuery("tasks:listByProject", { projectId: "proj123" }, [
{ title: "New task", _id: "temp", _creationTime: Date.now() },
...current,
]);
}
},
});
```
#### `LocalStore`
[Section titled “LocalStore”](#localstore)
```ts
interface LocalStore {
getQuery(ref: { _name: string } | string, args?: unknown): unknown | undefined;
setQuery(ref: { _name: string } | string, args: unknown, value: unknown): void;
}
```
| Method | Description |
| ---------------------------- | ---------------------------------------------------------------------------------------- |
| `getQuery(ref, args?)` | Read the current result for a query. `ref` can be a function reference or a string name. |
| `setQuery(ref, args, value)` | Set the local result for a query. |
### `client.action(fnName, args)`
[Section titled “client.action(fnName, args)”](#clientactionfnname-args)
Execute an action.
```ts
client.action(fnName: string, args: unknown): Promise<unknown>
```
| Parameter | Type | Description |
| --------- | --------- | -------------------- |
| `fnName` | `string` | Action function name |
| `args` | `unknown` | Arguments to pass |
**Returns:** The action’s return value.
### `client.onConnectionChange(listener)`
[Section titled “client.onConnectionChange(listener)”](#clientonconnectionchangelistener)
Listen for connection state changes.
```ts
client.onConnectionChange(listener: (state: ConnectionState) => void): () => void
```
**Returns:** An unsubscribe function.
### `client.close()`
[Section titled “client.close()”](#clientclose)
Close the WebSocket connection and clean up.
```ts
client.close(): void
```
## Properties
[Section titled “Properties”](#properties)
### `client.connectionState`
[Section titled “client.connectionState”](#clientconnectionstate)
The current connection state.
```ts
client.connectionState: ConnectionState
```
```ts
type ConnectionState = "connecting" | "connected" | "disconnected";
```
| State | Description |
| ---------------- | --------------------------------------------------------------- |
| `"connecting"` | WebSocket is being established |
| `"connected"` | Connected and ready |
| `"disconnected"` | Not connected (will auto-reconnect unless `close()` was called) |
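A common use is mapping the state to UI copy. An exhaustive `switch` over the union lets the compiler flag any future state (labels here are hypothetical):

```ts
type ConnectionState = "connecting" | "connected" | "disconnected";

function statusLabel(state: ConnectionState): string {
  switch (state) {
    case "connecting":
      return "Connecting…";
    case "connected":
      return "Live";
    case "disconnected":
      return "Offline, reconnecting…";
  }
}

console.log(statusLabel("connected")); // "Live"
```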
### `client.queryStore`
[Section titled “client.queryStore”](#clientquerystore)
The centralized query result cache (read-only access).
```ts
client.queryStore: QueryStore
```
#### `QueryStore.makeKey(fnName, args)`
[Section titled “QueryStore.makeKey(fnName, args)”](#querystoremakekeyfnname-args)
Create a deterministic cache key from a function name and arguments.
```ts
static makeKey(fnName: string, args: unknown): string
```
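`makeKey` must be deterministic so that the same query always maps to the same cache slot, regardless of the property order of the args object. One way to get that property (an illustrative sketch, not necessarily Zeroback's implementation) is JSON serialization with recursively sorted keys:

```ts
// Recursively sort object keys so equivalent args serialize identically.
function stableStringify(value: unknown): string {
  if (Array.isArray(value)) {
    return "[" + value.map(stableStringify).join(",") + "]";
  }
  if (value !== null && typeof value === "object") {
    const entries = Object.entries(value as Record<string, unknown>)
      .sort(([a], [b]) => a.localeCompare(b))
      .map(([k, v]) => JSON.stringify(k) + ":" + stableStringify(v));
    return "{" + entries.join(",") + "}";
  }
  return JSON.stringify(value);
}

function makeKey(fnName: string, args: unknown): string {
  return fnName + "|" + stableStringify(args);
}

// Same args, different property order -> same key.
console.log(
  makeKey("tasks:list", { a: 1, b: 2 }) ===
  makeKey("tasks:list", { b: 2, a: 1 })
); // true
```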
## Connection Behavior
[Section titled “Connection Behavior”](#connection-behavior)
* **Auto-reconnect:** The client automatically reconnects with exponential backoff when disconnected.
* **Message queuing:** Messages sent while disconnected are queued and flushed on reconnect.
* **Re-subscribe on reconnect:** All active subscriptions are automatically re-established.
* **Server reset handling:** If the server loses subscription state (e.g., after hibernation), the client re-subscribes all active queries.
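Exponential backoff typically doubles the delay per attempt up to a cap. The exact schedule is internal to the client, but the shape looks like this (illustrative numbers, not Zeroback's actual timings):

```ts
// Delay before reconnect attempt n (0-based): base * 2^n, capped at maxMs.
function reconnectDelay(attempt: number, baseMs = 500, maxMs = 30_000): number {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

console.log([0, 1, 2, 3, 10].map((n) => reconnectDelay(n)));
// [ 500, 1000, 2000, 4000, 30000 ]
```

Real-world clients often add random jitter on top of such a schedule to avoid thundering-herd reconnects after a server restart.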
## Persistence
[Section titled “Persistence”](#persistence)
When enabled, the client caches query results in IndexedDB for instant display on subsequent page loads.
```ts
const client = new ZerobackClient("wss://example.com/ws", {
persistence: true, // Use built-in IndexedDB adapter
maxCacheAge: 86400000, // 1 day cache
schemaVersion: "v2", // Clear cache on schema change
});
await client.init();
```
### Custom Persistence Adapter
[Section titled “Custom Persistence Adapter”](#custom-persistence-adapter)
Implement the `PersistenceAdapter` interface for custom storage backends:
```ts
interface PersistenceAdapter {
getAll(): Promise