
Secret Sauce: How the Hyperclay Hosted Platform Works

This is a technical blueprint for the Hyperclay hosted platform. Every architectural decision, every code pattern, every trade-off is documented here. A developer could read this and rebuild the system from scratch.

The stack is intentionally boring: Node.js, Express, PostgreSQL with Sequelize, Edge.js templates, PM2 for process management, Stripe for payments, Postmark for email, and Server-Sent Events for real-time sync. The interesting part isn’t the technology — it’s how the pieces fit together.

The Core Innovation: HTML Files as a Database

Traditional web apps shuttle data between a frontend, an API, and a database. Hyperclay collapses all three into one artifact: the HTML file. The DOM is the data model. Saving means writing the entire document.documentElement.outerHTML to disk. Next page load serves the modified file — changes are permanent.

The Save Pipeline

When a user saves, the client sends the full HTML string via POST. On the server, saveHTML runs through these steps:

  1. Format — The raw HTML is passed through js-beautify with force-expanded multiline attributes and 2-space indentation. SVG, script, and style tags are left unformatted. Sites can opt out by adding formathtml="false" to their <html> element.

  2. Deduplicate — The current file is read from disk and compared to the formatted HTML. If the content hasn’t changed, the save is skipped entirely — no backup, no broadcast, no wasted I/O.

  3. First-save safety net — If this site has zero backups in the database (e.g. it was created before the backup system existed), the current content is backed up before the new content overwrites it.

  4. Backup — The formatted HTML is passed to BackupService.createBackup, which stores either a full snapshot or a unified diff inside a PostgreSQL transaction.

  5. Write — The formatted HTML is written to /sites/{sitename}.html via dx('sites').createFileOverwrite().

  6. Tailwind compile — If the HTML contains a <link> referencing /tailwindcss/{sitename}.css, the Tailwind JIT compiler extracts classes and writes the compiled CSS to /public-assets/tailwindcss/{sitename}.css.

  7. Broadcast — The new content and a SHA-256 checksum are broadcast over SSE to any connected live-sync clients and the owner’s disk-sync engine.

// The actual save handler (simplified)
const formattedHtml = formatHtml(html);
const currentHtml = await dx('sites', `${node.name}.html`).getContents();
if (currentHtml === formattedHtml) return next(); // Skip if unchanged

await BackupService.createBackup(node.id, formattedHtml, userId);
await dx('sites').createFileOverwrite(`${node.name}.html`, formattedHtml);

if (hasTailwindLink(formattedHtml, node.name)) {
  const css = await compileTailwind(formattedHtml);
  await dx('public-assets/tailwindcss').createFileOverwrite(`${node.name}.css`, css);
}

liveSync.broadcastFileSaved(ownerUsername, node.name, {
  content: formattedHtml,
  checksum: crypto.createHash('sha256').update(formattedHtml).digest('hex').substring(0, 16),
  modifiedAt: new Date().toISOString()
});

The Three-Dimensional State Machine Router

Every request to Hyperclay is classified along three axes: who is making the request, what they’re accessing, and what action they’re performing. These three dimensions form a routing key like dev:regular_site:edit, and the routing table maps keys to handler pipelines.

User Types

Type | Description
no_auth | Not logged in
app_user | Logged-in user on a multi-tenant app (account_type = 'app_user')
dev | Developer account (account_type = 'dev') — can create sites, has billing
admin | Superadmin (isSuperadmin = true) — falls through to dev routes if no admin-specific route exists

Resource Types

Type | Description
main_app | The hyperclay.com application itself (dashboard, auth pages, billing)
regular_site | A standard user-created site
multi_tenant | A site with enableSignups = true
instance | A copy of a multi-tenant app owned by an app_user
folder | Dashboard folder navigation
upload | User-uploaded files

The Eight-Step State Detection Pipeline

Before any route handler executes, eight middleware build a complete picture of the request:

state__init → Initialize req.state with safe defaults
state__meta → Parse domain, method, path, body, query, subdomain
state__user → Load Person from auth_token cookie, detect user type
state__resource → Detect resource type, load Node from database, check ownership
state__shareAccess → If valid share token, elevate no_auth → app_user with ownership
state__action → Map URL path to action name (e.g. /edit → 'edit', /login GET → 'login-page')
state__key → Build final key: '{userType}:{resourceType}:{action}'
state__authcookies → Set ownership and resource cookies for the client

After detection, req.state contains everything needed for routing:

{
  meta: { fullDomain, rootDomain, method, path, subdomain, isCustomDomain, body, query },
  user: { person, isLoggedIn, isOwner, username, hasSubscription, siteLimit, canCreateSites },
  resource: { node, siteName, customDomain, sourceNode, instanceOwner, currentFolderId },
  userType: 'dev',
  resourceType: 'regular_site',
  action: 'edit',
  key: 'dev:regular_site:edit'
}

Action Detection

The state__action middleware maps URL paths to action names. The rules:

  • Root path (/) → view
  • First segment is the action name: /edit → edit, /save → save, /dashboard → dashboard
  • GET on auth pages gets a -page suffix: GET /login → login-page, POST /login → login. This lets the routing table serve the form on GET and process the submission on POST with the same URL.
  • Compound routes have special mappings: /edit/uploads/... → edit-upload, /save/uploads/... → save-upload, /live-sync/stream → live-sync-stream
  • Versioning routes remap: /version/123 → view-version, /restore/123 → restore-version, /versions → backups
  • Unknown paths on sites fall through to view. If the resource is a site and the action isn’t in the known actions list (edit, save, login, signup, etc.), it’s treated as a view. This enables client-side routing — any path on a site renders the root HTML, and the client app can read window.location.pathname.

The remaining path segments are stored in req.state.actionParams, available to handlers. For /version/123, the action is view-version and actionParams is ['123'].
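As a sketch, the rules above might look like the following function. This is an illustrative reconstruction, not the actual state__action source — the constant and function names are assumptions:

```javascript
// Simplified sketch of the action-detection rules (illustrative names).
const KNOWN_ACTIONS = new Set(['edit', 'save', 'login', 'signup', 'dashboard', 'new', 'versions']);
const AUTH_PAGES = new Set(['login', 'signup']);

function detectAction(method, path, isSite) {
  const segments = path.split('/').filter(Boolean);
  if (segments.length === 0) return { action: 'view', actionParams: [] }; // root path

  const [first, ...rest] = segments;

  // Compound routes get special mappings
  if (first === 'edit' && rest[0] === 'uploads') return { action: 'edit-upload', actionParams: rest.slice(1) };
  if (first === 'save' && rest[0] === 'uploads') return { action: 'save-upload', actionParams: rest.slice(1) };
  if (first === 'live-sync' && rest[0] === 'stream') return { action: 'live-sync-stream', actionParams: [] };

  // Versioning routes remap
  if (first === 'version') return { action: 'view-version', actionParams: rest };
  if (first === 'restore') return { action: 'restore-version', actionParams: rest };
  if (first === 'versions') return { action: 'backups', actionParams: [] };

  // GET on auth pages serves the form; POST processes it
  if (AUTH_PAGES.has(first) && method === 'GET') return { action: `${first}-page`, actionParams: rest };

  // Unknown paths on sites fall through to view (enables client-side routing)
  if (isSite && !KNOWN_ACTIONS.has(first)) return { action: 'view', actionParams: segments };

  return { action: first, actionParams: rest };
}
```

So detectAction('GET', '/version/123', true) yields the view-version action with actionParams ['123'], matching the description above.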

The Expansion Syntax

The + operator lets one route definition cover multiple combinations. This:

'dev+app_user:regular_site+instance:save': [requireOwnership, saveHTML, respond]

Expands at startup into four separate routes:

dev:regular_site:save
dev:instance:save
app_user:regular_site:save
app_user:instance:save
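The expansion is a cartesian product over the user and resource dimensions. A minimal sketch (function names are illustrative, not from the Hyperclay source):

```javascript
// Sketch of the '+' expansion: split the key on ':', split each dimension
// on '+', and emit every combination.
function expandRouteKey(key) {
  const [users, resources, action] = key.split(':');
  const keys = [];
  for (const user of users.split('+')) {
    for (const resource of resources.split('+')) {
      keys.push(`${user}:${resource}:${action}`);
    }
  }
  return keys;
}

// Applied once at startup over the whole routing table
function expandRoutingTable(table) {
  const expanded = {};
  for (const [key, pipeline] of Object.entries(table)) {
    for (const k of expandRouteKey(key)) expanded[k] = pipeline;
  }
  return expanded;
}
```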

Wildcard Matching

When the exact key has no match, the router tries fallbacks in order:

  1. Exact match: dev:regular_site:edit
  2. Admin fallback to dev: dev:regular_site:edit (if user is admin)
  3. Wildcard user: *:regular_site:edit
  4. Wildcard resource: dev:*:edit
  5. Full wildcard: *:*:edit

If nothing matches, the request gets a 403.
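The fallback order can be sketched as an ordered candidate list — an illustrative reconstruction of the lookup, not the actual router source:

```javascript
// Sketch of wildcard fallback: try candidates in order, return the first
// pipeline found, or null (the caller then responds 403).
function resolveRoute(routes, userType, resourceType, action) {
  const candidates = [`${userType}:${resourceType}:${action}`];   // 1. exact match
  if (userType === 'admin') {
    candidates.push(`dev:${resourceType}:${action}`);             // 2. admin falls through to dev
  }
  candidates.push(
    `*:${resourceType}:${action}`,                                // 3. wildcard user
    `${userType}:*:${action}`,                                    // 4. wildcard resource
    `*:*:${action}`                                               // 5. full wildcard
  );
  for (const key of candidates) {
    if (routes[key]) return routes[key];
  }
  return null;
}
```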

The State Factory

Route handlers use a state() factory for inline condition checks:

// Require ownership — returns 400 if false
'dev:regular_site:edit': [
  requireOwnership,
  serveCodeEditor
]

// Require condition with custom message
state(s => s.user.person?.hasActiveSubscription).require('Active subscription required')

// Redirect if condition fails
state(s => false).redirect('/dashboard', 'You already have an account.')
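A plausible shape for the factory itself — a sketch under the assumption that state() returns a middleware builder closed over a predicate on req.state (not the actual Hyperclay implementation):

```javascript
// Sketch: state(predicate) returns builders that turn the predicate
// into ordinary Express middleware. Names and response shapes are assumptions.
function state(predicate) {
  return {
    require(message) {
      return (req, res, next) => {
        if (predicate(req.state)) return next();
        res.status(400).json({ msg: message, msgType: 'error' }); // condition failed
      };
    },
    redirect(url, message) {
      return (req, res, next) => {
        if (predicate(req.state)) return next();
        res.redirect(`${url}?msg=${encodeURIComponent(message)}&msgType=error`);
      };
    }
  };
}
```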

Key Routes

Key | What it does
*:main_app:view | Serve the homepage
dev:main_app:dashboard | Serve the dashboard (folder browser, site list)
dev:main_app:new | Create a new site
*:regular_site+multi_tenant+instance:view | Serve a site's HTML
dev:regular_site+multi_tenant+instance:edit | Serve the code editor (ownership required)
dev+app_user:regular_site+multi_tenant+instance:save | Save HTML (ownership required)
*:multi_tenant:signup | Sign up for a multi-tenant app, create instance
dev+app_user:*:live-sync-stream | SSE stream for real-time sync
*:regular_site+multi_tenant+instance:data | Data extraction API

The Response Layer

The routing table maps keys to handler pipelines, but each pipeline needs to produce output. The respond module provides a set of response functions that handle the browser-vs-AJAX split automatically.

Function | What it does
respond.success(res, { msg, html }) | Sends JSON with msgType: 'success'. Used by save, create, and update operations.
respond.error(res, error) | Browser requests get an Edge.js error page. AJAX requests get JSON. Sequelize errors are auto-classified: SequelizeUniqueConstraintError → 409, SequelizeValidationError → 400.
respond.redirect(res, url, params) | Builds a URL with query parameters (msg, msgType, return) and issues a 302.
respond.html(res, html) | Sends raw HTML, prepending <!DOCTYPE html> if missing.
respond.nodes({ res, person, folderPath, msg }) | Reloads the person via getPersonWithNodes, renders the components/nodes Edge.js partial, returns JSON with the rendered HTML fragment.

respond.nodes is how the dashboard stays in sync — after creating, deleting, or moving a site, the client gets back the updated node list as a pre-rendered HTML string, no second request needed.

The routing table also uses serve('home') (renders a named Edge.js template for browser requests) and asyncHandler(fn) (wraps async handlers with try/catch, routing uncaught errors to respond.error).

The request type is detected via isBrowserRequest(), which checks the Accept header. This means the same route can serve both a full HTML page (direct navigation) and a JSON response (AJAX call from the client framework).
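The detection itself is likely a small Accept-header check — this is a sketch consistent with the description, not the actual isBrowserRequest() source:

```javascript
// Sketch: browsers send "Accept: text/html,..." on navigation, while
// fetch/XHR calls from the client framework typically request JSON.
function isBrowserRequest(req) {
  const accept = req.headers.accept || '';
  return accept.includes('text/html');
}
```

A direct page load then gets the rendered Edge.js template, while the same route returns JSON to an AJAX caller.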

The db Middleware Factory

Route handlers often need to check database conditions before proceeding — does this email already exist? does this site name conflict? The db object provides a Proxy-based API for expressing these checks declaratively inside the routing table:

// Ensure no Person exists with this email (for signup)
db.person({ email: req => req.body.email }).exists(false)

// Ensure the node exists (404 if not)
db.node({ name: req => req.body.name }).exists(true)

// Update the current user's record
db.person().update({ username: req.body.username })

// Delete matching records
db.customDomain({ id: req => req.body.domainId }).destroy()

Each call returns a standard Express middleware function. The query parameter accepts static values or functions that receive req — values are resolved at execution time, not at startup.

.exists(false) returns 409 if a record is found. .exists(true) returns 404 if not found. .update(data) finds the record and updates it. .destroy() deletes matching records. .create(data) inserts a new record and stores it on req.createdRecord for later middleware.

Special handling for Person and Node: calling db.person().update(...) with no query defaults to req.state.user.person, and db.node().update(...) defaults to req.state.resource.node. These records were already loaded by the state pipeline, so no redundant lookup is needed.

The Proxy intercepts any property access, so db.customDomain(...), db.message(...), db.siteBackups(...) etc. all work for any registered Sequelize model.
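The core of the Proxy mechanism can be sketched as follows — a minimal reconstruction showing property interception and deferred query resolution, with only the .exists() check implemented (names and response shapes are assumptions, not the Hyperclay source):

```javascript
// Sketch: a Proxy whose `get` trap turns any property access into a
// middleware builder for the model of that name.
function makeDb(models) {
  return new Proxy({}, {
    get(_target, modelName) {
      return (query = {}) => ({
        exists(shouldExist) {
          return async (req, res, next) => {
            // Resolve function values like `req => req.body.email` at
            // execution time, not at startup
            const where = {};
            for (const [k, v] of Object.entries(query)) {
              where[k] = typeof v === 'function' ? v(req) : v;
            }
            const record = await models[modelName].findOne({ where });
            if (shouldExist && !record) return res.status(404).json({ msg: 'Not found' });
            if (!shouldExist && record) return res.status(409).json({ msg: 'Already exists' });
            next();
          };
        }
      });
    }
  });
}
```

Because the trap fires on any property name, db.customDomain(...), db.siteBackups(...), etc. need no per-model registration beyond the models map.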

The Hybrid Storage System

Hyperclay uses PostgreSQL for metadata and the filesystem for content.

Database Models

Model | Purpose
Person | User accounts — email, username, password (bcrypt), stripeCustomerId, account_type (dev/app_user), isSuperadmin
Node | Unified entity for sites, folders, and uploads — name, type, parentId, path, enableSignups, shareToken, sourceNodeId
PersonNode | Ownership junction table (many-to-many between Person and Node)
SiteBackups | Version history — html/diffContent, isDiff, diffFromId, snapshotId, contentHash, backupNumber
CustomDomain | Domain mappings — domain, status (pending/verifying/active/failed), sslStatus, saasCustomDomainsId
LoggedInToken | Auth tokens — token (secure random), personId, expires (30 days from creation)
EmailConfirmToken | Email verification tokens
Message | Contact form submissions
Event | Activity tracking (logins, saves, signups)
ApiKey | Sync API keys — keyHash (SHA-256), keyPrefix (hcsk_), expiresAt, isActive

The Node model is the unifying abstraction. Sites, folders, and uploads are all Nodes with a type field. Folders form a tree via parentId. Instances link to their source via sourceNodeId. This lets the dashboard show sites, folders, and uploads in a single query via the PersonNode join.

Filesystem Layout

/sites/{sitename}.html — Live site files
/sites-versions/{sitename}/ — Timestamped backup files (legacy, pre-diff system)
/sites-deleted/{sitename}.html — Soft-deleted sites
/uploads/{username}/{path}/ — User-uploaded files
/public-assets/tailwindcss/{name}.css — Compiled Tailwind CSS per site

The dx.js File Utility

All filesystem operations go through dx, a chainable async API that absorbs errors instead of throwing:

// Read — returns null if file doesn't exist
const html = await dx('sites', 'mysite.html').getContents();

// Write — creates directories automatically
await dx('sites').createFileOverwrite('mysite.html', html);

// Check existence
const exists = await dx('sites', 'mysite.html').exists();

// Copy with automatic directory creation
await dx('sites-deleted', 'mysite.html').copyFileFrom('sites', 'mysite.html');

// JSON operations
await dx('config.json').appendJSON({ username: 'alice' });
const value = await dx('config.json').getKey('username');

// Chainable — copy then rename
await dx('backups', 'site.html')
  .copyFileFrom('sites', 'site.html')
  .renameTo('site-backup.html');

// Null safety — returns null instead of crashing
await dx('users', undefined).append('data'); // returns null

The chainable mechanism uses a Proxy that wraps each async operation, passing the resolved path from one step to the next. Native array methods like .length, .reverse(), and [0] work on the resolved values.

The Diff-Based Backup System

Every save creates a backup. Every 20th backup is a full HTML snapshot stored in the SiteBackups table. The 19 saves between snapshots are stored as unified diffs (via the diff npm package) against the previous backup.

How It Works

// Inside BackupService.createBackup, within a transaction:
const backupNumber = lastBackupNumber + 1;
const isSnapshot = backupNumber % 20 === 0 || backupNumber === 1;

if (isSnapshot) {
  // Store full HTML
  await SiteBackups.create({ nodeId, html, contentHash, isDiff: false, backupNumber });
} else {
  // Get previous backup and create diff
  const previousHtml = await lastBackup.getFullHtml();
  const diff = createPatch(String(lastBackup.id), previousHtml, html);
  await SiteBackups.create({
    nodeId, diffContent: diff, diffFromId: lastBackup.id,
    contentHash, isDiff: true, snapshotId, backupNumber
  });
}

Reconstruction

To reconstruct any backup’s full HTML, getFullHtml() does:

  1. If the backup is a snapshot (isDiff: false), return html directly
  2. Otherwise, load the nearest snapshot (tracked by snapshotId)
  3. Load all diffs between the snapshot and this backup, ordered by ID
  4. Validate chain continuity — each diff’s diffFromId must equal the previous backup’s ID
  5. Apply diffs sequentially using applyPatch
  6. Verify the final SHA-256 hash matches contentHash
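The chain-continuity check (step 4) is the part that guards against corruption. A sketch of that validation walk, with backups as plain objects — illustrative, not the getFullHtml() source:

```javascript
// Sketch: given the starting snapshot and the ordered diffs after it,
// verify each diff chains off the previous backup's ID before any
// patch is applied.
function validateChain(snapshot, diffs) {
  let previousId = snapshot.id;
  for (const d of diffs) {
    if (d.diffFromId !== previousId) {
      throw new Error(
        `Broken backup chain: diff ${d.id} expected base ${previousId}, got ${d.diffFromId}`
      );
    }
    previousId = d.id;
  }
  return true;
}
```

Only after this passes are the diffs applied sequentially (with the diff package's applyPatch) and the final hash compared against contentHash.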

Concurrency Safety

The entire createBackup call runs inside a PostgreSQL transaction that takes a row-level lock on the Node:

await Node.findByPk(nodeId, { transaction, lock: transaction.LOCK.UPDATE });

This serializes concurrent saves for the same site, preventing two backups from both reading the same “last backup” and creating conflicting diffs.

Multi-Tenant Architecture

Any site becomes a multi-tenant platform by setting enableSignups = true on its Node record. The state machine detects this and classifies the resource as multi_tenant instead of regular_site.

Instance Creation Flow

When a user signs up on a multi-tenant app:

  1. Create account — A Person record with account_type: 'app_user' and a bcrypt-hashed password
  2. Generate name — The instance gets named {appname}-by-{username}. If taken, it becomes {appname}-by-{username}-2, etc.
  3. Create Node — A new Node with type: 'site' and sourceNodeId pointing to the source app
  4. Copy HTML — The source site’s HTML file is copied to /sites/{instancename}.html
  5. Create backup — An initial timestamped backup is created
  6. Send confirmation — An email with a confirmation link is sent
const instanceName = await generateInstanceName(sourceNode.name, username, sourceNode.id);
const instanceNode = await Node.create({
  name: instanceName,
  type: 'site',
  sourceNodeId: sourceNode.id,
  parentId: 'root'
});
await instanceNode.setPeople([person]);

const sourceHtml = await dx('sites', `${sourceNode.name}.html`).getContents();
await dx('sites').createFileOverwrite(`${instanceName}.html`, sourceHtml || '');
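The naming step (step 2) can be sketched as a collision loop. Here the existence check is injected as a callback rather than a Node table query, so the signature differs from the real generateInstanceName — an illustrative reconstruction:

```javascript
// Sketch: `{appname}-by-{username}`, appending -2, -3, ... until free.
// `nameTaken` is an injected async lookup (the real code queries Node).
async function generateInstanceName(appName, username, nameTaken) {
  const base = `${appName}-by-${username}`;
  if (!(await nameTaken(base))) return base;
  for (let n = 2; ; n++) {
    const candidate = `${base}-${n}`;
    if (!(await nameTaken(candidate))) return candidate;
  }
}
```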

Instance URLs

Instances are accessed at {instancename}.{domain}:

  • Hyperclay subdomain: myapp-by-alice.hyperclay.com
  • Custom domain: myapp-by-alice.myapp.com (requires wildcard domain *.myapp.com)

Each instance evolves independently from the source. Editing the source app doesn’t change existing instances.

Wildcard Domain Requirements

If a multi-tenant app uses custom domains, enabling signups requires a wildcard domain (*.myapp.com) so that instance subdomains can be routed. Without custom domains, instances use Hyperclay subdomains by default.

Authentication and Access Control

Token-Based Auth

Login creates a LoggedInToken record with a cryptographically secure random ID and a 30-day expiry:

// LoggedInToken model
{
  token: idSecure(), // Secure random string
  personId: person.id,
  expires: Date.now() + (30 * 24 * 60 * 60 * 1000) // 30 days
}

The token is stored as an httpOnly cookie. For Hyperclay subdomains, it’s scoped to .hyperclay.com (the leading dot covers all subdomains). For custom domains, SameSite: none and secure: true enable cross-domain cookie sharing.

On every request, state__user looks up the token, loads the Person with their Nodes (via eager loading), calculates site limits, and sets the user type.

Eager Loading with getPersonWithNodes

The getPersonWithNodes function is used everywhere a fully-populated user is needed — during login, on every authenticated request, after account creation. It loads the Person with all their Nodes in a single query via Sequelize eager loading:

const person = await Person.findOne({
  where: { id: personId },
  include: [{
    model: Node,
    through: PersonNode,
    include: [{ model: Node, as: 'SourceNode', required: false }]
  }],
  order: [[Node, 'createdAt', 'ASC']]
});

This returns a Person with person.Nodes — an array of all sites, folders, and uploads they own, each with its SourceNode if it’s an instance. The dashboard, site limit calculation, ownership checks, and folder navigation all read from this pre-loaded array instead of making additional queries. The createdAt ASC ordering means the dashboard shows sites in creation order.

The cookieManager handles the complexity of setting cookies that work across both Hyperclay subdomains and custom domains:

// For Hyperclay domains — scope to apex so all subdomains share auth
setCookieOnApexAndAllSubdomains(res, 'auth_token', token, {
  maxAge: 30 * 24 * 60 * 60 * 1000,
  httpOnly: true
});

// For custom domains — SameSite=none enables cross-domain sharing
res.cookie('auth_token', token, {
  maxAge,
  httpOnly: true,
  secure: true,
  sameSite: 'none',
  domain: req.state.meta.rootDomain
});

The same split applies to ownership cookies. When a user visits a site, cookieManager.setOwnershipCookie sets currentResource (the site name) and isAdminOfCurrentResource (whether they own it) on the appropriate domain. These cookies are readable by client JavaScript — the client-side hyperclay.js framework uses them to decide whether to show edit controls.

On logout, clearAuthCookies clears all auth and ownership cookies across every possible domain variation to prevent stale sessions.

Two Account Types

Type | How they're created | What they can do
dev | Sign up via Stripe checkout → set password | Create sites, edit code, manage billing, custom domains
app_user | Sign up on a multi-tenant app | Edit their own instance, manage their uploads

Share Token System

Any site can generate a share token — a long random string stored on the Node. When a visitor arrives with ?token=... in the URL, state__shareAccess middleware:

  1. Validates the token against the Node’s shareToken field (must be shareEnabled: true)
  2. Sets req.state.hasShareAccess = true and req.state.user.isOwner = true
  3. Elevates no_auth users to app_user (devs keep their dev type)
  4. Stores the token in a site-specific cookie (share_{sitename}) for future visits

This means share link recipients can save changes without creating an account.

Password Security

  • Bcrypt hashing with salt rounds of 10
  • 8-character minimum
  • Checked against a top-100k common passwords list loaded at server startup
  • Timing-safe comparison on failed lookups (dummy bcrypt compare to prevent enumeration)

Real-Time Collaboration via SSE

The livesync-hyperclay library handles real-time sync between browser editors and the disk sync engine.

SSE Stream

Clients connect via GET:

GET /live-sync/stream

The server sets SSE headers, registers the client, sends a : connected comment, and starts 30-second keep-alive pings. On disconnect, the client is unregistered.

res.setHeader('Content-Type', 'text/event-stream');
res.setHeader('Cache-Control', 'no-cache');
res.setHeader('Connection', 'keep-alive');
res.setHeader('X-Accel-Buffering', 'no');

liveSync.subscribe(file, res);

const keepAlive = setInterval(() => {
  res.write(': ping\n\n');
}, 30000);

Save Broadcast

When a save happens (from the code editor, the visual editor, or the sync API), two broadcasts fire:

  1. Browser broadcast — liveSync.broadcast(file, { html, sender }) sends to all SSE clients viewing that file. The sender ID lets clients ignore their own saves.
  2. Sync engine broadcast — liveSync.broadcastFileSaved(username, siteName, { content, checksum, modifiedAt }) notifies the owner's hyperclay-local sync tool.

Access Control

Live-sync routes require ownership (checked via the state machine):

'dev+app_user:regular_site+multi_tenant+instance:live-sync-stream': [ requireOwnership, handleLiveSyncStream ]

Share guests get access because state__shareAccess elevates them and sets isOwner = true.

The Sync API

The Sync API powers hyperclay-local, a CLI tool that syncs files between a developer’s local filesystem and the platform. It uses a completely separate authentication system from the browser-based state machine — API keys instead of cookies.

API Key Authentication

Keys use a hcsk_ prefix followed by 32 random bytes (hex-encoded). The raw key is shown once during generation and stored as a SHA-256 hash. Keys expire after 1 year. Each request validates the X-API-Key header by hashing the provided key and looking up the hash in the ApiKey table.

const keyHash = crypto.createHash('sha256').update(apiKeyHeader).digest('hex'); const apiKey = await ApiKey.findValidKey(keyHash);

The authenticateApiKey middleware runs before all sync endpoints — it’s separate from the state machine pipeline. On success, it sets req.state.user.person to the key’s owner, so downstream handlers can use the same person reference.

Endpoints

Endpoint | Method | Purpose
/sync/status | GET | Server time for clock synchronization
/sync/stream | GET | SSE stream for real-time file change notifications
/sync/files | GET | List all owned sites with checksums and modification times
/sync/upload | POST | Upload or create a site file
/sync/download/* | GET | Download a site's HTML content
/sync/uploads | GET | List all owned upload files
/sync/uploads/* | GET | Download an upload file (base64 encoded)
/sync/uploads | POST | Upload or create an upload file (base64)

Sync Upload Flow

The upload endpoint does more than write a file:

  1. Parse path — Splits folder/subfolder/sitename into folder components and site name
  2. Create folders — If the path includes folders, creates Node records for each level that doesn’t exist yet, with proper parent-child relationships
  3. Check name collisions — If the site name exists globally but is owned by someone else, returns 409. If owned by the same user in a different folder, reuses the existing Node.
  4. Backup — If the file already exists on disk, creates a backup via BackupService.createBackup before overwriting
  5. Write — Writes to /sites/{sitename}.html (always flat on disk, folder paths are metadata only)
  6. Tailwind — Compiles Tailwind CSS if the HTML references a Tailwind stylesheet
  7. Broadcast — Sends the update to both the live-sync system (for connected browsers) and the sync engine (for other devices running hyperclay-local)

SSE Stream Per User

Unlike the browser live-sync (which subscribes to individual files), the sync stream subscribes to an entire user’s file changes via liveSync.subscribeUser. When any of the user’s files is saved — from the browser editor, another device’s hyperclay-local, or the visual editor — the stream pushes the update.

Dual Broadcast

When a sync upload includes a snapshotHtml field and senderId, two additional broadcasts fire:

  1. liveSync.broadcast(siteName, { html, sender }) — pushes to browser editors viewing the file
  2. liveSync.broadcastToUser(username, siteName, { html, sender }) — pushes to other hyperclay-local instances

This enables three-way sync: local filesystem ↔ platform ↔ browser editor. The sender ID prevents echo loops — each client ignores broadcasts from itself.
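The echo-loop suppression on the receiving side amounts to a one-line guard. A sketch of the client-side handling, with illustrative names:

```javascript
// Sketch: each client carries a random sender ID, stamps it on its own
// saves, and drops any broadcast carrying that same ID.
function makeSyncClient(senderId, applyUpdate) {
  return {
    senderId,
    handleBroadcast(message) {
      if (message.sender === senderId) return false; // own save echoed back — ignore
      applyUpdate(message.html);                     // a peer's change — apply it
      return true;
    }
  };
}
```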

Tailwind CSS On-Save Compilation

The tailwind-hyperclay library provides four functions used across the codebase:

Function | Purpose
hasTailwindLink(html, siteName) | Checks if HTML contains <link href="/tailwindcss/{siteName}.css">
hasAnyTailwindLink(html) | Checks if HTML contains any Tailwind link (for copies/renames)
compileTailwind(html) | Extracts classes from HTML, runs Tailwind v4 JIT, returns CSS string
replaceTailwindLink(html, oldName, newName) | Updates the link href when a site is renamed or copied

Compilation hooks into every operation that changes HTML content:

  • Save (saveHTML) — compile if hasTailwindLink
  • Sync upload (uploadSyncFile) — compile if hasTailwindLink
  • Backup restore (restoreBackup) — compile if hasTailwindLink
  • Site copy (copySiteComplete) — detect with hasAnyTailwindLink, replace link with replaceTailwindLink, then compile

Output is written to /public-assets/tailwindcss/{sitename}.css. The Express server serves this path with a fallback — if the file doesn’t exist yet (first load before any save), it returns empty CSS instead of a 404.

Domain Routing

Domain Classification

The classifyDomain function in state__meta determines the request type:

Domain | Classification
hyperclay.com or www.hyperclay.com | main_app
*.hyperclay.com | hyperclay_subdomain
Dev tunnel domain | dev_tunnel (treated like main_app)
Everything else | custom_domain

Subdomain Resolution

For Hyperclay subdomains, state__resource does:

  1. Check if the subdomain matches an instance name (a Node with sourceNodeId != null)
  2. If yes, load the instance, its source node, and its owner → ResourceType.INSTANCE
  3. If no, look for a regular site by name
  4. If the site has enableSignups = true → ResourceType.MULTI_TENANT, else → ResourceType.REGULAR_SITE

Custom Domain Resolution

For custom domains:

  1. Look up the exact hostname in the CustomDomain table (status must be active)
  2. If not found and there’s a subdomain, try the root domain (for instance subdomains on custom domains)
  3. Load the associated Node
  4. If the Node has enableSignups and there’s a subdomain, look for an instance → ResourceType.INSTANCE
  5. Otherwise serve the site directly

Custom Domain Lifecycle

Custom domains are managed through the SaasCustomDomains.com API:

  1. User adds a domain → POST to SaasCustomDomains API, creates CustomDomain record with status pending
  2. SaasCustomDomains provides DNS instructions (CNAME or TXT records)
  3. User updates DNS
  4. Webhook fires on DNS verification → status moves to active
  5. SSL is provisioned automatically
  6. Webhook fires on SSL issuance → sslStatus moves to issued

Wildcard domains (*.myapp.com) use DNS-01 challenge type for SSL. Per-user rate limiting caps API calls at 10 per minute.

The Data Extraction API

Any site’s HTML can be queried as structured data by appending ?data={...} to its URL. The extraction rules use a CSS-selector-based syntax.

Rule Types

String rules — Extract text or attributes:

".title" → text content of first .title element ".logo@src" → src attribute of first .logo element "@data-user-id" → attribute from root element ".tag[]" → array of text from all .tag elements

Array rules — Iterate over elements:

// [selector, shape] — returns array of shaped objects
[".product", { name: ".name", price: ".price" }]

// Result:
[{ name: "Widget A", price: "$99" }, { name: "Widget B", price: "$149" }]

Object rules — Compose nested structures:

{
  user: { name: ".user-name", role: ".user-role" },
  metrics: { revenue: ".revenue", orders: ".orders" },
  products: [".product", { name: ".name", price: ".price" }],
  tags: ".tag[]"
}

DOM Property Access

The @ prefix also supports DOM properties: @textContent, @innerHTML, @outerHTML, @value, @checked, and many more.

Relaxed JSON Parser

Since rules are passed as URL query parameters, the extraction endpoint includes a custom tokenizer that handles unquoted keys and CSS selectors (including pseudo-selectors with colons and attribute selectors with brackets) without requiring strict JSON escaping.

Caching

Results are cached in memory for 5 minutes per site + query combination. Cache entries are cleaned when the map exceeds 100 entries.
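A cache with those characteristics can be sketched as a Map with per-entry timestamps and a sweep on insert. The injectable clock is an addition for testability; the real implementation may differ:

```javascript
// Sketch: 5-minute TTL per site+query key; expired entries are swept
// once the map grows past 100 entries.
const TTL_MS = 5 * 60 * 1000;
const MAX_ENTRIES = 100;

function makeExtractionCache(now = Date.now) {
  const map = new Map();
  return {
    get(key) {
      const entry = map.get(key);
      if (!entry || now() - entry.at > TTL_MS) return undefined; // miss or expired
      return entry.value;
    },
    set(key, value) {
      if (map.size > MAX_ENTRIES) {
        for (const [k, e] of map) {
          if (now() - e.at > TTL_MS) map.delete(k); // clean expired entries
        }
      }
      map.set(key, { value, at: now() });
    }
  };
}
```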

File Uploads

Storage

Uploads are stored directly on the filesystem at /uploads/{username}/{folder_path}/{filename}. The Node model tracks them in the database for folder organization, with parentId linking to a folder Node.

Processing

Upload handling uses Formidable for multipart parsing. Files pass through validation:

  • Filename sanitization via sanitize-filename
  • Extension-based type detection via mime-types
  • Size limits: 20MB for JSON bodies, 5MB for text bodies, 2MB for URL-encoded forms

Editable Files

Text-based uploads (json, md, htm, html, css, js, jsx, svg) can be opened in the code editor at /edit/uploads/{username}/{path}/{filename}. Saving routes to a separate save-upload endpoint.

Folder Management

Folders are Node records with type: 'folder'. They support:

  • 5 levels of nesting (enforced by a beforeValidate hook)
  • Breadcrumb navigation via the path field (stores the ancestor path)
  • Move operations between folders (with path recalculation for all descendants)
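Moving a folder means rewriting the path prefix on every descendant. A sketch over plain objects with { id, path } fields — an illustration of the recalculation, not the actual move handler:

```javascript
// Sketch: when a folder moves, its own path is replaced and every
// descendant's ancestor path gets the old prefix swapped for the new one.
function moveFolder(nodes, folderId, newParentPath) {
  const folder = nodes.find(n => n.id === folderId);
  const oldPrefix = `${folder.path}/${folder.id}`; // descendants' paths start here
  folder.path = newParentPath;
  const newPrefix = `${newParentPath}/${folder.id}`;
  for (const n of nodes) {
    if (n.path === oldPrefix || n.path.startsWith(oldPrefix + '/')) {
      n.path = newPrefix + n.path.slice(oldPrefix.length);
    }
  }
  return nodes;
}
```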

Payment Integration

Stripe Checkout Flow

  1. Unauthenticated user submits email on the pricing page
  2. Server creates a Stripe checkout session with the email
  3. User completes payment on Stripe’s hosted page
  4. Stripe webhook (/stripe-webhook) fires with raw body (parsed before Express JSON middleware)
  5. Server verifies the webhook signature
  6. On checkout.session.completed: create Person with hasActiveSubscription: true and an EmailConfirmToken
  7. User follows the confirmation link → sets username and password → account is ready

Site Limits

const BASE_SITE_LIMIT = 15;  // Starting number of apps
const APPS_PER_MONTH = 3;    // Additional apps per month
const MAX_SITE_LIMIT = 140;  // Maximum cap

// For active subscribers:
const monthsActive = differenceInMonths(new Date(), person.createdAt);
const siteLimit = Math.min(BASE_SITE_LIMIT + (monthsActive * APPS_PER_MONTH), MAX_SITE_LIMIT);

// Superadmins: unlimited
// Non-subscribers: 15

Billing Portal

Subscription management is a single redirect to Stripe’s hosted billing portal — no custom UI needed.

The Code Editor

The /edit route serves a CodeMirror 6 editor via an Edge.js template. The editor loads the site’s current HTML, provides syntax highlighting, and integrates with the live-sync system.

Access Control

Only dev users who own the site can access the editor. Share guests are elevated to app_user, which is explicitly forbidden from the code editor (app_user:*:edit returns 403). This is intentional — share access grants save permission (for the visual editor) but not code editing.

'dev:regular_site+multi_tenant+instance:edit': [requireOwnership, serveCodeEditor],
'app_user:regular_site+multi_tenant+instance:edit': [forbidden],

Upload Editing

Text-based uploads get a separate editor route at /edit/uploads/{username}/{path}. Ownership is checked by comparing the URL’s username to the logged-in user’s username.
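The check itself reduces to a string comparison between the URL segment and the session. A sketch with illustrative names:

```javascript
// Sketch of the upload-editor ownership check: the {username} segment of
// /edit/uploads/{username}/{path} must match the logged-in user exactly.
// Function and argument names are illustrative.
function canEditUpload(urlUsername, sessionUsername) {
  // Missing session (no login) or any mismatch fails closed
  return Boolean(sessionUsername) && urlUsername === sessionUsername;
}
```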

Security Model

Declarative Access Control

Security is structural. The routing table is the access control policy:

// Only devs who own the site can edit
'dev:regular_site:edit': [requireOwnership, serveCodeEditor]

// Nobody without auth can edit anything
'no_auth:*:edit': [redirect('/login?...')]

// App users can save (their instances), but not edit code
'app_user:regular_site+instance:save': [requireOwnership, requireEmailConfirmForAppUser, saveHTML, respond]
'app_user:regular_site+instance:edit': [forbidden]

Every request that doesn’t match a routing key gets a 403. There’s no default-allow behavior.
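The lookup can be sketched as a pure function over the table. The wildcard fallback order is an assumption about how the real dispatcher resolves keys:

```javascript
// Default-deny dispatch over a routing table keyed "role:siteType:action".
// Handler names are placeholders standing in for middleware chains.
const routes = {
  'dev:regular_site:edit': ['requireOwnership', 'serveCodeEditor'],
  'no_auth:*:edit': ['redirectToLogin'],
  'app_user:regular_site+instance:save': ['requireOwnership', 'saveHTML'],
  'app_user:regular_site+instance:edit': ['forbidden'],
};

function resolveHandlers(role, siteType, action) {
  return (
    routes[`${role}:${siteType}:${action}`] || // exact match first
    routes[`${role}:*:${action}`] ||           // role-level wildcard
    ['forbidden']                              // no match: 403, never allow
  );
}
```

The security property falls out of the last line: forgetting to add a routing key can only ever deny access, never grant it.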

Rate Limiting

The createRateLimiter factory produces configurable middleware:

  • Login: 20 attempts per 15 minutes, keyed on sha256(IP + email), resets on success
  • API: 100 requests per 15 minutes, keyed on IP only
  • Strict (sensitive operations): 5 attempts per hour

Rate limiters use in-memory Maps with periodic cleanup.
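A sketch of what such a factory might look like, with an injectable clock to keep it testable. Option names are assumptions:

```javascript
// Map-backed sliding-window limiter in the spirit of createRateLimiter.
// Each key stores its recent hit timestamps; old hits fall out of the window.
function createRateLimiter({ max, windowMs, now = Date.now }) {
  const hits = new Map(); // key -> array of hit timestamps

  return function isAllowed(key) {
    const cutoff = now() - windowMs;
    const recent = (hits.get(key) || []).filter(t => t > cutoff);
    if (recent.length >= max) {
      hits.set(key, recent);
      return false; // over the limit for this window
    }
    recent.push(now());
    hits.set(key, recent);
    return true;
  };
}
```

For the login limiter, the key would be sha256(IP + email) so one attacker can't lock out an email from a different address, and a successful login clears the entry.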

Password Security

  • Bcrypt with 10 salt rounds
  • 8-character minimum enforced server-side
  • Checked against a top-100k common passwords list (loaded from PwnedPasswordsTop100k.json at startup)
  • Timing-safe comparison: on failed lookup, a dummy bcrypt compare runs to prevent email enumeration via response timing
Cookie Flags

  • auth_token: httpOnly, secure, SameSite=lax, domain=.hyperclay.com, 30-day expiry
  • auth_token (custom domain): httpOnly, secure, SameSite=none, scoped to root domain
  • isAdminOfCurrentResource: secure (readable by client JS for UI state)
  • share_{sitename}: httpOnly, secure, SameSite=lax, 1-year expiry

Additional Protections

  • Sequelize parameterized queries prevent SQL injection
  • Stripe webhook signature verification (raw body, not parsed JSON)
  • Node name validation blocks reserved words and special characters
  • File upload sanitization via sanitize-filename
  • Upload size limits (20MB/5MB/2MB by content type)

Deployment

Production

// ecosystem.config.js
{
  name: 'hyperclay',
  script: './hey.js',
  instances: 1,
  autorestart: true,
  max_memory_restart: '1G'
}

A single PM2 instance. No cluster mode — the filesystem-based architecture means a single process avoids file locking complexity. The 1GB memory limit triggers an automatic restart if the process leaks.

Development

Dev mode runs four PM2 processes:

  • hyperclay: Express server with file watching on hey.js and server-lib/
  • tailwind-dev: PostCSS watcher compiling the admin UI’s Tailwind CSS
  • backup-dev: Periodic local backup script
  • malleabledocs-dev: Cloudflare tunnel for external access during development

Database

PostgreSQL with Sequelize ORM. Connection pool: min 2, max 10, 30-second acquire timeout, 10-second idle timeout. Models auto-sync on startup (no manual migrations needed in development).
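Those pool settings map directly onto Sequelize's constructor options. A sketch, with the connection string as a placeholder:

```javascript
const { Sequelize } = require('sequelize');

const sequelize = new Sequelize(process.env.DATABASE_URL, {
  dialect: 'postgres',
  pool: {
    min: 2,         // keep two warm connections
    max: 10,        // cap on concurrent connections
    acquire: 30000, // ms to wait for a free connection before erroring
    idle: 10000     // ms before an idle connection is released
  }
});

// Models auto-sync on startup in development:
// await sequelize.sync();
```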

Graceful Shutdown

The server listens for SIGTERM and closes the HTTP server cleanly, letting in-flight requests complete before exiting:

process.on('SIGTERM', () => {
  server.close(() => process.exit(0));
});

The Philosophy

Hyperclay works because it embraces constraints. Single HTML file per app limits complexity. DOM-as-database eliminates impedance mismatch. Direct manipulation means what you see is literally what gets saved.

The trade-offs are real:

  • No complex queries — You can’t JOIN across sites or SELECT from the HTML. The Data Extraction API gives you CSS-selector-based reads, but nothing like SQL.
  • Entire HTML in memory — Every save reads and writes the full file. Sites with hundreds of thousands of DOM nodes will hit performance limits.
  • Last-save-wins — There’s no conflict resolution. The live-sync system helps, but two simultaneous saves mean the last one overwrites the first.

What it gains:

  • Radical simplicity — The entire backend is one Express app with one routing table. No API layer, no ORM for content, no frontend build step.
  • Instant persistence — Save and it’s live. No deploy step, no cache invalidation, no eventual consistency.
  • True ownership — Users own their HTML files. They can download, modify, and re-upload them. The platform adds features on top; it doesn’t lock content away.