Architecture & Dataflow
How Gatekeeper is built
A layered, security-first architecture designed for tenant isolation, data sovereignty, and operational transparency. Each layer has clear responsibilities and explicit security boundaries.
Client Layer
All user interaction runs in the browser. No native apps required. Static assets served from CDN edge.
CDN / Edge Layer
The edge layer handles all inbound HTTPS traffic: it terminates TLS, enforces security headers, and forwards requests to the API.
API Layer
All API access is authenticated. JWT tokens include tenant context. Every database query is governed by Postgres RLS policies.
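The tenant-scoped JWT check described above can be sketched with nothing but the standard library. This is an illustrative HS256-style sign/verify round trip, not Gatekeeper's actual implementation; the `tenant_id` claim name comes from the page, while the function names and key handling are assumptions.

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    # JWT segments are base64url without padding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def _b64url_decode(segment: str) -> bytes:
    # Restore the stripped padding before decoding.
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

def mint_token(claims: dict, secret: bytes) -> str:
    """Sign a token carrying the tenant context (illustrative helper)."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps(claims).encode())
    sig = hmac.new(secret, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    return f"{header}.{payload}.{_b64url(sig)}"

def extract_tenant(token: str, secret: bytes) -> str:
    """Verify the signature, then return the tenant_id claim."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    expected = hmac.new(
        secret, f"{header_b64}.{payload_b64}".encode(), hashlib.sha256
    ).digest()
    if not hmac.compare_digest(expected, _b64url_decode(sig_b64)):
        raise PermissionError("invalid signature")
    return json.loads(_b64url_decode(payload_b64))["tenant_id"]
```

Because the tenant context rides inside the signed payload, a client cannot change it without invalidating the signature, which is what makes the claim trustworthy for downstream RLS filtering.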
Data Layer
Data is isolated per tenant at the database level using Postgres RLS. The credentials vault uses per-tenant envelope encryption — keys are never exposed to the application layer.
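Envelope encryption, as used by the credentials vault, wraps a fresh per-secret data key with a per-tenant master key, so the application only ever handles wrapped keys. The sketch below models the shape of that scheme with a stdlib-only HMAC-SHA256 counter-mode keystream as a stand-in for AES; all names, field layouts, and the cipher itself are assumptions, not the vault's real construction.

```python
import hashlib
import hmac
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # HMAC-SHA256 in counter mode; a stdlib stand-in for AES-CTR/GCM.
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(4, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def _xor(data: bytes, key: bytes, nonce: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, _keystream(key, nonce, len(data))))

def envelope_encrypt(plaintext: bytes, tenant_master_key: bytes) -> dict:
    data_key = os.urandom(32)      # fresh data key per secret
    nonce = os.urandom(12)
    wrap_nonce = os.urandom(12)
    return {
        "ciphertext": _xor(plaintext, data_key, nonce),
        "nonce": nonce,
        # Only the *wrapped* data key is stored alongside the ciphertext.
        "wrapped_key": _xor(data_key, tenant_master_key, wrap_nonce),
        "wrap_nonce": wrap_nonce,
    }

def envelope_decrypt(record: dict, tenant_master_key: bytes) -> bytes:
    data_key = _xor(record["wrapped_key"], tenant_master_key, record["wrap_nonce"])
    return _xor(record["ciphertext"], data_key, record["nonce"])
```

The design consequence is the one the page states: rotating or revoking a tenant's master key invalidates every wrapped data key for that tenant without touching any other tenant's data.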
Agent Layer
Gatekeeper agents run inside your network and make outbound connections only. Scan results are processed locally and optionally synced. Credentials never leave the agent host.
Each layer communicates only with the layers immediately above and below it; no layer can reach components beyond its neighbours. This limits the blast radius if any single component is compromised.
Scan dataflow
What happens when a network scan runs
Every step in the scan pipeline is designed to process sensitive data locally and transmit only the minimum necessary telemetry to the cloud layer.
Network scan initiated
Admin triggers a scan from the web UI or agent schedule. Scan parameters are sent to the agent via the encrypted command channel.
Agent scans local subnet
The on-prem agent discovers devices using ICMP ping, TCP port probes, SNMP, and OS fingerprinting. All processing happens locally.
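One of the discovery techniques above, the TCP port probe, can be sketched in a few lines. This is a minimal illustration of the idea, not the agent's scanner; the function names, timeout, and port list are assumptions.

```python
import socket

def probe_tcp(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Refused, timed out, or unreachable: treat the port as closed.
        return False

def sweep(host: str, ports) -> list:
    """Return the subset of ports that accepted a connection."""
    return [p for p in ports if probe_tcp(host, p)]
```

A real agent would run probes concurrently and combine them with ICMP, SNMP, and fingerprinting signals, but the per-port decision is essentially this connect-and-observe check.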
Results enriched locally
MAC vendor lookup, CVE cross-referencing, and CPE matching run inside the agent. Raw credentials and SNMP strings never leave the host.
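The MAC vendor lookup step amounts to matching the first three octets (the OUI) against a vendor table held on the agent. The sketch below shows that shape; the table entries and vendor names here are entirely made up for illustration, not real OUI assignments.

```python
# Tiny illustrative sample; a real agent ships a full OUI database.
OUI_VENDORS = {
    "00:1A:2B": "Example Networks",
    "AA:BB:CC": "Acme Devices",
}

def vendor_for_mac(mac: str) -> str:
    """Map a MAC address to its vendor via the first three octets."""
    prefix = mac.upper()[:8]  # "AA:BB:CC" is 8 characters
    return OUI_VENDORS.get(prefix, "Unknown")
```

Because the table lives on the agent, the lookup adds vendor context without any network call, which is what keeps this enrichment step fully local.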
Telemetry posted to API
Enriched, anonymised telemetry (device type, OS, status, CVEs) is posted to the Supabase API over TLS 1.3. Raw scan data is retained on-prem only.
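The minimum-necessary-telemetry rule is easiest to enforce as an allow-list applied before anything is posted. The field names below are illustrative guesses at the record shape, not Gatekeeper's actual schema.

```python
# Only allow-listed fields ever leave the host; everything else,
# including anything sensitive, stays on-prem by construction.
ALLOWED_FIELDS = {"device_type", "os", "status", "cves", "mac_vendor"}

def to_telemetry(device: dict) -> dict:
    """Shape a local scan record into the payload sent to the API."""
    return {k: v for k, v in device.items() if k in ALLOWED_FIELDS}
```

An allow-list is the safer default here: a newly added sensitive field is dropped automatically, whereas a deny-list would leak it until someone remembered to block it.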
API validates & stores
API authenticates the request, checks the RLS context for the sending agent, and writes to the tenant-isolated database partition.
Real-time push to UI
Updated records are pushed via Supabase Realtime WebSocket to all authenticated browser sessions for the tenant. No polling required.
Deployment topology
Dataflow per deployment model
The data path varies by deployment model. All models share the same security controls; the difference is where each component runs.
Cloud SaaS
Agent optional. Data stored in your chosen region. No on-prem infrastructure required.
Hybrid
Raw scans stay on-prem. Enriched telemetry flows to cloud. Credentials never transmitted.
On-Premise
All components run inside your datacenter. Zero external connectivity required for operation.
Air-Gapped
Zero telemetry. No external calls. Manual update procedure. Suitable for classified environments.
Data protection
Encryption, access control, and tenant isolation
Data protection is enforced at every boundary: storage, transit, access, and tenant isolation.
At Rest
- Postgres database: AES-256 encryption at the storage layer
- Credentials vault: envelope encryption with per-tenant keys
- File storage: AES-256 server-side encryption
- Backups: encrypted with daily rotation, 30-day retention
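The 30-day backup retention above reduces to a simple pruning rule. The window comes from the list; the function and variable names are illustrative.

```python
import datetime

RETENTION = datetime.timedelta(days=30)  # 30-day retention from the policy above

def backups_to_prune(backup_dates, now):
    """Return the backup timestamps that have aged out of the window."""
    return [d for d in backup_dates if now - d > RETENTION]
```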
In Transit
- Browser ↔ CDN: TLS 1.3 (min), ECDHE key exchange
- Agent ↔ API: TLS 1.3, certificate pinning optional
- Internal services: traffic confined to VPC private subnets and encrypted in transit
- Realtime WebSocket: WSS (TLS 1.3) at all times
Access Control
- Every API query filtered by Postgres RLS per tenant
- JWT carries tenant_id — cannot be forged without the signing key
- MFA enforced for all admin roles
- SAML 2.0 / OIDC SSO supported for enterprise tenants
Tenant Isolation
- No unscoped tables: every table carries an org_id column
- RLS policies restrict every row to the org_id of the authenticated session
- Master accounts are limited to cross-tenant, read-only access
- Agent tokens scoped to a single organisation
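Conceptually, the tenant-isolation guarantee above is equivalent to this filter being applied to every query. This is a mental model only; the real enforcement happens in Postgres RLS policies, not application code.

```python
def visible_rows(rows, caller_org_id):
    """Model of row-level tenant isolation: a caller only ever sees
    rows whose org_id matches their own organisation."""
    return [r for r in rows if r["org_id"] == caller_org_id]
```

Putting this filter in the database rather than the application means a buggy or compromised API handler still cannot read another tenant's rows.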
Questions about the architecture?
Our security team can walk you through the architecture in detail, provide network diagrams under NDA, and answer questions about specific compliance requirements.
