Airbase Architecture

Understanding how Airbase works under the hood

This explanation provides a conceptual understanding of Airbase's architecture, how components interact, and design decisions that shape the platform.


What is Airbase?

Airbase is a modern deployment platform for Singapore Government developers, providing a Vercel-like experience for deploying applications to the Government Commercial Cloud (GCC).

Key features:

  • Simple deployment workflow - Build and deploy with 2 commands
  • Zero infrastructure knowledge - No Kubernetes or cloud expertise required
  • Security and compliance built-in - Government-grade security by default
  • Zero-ops maintenance - Platform team handles all infrastructure

Core principle: Developers focus on building applications; Airbase handles everything else.


High-Level Architecture

┌─────────────┐
│  Developer  │
└──────┬──────┘
       │ airbase CLI
┌─────────────────────┐
│   Airbase API       │
│   (REST/tRPC)       │
└──────┬──────────────┘
       ├──→ Container Registry
       │    (Store images)
       └──→ Kubernetes (AWS EKS)
            ├─ ArgoCD (GitOps)
            ├─ Ingress (HTTPS)
            └─ Pods (Containers)

Flow:

  1. Developer builds container locally
  2. CLI pushes image to registry via API
  3. CLI triggers deployment via API
  4. API updates GitOps repository
  5. ArgoCD syncs changes to Kubernetes
  6. Kubernetes creates/updates pods
  7. Ingress exposes application via HTTPS


Core Components

1. Airbase CLI

Purpose: Developer interface to Airbase

What it does:

  • Authenticates users
  • Builds container images locally
  • Pushes images to registry
  • Triggers deployments
  • Manages environment variables

Technology: Node.js (current), Go (future)

Key commands:

airbase login          # Authenticate
airbase container build   # Build image
airbase container deploy  # Deploy to Kubernetes

Design choice: CLI runs locally to keep Docker builds fast and use developer's machine resources.

2. Airbase API

Purpose: Backend service orchestrating deployments

What it does:

  • Authenticates CLI requests
  • Manages project metadata
  • Coordinates container registry
  • Updates GitOps repository
  • Tracks deployment status

Technology: tRPC (current), REST API (future)

Security:

  • Token-based authentication
  • Project-level authorization
  • Audit logging

Design choice: Centralized API ensures consistent deployment logic and security policies.

3. Container Registry

Purpose: Store and distribute container images

What it does:

  • Receives images from CLI
  • Stores images securely
  • Provides images to Kubernetes
  • Manages image lifecycle

Technology: AWS ECR (Elastic Container Registry)

Access control:

  • CLI pushes via API credentials
  • Kubernetes pulls via service account
  • Images scoped per project

Design choice: Managed registry (ECR) provides reliability, security, and integration with AWS infrastructure.

4. Kubernetes Cluster (AWS EKS)

Purpose: Run and orchestrate containers

What it does:

  • Schedules containers on nodes
  • Manages container lifecycle
  • Handles scaling and restarts
  • Provides networking

Technology: AWS Elastic Kubernetes Service (EKS)

Configuration:

  • Multi-node cluster for high availability
  • Auto-scaling enabled
  • Security groups for network isolation
  • IAM roles for AWS service integration

Design choice: Kubernetes provides battle-tested container orchestration. EKS provides managed control plane.

5. ArgoCD (GitOps)

Purpose: Declarative deployment management

What it does:

  • Watches GitOps repository for changes
  • Syncs desired state to Kubernetes
  • Provides deployment history
  • Enables rollbacks

Technology: ArgoCD

GitOps workflow:

  1. API updates application manifest in Git
  2. ArgoCD detects change
  3. ArgoCD applies manifest to Kubernetes
  4. Kubernetes creates/updates resources

Design choice: GitOps provides:

  • Audit trail (Git history)
  • Declarative configuration
  • Easy rollbacks
  • Disaster recovery

6. Ingress Controller

Purpose: Route HTTPS traffic to applications

What it does:

  • Terminates TLS/SSL
  • Routes requests to correct pods
  • Handles load balancing
  • Manages SSL certificates

Technology: AWS Application Load Balancer (ALB) + Ingress controller

URL patterns:

  • Default environment: https://project-name.app.tc1.airbase.sg
  • Named environment: https://environment--project-name.app.tc1.airbase.sg

Design choice: Managed load balancer provides reliability and automatic SSL certificate management.
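The URL patterns above can be pictured as a small helper function. This is purely illustrative, written in the CLI's language (TypeScript); `appUrl` is an assumed name, not part of any actual Airbase API.

```typescript
// Hypothetical helper illustrating Airbase's URL patterns
// (assumed name; not part of the platform's actual code).
function appUrl(project: string, environment?: string): string {
  // Named environments are prefixed with "environment--";
  // the default environment uses the bare project name.
  const subdomain = environment ? `${environment}--${project}` : project;
  return `https://${subdomain}.app.tc1.airbase.sg`;
}
```

For example, `appUrl("demo")` yields the default-environment URL, while `appUrl("demo", "staging")` yields the staging URL shown in the patterns above.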


Deployment Flow (Detailed)

Step 1: Build Container Locally

airbase container build

What happens:

  1. CLI reads Dockerfile from current directory
  2. CLI calls local Docker daemon
  3. Docker builds image using base images
  4. Docker tags image with Airbase-specific tag
  5. Image stored in local Docker cache

Location: Developer's machine

Why local: Faster builds, uses developer's CPU/memory, no upload of source code to remote build service.

Step 2: Push to Registry

What happens:

  1. CLI calls Airbase API for registry credentials
  2. API generates temporary upload token
  3. CLI pushes image to registry using token
  4. Registry stores image with project-scoped path

Location: AWS ECR

Image path: registry.tc1.airbase.sg/team-name/project-name:image-tag

Security: Temporary credentials, project isolation.

Step 3: Trigger Deployment

airbase container deploy --yes staging

What happens:

  1. CLI sends deployment request to API
  2. API validates request (authentication, authorization)
  3. API reads project configuration (handle, port, instance type)
  4. API reads environment variables from CLI
  5. API generates Kubernetes manifest
  6. API updates GitOps repository with new manifest

API-generated manifest example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: project-name-staging
spec:
  replicas: 1
  selector:
    matchLabels:
      app: project-name-staging
  template:
    metadata:
      labels:
        app: project-name-staging
    spec:
      containers:
      - name: app
        image: registry.tc1.airbase.sg/team/project:tag
        ports:
        - containerPort: 3000
        env:
        - name: PORT
          value: "3000"
        resources:
          requests:
            cpu: "250m"
            memory: "512Mi"
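A hypothetical sketch of how the API might assemble such a manifest from project configuration. Field and type names here are assumptions for illustration; this is not Airbase's actual implementation.

```typescript
// Illustrative only: builds a Deployment manifest object like the
// example above from project configuration. All names are assumed.
interface DeployConfig {
  project: string;     // project handle
  environment: string; // e.g. "staging"
  image: string;       // full registry path with tag
  port: number;        // container port
  cpu: string;         // e.g. "250m"
  memory: string;      // e.g. "512Mi"
}

function generateManifest(cfg: DeployConfig) {
  const name = `${cfg.project}-${cfg.environment}`;
  return {
    apiVersion: "apps/v1",
    kind: "Deployment",
    metadata: { name },
    spec: {
      replicas: 1,
      selector: { matchLabels: { app: name } },
      template: {
        metadata: { labels: { app: name } },
        spec: {
          containers: [
            {
              name: "app",
              image: cfg.image,
              ports: [{ containerPort: cfg.port }],
              env: [{ name: "PORT", value: String(cfg.port) }],
              resources: {
                requests: { cpu: cfg.cpu, memory: cfg.memory },
              },
            },
          ],
        },
      },
    },
  };
}
```

The resulting object, serialized to YAML, would match the manifest shown above; committing it to the GitOps repository is what triggers the ArgoCD sync in the next step.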

Step 4: ArgoCD Sync

What happens:

  1. ArgoCD polls GitOps repository (every 3 minutes)
  2. ArgoCD detects manifest change
  3. ArgoCD calculates diff between desired and current state
  4. ArgoCD applies changes to Kubernetes

Sync time: Usually 1-3 minutes after API update

Step 5: Kubernetes Deployment

What happens:

  1. Kubernetes Deployment controller creates new ReplicaSet
  2. Kubernetes schedules pods on available nodes
  3. Kubelet pulls container image from registry
  4. Kubelet starts container
  5. Container passes readiness checks
  6. Old pods are terminated (rolling update)

Rolling update: New pods start before old pods stop. (With the current single replica per environment, a brief interruption is still possible; see Design Decisions & Tradeoffs.)

Step 6: Ingress Configuration

What happens:

  1. Ingress controller detects new service
  2. ALB configures routing rules
  3. SSL certificate provisioned (if new subdomain)
  4. Traffic routes to new pods

Propagation: 30-60 seconds for DNS and ALB updates.

Step 7: Application Live

Result: Application accessible at HTTPS URL

https://staging--project-name.app.tc1.airbase.sg

End-to-end time: 2-4 minutes from deploy command to live application.


Networking Architecture

External Access

Internet
   ↓
AWS Route 53 (DNS)
   ↓
*.app.tc1.airbase.sg → ALB
   ↓
Ingress Controller
   ↓
Kubernetes Service
   ↓
Pods (Containers)

DNS Resolution

Pattern: <subdomain>.app.tc1.airbase.sg

Examples:

  • demo.app.tc1.airbase.sg → Default environment
  • staging--demo.app.tc1.airbase.sg → Staging environment

DNS management: Automated via AWS Route 53

Load Balancing

Strategy: Round-robin across healthy pods

Health checks:

  • TCP check on container port
  • HTTP check on / (if application responds)
  • Readiness probe (Kubernetes-level)

Unhealthy pods: Automatically removed from rotation
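The round-robin-over-healthy-pods behaviour can be illustrated with a minimal sketch. This is purely conceptual; the real routing happens inside the ALB, not in application code, and all names here are assumptions.

```typescript
// Illustrative sketch of round-robin load balancing across healthy
// pods, with unhealthy pods removed from rotation.
interface Pod {
  name: string;
  healthy: boolean;
}

function makeBalancer(pods: Pod[]) {
  let next = 0;
  return (): Pod | undefined => {
    // Unhealthy pods are filtered out of the rotation entirely.
    const healthy = pods.filter((p) => p.healthy);
    if (healthy.length === 0) return undefined; // nothing can serve traffic
    const pod = healthy[next % healthy.length];
    next += 1;
    return pod;
  };
}
```

With pods a (healthy), b (unhealthy), and c (healthy), successive picks cycle a, c, a, ... — b never receives traffic until its health check passes again.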

SSL/TLS

Certificates: AWS Certificate Manager (ACM)

Provisioning: Automatic for *.app.tc1.airbase.sg

Protocol: TLS 1.2 and 1.3 only

Termination: At ALB (not at container)


Security Architecture

Authentication & Authorization

User authentication:

  • OAuth via Airbase Console
  • Token stored locally (~/.airbase/credentials)
  • Token sent with all API requests

Project authorization:

  • Users have project-level permissions
  • API validates user can access project
  • Kubernetes RBAC enforces isolation
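At its core, project-level authorization is a membership check: the API confirms the requesting user belongs to the target project before acting. A hypothetical sketch (names assumed; the real API logic is more involved):

```typescript
// Illustrative sketch of a project-level authorization check.
// Type and function names are assumptions, not Airbase's actual code.
interface User {
  id: string;
  projects: string[]; // projects the user is a member of
}

function canAccessProject(user: User, project: string): boolean {
  return user.projects.includes(project);
}
```

A deployment request for a project the user is not a member of would be rejected by a check like this before any registry or GitOps operation occurs.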

Network Security

Ingress:

  • Only HTTPS traffic allowed (443)
  • HTTP (80) redirects to HTTPS
  • Security groups restrict traffic

Egress:

  • Containers can access internet
  • AWS service endpoints
  • Restricted outbound ports

Pod-to-pod:

  • Network policies enforce isolation
  • Pods can't access other projects' pods

Container Security

Image scanning:

  • Base images regularly scanned
  • Vulnerabilities tracked
  • Security patches applied weekly

Runtime security:

  • Non-root containers (UID 999)
  • Read-only root filesystem (where possible)
  • Limited capabilities
  • Resource limits enforced

Secrets Management

Environment variables:

  • Encrypted at rest in API database
  • Injected at runtime via Kubernetes secrets
  • Never logged or exposed

Access control:

  • Only project members can view/update
  • Audit trail for all changes


Resource Management

Instance Types

Type      vCPU   Memory   Ephemeral Storage   Use Case
nano      0.25   500 MB   500 MB              Small apps
b.small   0.5    1 GB     500 MB              Standard apps

Implementation: Kubernetes resource requests/limits

Note: Additional instance types will be introduced at a later stage as we learn more about the workloads our users are bringing to the platform. In the meantime, our goal is to keep workloads small to better manage growing infrastructure costs.
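The instance types above map directly onto Kubernetes resource requests and limits. A hypothetical sketch of that mapping (names and exact Kubernetes quantity strings are assumptions for illustration):

```typescript
// Illustrative mapping from Airbase instance types to Kubernetes
// resource quantities. 0.25 vCPU = "250m"; memory strings use
// Kubernetes suffixes and are assumptions matching the table above.
const instanceTypes: Record<string, { cpu: string; memory: string }> = {
  nano: { cpu: "250m", memory: "500M" },
  "b.small": { cpu: "500m", memory: "1G" },
};

function resourcesFor(type: string) {
  const spec = instanceTypes[type];
  if (!spec) throw new Error(`Unknown instance type: ${type}`);
  // Requests and limits set to the same values caps usage at the
  // instance size: CPU is throttled, memory overuse is OOMKilled.
  return { requests: spec, limits: spec };
}
```

Setting requests equal to limits (the "Guaranteed" QoS pattern in Kubernetes) matches the enforcement behaviour described under Resource Limits below.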

Scaling

Current: Single replica per environment

Future: Horizontal pod autoscaling based on:

  • CPU utilization
  • Memory utilization
  • Custom metrics

Resource Limits

Enforced limits:

  • CPU: As per instance type
  • Memory: As per instance type
  • Storage: Ephemeral only (no persistent volumes yet)

Behavior on limit:

  • CPU: Throttled
  • Memory: Pod restarted (OOMKilled)


Reliability & Availability

High Availability

Kubernetes control plane:

  • Multi-AZ deployment
  • Managed by AWS (EKS)
  • 99.95% SLA

Worker nodes:

  • Distributed across availability zones
  • Auto-scaling group
  • Automatic replacement on failure

Disaster Recovery

GitOps repository:

  • All deployments stored in Git
  • Easy rollback to previous state
  • Disaster recovery: Restore from Git

Container images:

  • Stored in S3 (via ECR)
  • 99.999999999% durability
  • Cross-region replication

Monitoring

Infrastructure:

  • Kubernetes metrics
  • Node health
  • Pod health

Applications:

  • Container logs (CloudWatch)
  • Application metrics (future)
  • Error tracking (future)

Updates & Maintenance

Platform updates:

  • Kubernetes version upgrades
  • Node replacement
  • Zero-downtime rolling updates

User impact: None during platform maintenance


Design Decisions & Tradeoffs

1. Local Docker Builds

Decision: Build containers on developer's machine

Rationale:

  • Faster builds (local CPU)
  • No source code upload
  • Uses Docker layer caching
  • Familiar workflow

Tradeoff: Requires Docker installed locally

2. GitOps (ArgoCD)

Decision: Use GitOps for deployments

Rationale:

  • Audit trail in Git
  • Declarative configuration
  • Easy rollbacks
  • Disaster recovery

Tradeoff: 1-3 minute sync delay

3. Single Replica

Decision: One pod per environment (currently)

Rationale:

  • Simpler for developers
  • Cost-effective for small apps
  • Sufficient for most use cases

Tradeoff: Brief downtime during deployments (future: zero-downtime with multiple replicas)

4. Managed Services (AWS)

Decision: Use AWS managed services (EKS, ECR, ALB, ACM)

Rationale:

  • Reduced operational burden
  • Built-in reliability
  • Security compliance
  • Integration with government infrastructure

Tradeoff: Vendor lock-in, higher cost vs self-managed

5. Non-Root Containers

Decision: Enforce non-root user (UID 999)

Rationale:

  • Security best practice
  • Government compliance
  • Limits blast radius of compromise

Tradeoff: Slight complexity in Dockerfile

6. Strict CSP

Decision: Enforce Content Security Policy: script-src 'self'

Rationale:

  • XSS prevention
  • Government security requirements
  • Secure-by-default

Tradeoff: Inline scripts not allowed, requires code changes


Technology Stack Summary

Component            Technology                    Purpose
CLI                  Node.js/TypeScript (→ Go)     Developer interface
API                  tRPC (→ REST)                 Backend coordination
Container Registry   AWS ECR                       Image storage
Orchestration        Kubernetes (EKS)              Container management
GitOps               ArgoCD                        Deployment automation
Load Balancer        AWS ALB                       Traffic routing
DNS                  AWS Route 53                  Domain management
Certificates         AWS ACM                       SSL/TLS
Logging              AWS CloudWatch                Log aggregation
Monitoring           Prometheus/Grafana            Metrics & dashboards

Comparison with Alternatives

Airbase vs Raw Kubernetes

Aspect           Airbase            Raw Kubernetes
Setup            None (managed)     Complex (cluster setup, ingress, monitoring)
Deployment       2 commands         Write YAML, apply manifests, manage secrets
Learning curve   Minimal            Steep
Maintenance      Platform team      Your team
Flexibility      Opinionated        Total control

Use Airbase when: You want simplicity and don't need Kubernetes-level control

Use Kubernetes when: You need advanced features like StatefulSets, custom controllers, specific networking

Airbase vs Heroku

Aspect       Airbase                   Heroku
Target       Singapore Government      General public
Security     Government-compliant      Standard
Location     Singapore (AWS)           Global
Cost         Internal charging         Public pricing
Buildpacks   No (use Dockerfiles)      Yes
Add-ons      Limited                   Extensive

Similarity: Both are PaaS platforms abstracting infrastructure

Difference: Airbase is container-based (Docker), government-focused


Future Roadmap

Planned Features

Horizontal scaling:

  • Multiple replicas per environment
  • Auto-scaling based on load

Persistent storage:

  • Persistent volumes for databases
  • Shared storage across replicas

Custom domains:

  • Bring your own domain
  • Custom SSL certificates

Advanced monitoring:

  • Application performance monitoring
  • Distributed tracing
  • Log aggregation UI

Database services:

  • Managed PostgreSQL
  • Managed Redis
  • Backup and restore

CI/CD integration:

  • GitHub Actions
  • GitLab CI
  • Automated deployments on push


See Also