See RavenFabric in action
Real terminal recordings from real infrastructure. Every command is E2E encrypted via Noise XX, policy-checked, and audited. The same static binary runs everywhere.
Multi-Node Ubuntu
Manage multiple Ubuntu systems remotely. Two agents connected through a relay broker — all traffic mutually authenticated and encrypted.
┌─────────────────┐ ┌─────────────────┐
│ rf-agent-1 │ │ rf-agent-2 │
│ Ubuntu 24.04 │ │ Ubuntu 24.04 │
│ token: agent1 │ │ token: agent2 │
└────────┬────────┘ └────────┬────────┘
│ WebSocket │ WebSocket
│ │
┌────┴────────────────────────┴────┐
│ rf-relay │
│ Ubuntu 24.04 │
│ :9091 (host-mapped) │
└────────────────┬─────────────────┘
│ port 9091
┌──────┴──────┐
│ rf CLI │
│ (your Mac) │
└─────────────┘
Containers
3 (1 relay + 2 agents)
Port
9091
Image
ubuntu:24.04
Encryption
Noise XX (E2E)
Setup
# Clone and build
git clone https://github.com/egkristi/RavenFabric.git
cd RavenFabric
cargo build --release -p rf-cli

# Start the demo (downloads agent/relay binaries automatically)
cd demos/multi-node-ubuntu
./setup.sh

# Execute commands on agents ($RELAY = relay host, typically the Docker host IP)
rf --relay ws://$RELAY:9091 exec --token agent1 'hostname && uname -a'
rf --relay ws://$RELAY:9091 exec --token agent2 'cat /etc/os-release | head -4'

# Teardown
./setup.sh teardown
Policy Denial
Apply a restrictive policy to see deny-by-default in action. Safe read-only commands pass; dangerous commands are blocked and audited.
# Run the policy denial scenario
./scenarios/12-policy-denial.sh

# Or do it manually — apply restrictive policy, restart agent
# Then test allowed vs denied commands:

# ALLOWED — read-only commands pass
rf --relay ws://$RELAY:9091 exec --token agent1 'hostname'
# > 4f2a1b3c9d7e
rf --relay ws://$RELAY:9091 exec --token agent1 'uname -a'
# > Linux 4f2a1b3c9d7e 6.x aarch64 GNU/Linux

# DENIED — destructive and network commands blocked
rf --relay ws://$RELAY:9091 exec --token agent1 'rm -rf /'
# > Error: command denied by policy
rf --relay ws://$RELAY:9091 exec --token agent1 'curl http://example.com'
# > Error: command denied by policy
rf --relay ws://$RELAY:9091 exec --token agent1 'apt install -y nmap'
# > Error: command denied by policy

# Every denial is recorded in the audit log
rf --relay ws://$RELAY:9091 exec --token agent1 'cat /var/log/rf-audit.jsonl | tail -3'
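RavenFabric's actual policy format isn't shown on this page, but the deny-by-default behavior above can be sketched as a simple allowlist matcher. This is a hypothetical illustration, not the real engine: a command is allowed only if it matches an explicit allow rule, and everything else falls through to denial.

```python
import re

# Hypothetical restrictive policy: allow only read-only commands.
# The real policy format may differ; the point is deny-by-default:
# no matching allow rule means the command is denied.
ALLOW_PATTERNS = [
    r"hostname",
    r"uname( -a)?",
    r"cat /etc/os-release.*",
]

def check(command: str) -> str:
    """Return 'allowed' only on an explicit full match; deny otherwise."""
    for pattern in ALLOW_PATTERNS:
        if re.fullmatch(pattern, command):
            return "allowed"
    return "denied"  # default: deny anything not explicitly allowed

print(check("hostname"))                 # allowed
print(check("rm -rf /"))                 # denied
print(check("curl http://example.com"))  # denied
```

Note that `rm -rf /` is denied not because a rule names it, but because nothing allows it; that is the "no match = denied" behavior the demo exercises.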
Audit Trail
Every action — allowed or denied — produces a structured JSON audit entry. Each agent maintains its own append-only log.
# Run the audit trail scenario
./scenarios/13-audit-trail.sh

# View the structured audit log (JSON-lines format)
docker exec rf-agent-1 tail -3 /var/log/rf-audit.jsonl
# > {"timestamp":"2026-05-09T...","command":"hostname","decision":"allowed",...}
# > {"timestamp":"2026-05-09T...","command":"uname -a","decision":"allowed",...}

# Count audit entries per agent
docker exec rf-agent-1 wc -l < /var/log/rf-audit.jsonl
# > 12
docker exec rf-agent-2 wc -l < /var/log/rf-audit.jsonl
# > 5

# Each agent has its own independent, append-only audit log
docker exec rf-agent-2 tail -1 /var/log/rf-audit.jsonl
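Because the log is plain JSON lines, it is trivial to post-process with standard tooling. A minimal Python sketch, using only the field names visible in the sample output above (any other fields are assumptions):

```python
import json
from collections import Counter

# Sample entries in the same JSON-lines shape shown in the demo output.
sample_log = """\
{"timestamp": "2026-05-09T10:00:00Z", "command": "hostname", "decision": "allowed"}
{"timestamp": "2026-05-09T10:00:05Z", "command": "uname -a", "decision": "allowed"}
{"timestamp": "2026-05-09T10:00:10Z", "command": "rm -rf /", "decision": "denied"}
"""

def summarize(log_text: str) -> Counter:
    """Count allowed vs denied decisions in a JSONL audit log."""
    return Counter(json.loads(line)["decision"]
                   for line in log_text.splitlines() if line.strip())

print(summarize(sample_log))  # Counter({'allowed': 2, 'denied': 1})
```

The same function works on a real log file via `summarize(open("/var/log/rf-audit.jsonl").read())`.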
Port Forwarding
SSH-style local port forwarding through Noise XX encrypted tunnels. Access remote services without firewall changes.
# Run the port forwarding scenario
./scenarios/14-port-forwarding.sh

# Start a web server on the remote agent
rf --relay ws://$RELAY:9091 exec --token agent1 \
  'python3 -m http.server 8000 --directory /tmp/www &'

# Forward local port to agent's web server
rf --relay ws://$RELAY:9091 forward --token agent1 \
  -L $LOCAL:8080 -R $LOCAL:8000

# Now access the agent's service locally
# curl http://$LOCAL:8080 → tunneled to agent1:8000

# Forwarding types:
#   Local:   -L $LOCAL:8080 → agent:8000
#   Reverse: --reverse agent:9000 → you:3000
#   SOCKS5:  --socks5 $LOCAL:1080 → agent → dest
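Conceptually, a local forward accepts connections on a local port and splices bytes to the remote service. A minimal, unencrypted sketch of that splice (RavenFabric additionally wraps the byte stream in a Noise XX tunnel through the relay, which this sketch deliberately omits):

```python
import socket
import threading

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way until EOF, then signal the peer."""
    try:
        while (data := src.recv(4096)):
            dst.sendall(data)
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def forward_once(local_port: int, remote_host: str, remote_port: int) -> None:
    """Accept one local connection and splice it to the remote service."""
    with socket.create_server(("127.0.0.1", local_port)) as srv:
        client, _ = srv.accept()
        remote = socket.create_connection((remote_host, remote_port))
        t = threading.Thread(target=pipe, args=(client, remote))
        t.start()
        pipe(remote, client)  # copy responses back concurrently
        t.join()
        client.close()
        remote.close()
```

A real forwarder loops on `accept()` and handles many connections; the one-shot version keeps the splice logic readable.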
Dev Mode (Zero-Setup)
One command starts a relay + agent in a single process. No Docker, no config files, no key exchange. Perfect for local development.
# Run the dev mode scenario
./scenarios/15-dev-mode.sh

# Start dev mode (relay + agent in one process)
rf dev
# RavenFabric Dev Mode
#   Relay: $LOCAL:9090
#   Token: dev

# In another terminal, execute commands instantly
rf exec --token dev 'hostname'
rf exec --token dev --stream 'for i in 1 2 3; do echo $i; sleep 1; done'

# Custom port and bind address
rf dev --port 8080
rf dev --port 8080 --bind 0.0.0.0

# Stop with Ctrl+C — clean shutdown, no orphans
Fleet Orchestration
Execute commands across multiple agents with YAML playbooks. Supports parallel, sequential, rolling, and canary strategies with automatic rollback.
# Run the fleet orchestration scenario
./scenarios/16-fleet-orchestration.sh

# Collect inventory from all agents
for token in agent1 agent2; do
  rf --relay ws://$RELAY:9091 exec --token $token 'hostname'
done

# Run a parallel playbook (all agents at once)
rf --relay ws://$RELAY:9091 playbook --token agent1 \
  playbooks/parallel-update.yaml

# Run a canary deploy (test 1 agent, then roll out)
rf --relay ws://$RELAY:9091 playbook --token agent1 \
  playbooks/canary-deploy.yaml

# Strategies: parallel | sequential | rolling | canary
# Rollback: automatic on failure (configurable)
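The canary strategy reduces to a simple control loop: deploy to one agent first, continue only if it succeeds, and roll back everything already touched on any failure. A hypothetical sketch of that loop, not the real playbook engine:

```python
from typing import Callable, List

def canary_deploy(agents: List[str],
                  deploy: Callable[[str], bool],
                  rollback: Callable[[str], None]) -> bool:
    """Deploy agent-by-agent, treating agents[0] as the canary.
    On any failure, roll back every agent deployed so far."""
    done: List[str] = []
    for agent in agents:
        if deploy(agent):
            done.append(agent)
        else:
            for a in reversed(done):  # automatic rollback, newest first
                rollback(a)
            return False
    return True

# Toy run: agent2 fails, so agent1 is rolled back.
log = []
ok = canary_deploy(
    ["agent1", "agent2"],
    deploy=lambda a: (log.append(f"deploy {a}"), a != "agent2")[1],
    rollback=lambda a: log.append(f"rollback {a}"),
)
print(ok, log)  # False ['deploy agent1', 'deploy agent2', 'rollback agent1']
```

Parallel, sequential, and rolling strategies differ only in how the loop schedules agents (all at once, one at a time, or in batches); the rollback bookkeeping stays the same.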
Human Approval for AI Agents
AI agents connect via MCP but high-risk operations require human approval. The operator approves or denies before execution proceeds.
# Run the human approval scenario
./scenarios/17-human-approval.sh

# AI requests approval via MCP tool
# rf_request_approval(
#   command: "psql -c 'ALTER TABLE users ADD COLUMN role TEXT'"
#   reason: "Adding role column for RBAC feature"
# ) → approval_id, status: PENDING

# Operator reviews and approves/denies
#   approve("a1b2c3d4-...") → APPROVED
#   deny("a1b2c3d4-...")    → DENIED

# AI polls: rf_check_approval(id) → APPROVED

# AI executes only if approved
rf --relay ws://$RELAY:9091 exec --token agent1 \
  'echo "Command executed after human approval"'

# Defense in depth: policy → approval → rate limit → audit
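The request/poll/execute flow can be sketched end to end. The tool names below are taken from the comments above; the in-memory store is a stand-in for whatever backend the real MCP server uses, and the function bodies are illustrative, not RavenFabric's implementation:

```python
import uuid

# Stand-in approval store; in RavenFabric this lives behind the MCP server.
_approvals: dict = {}

def rf_request_approval(command: str, reason: str) -> str:
    """AI side: register a pending approval, get back an id."""
    approval_id = str(uuid.uuid4())
    _approvals[approval_id] = "PENDING"
    return approval_id

def approve(approval_id: str) -> None:
    _approvals[approval_id] = "APPROVED"  # operator action

def deny(approval_id: str) -> None:
    _approvals[approval_id] = "DENIED"    # operator action

def rf_check_approval(approval_id: str) -> str:
    """AI side: poll the decision before executing."""
    return _approvals[approval_id]

aid = rf_request_approval("ALTER TABLE users ADD COLUMN role TEXT",
                          "Adding role column for RBAC feature")
assert rf_check_approval(aid) == "PENDING"  # blocked until a human acts
approve(aid)                                # operator approves
if rf_check_approval(aid) == "APPROVED":
    print("executing approved command")
```

The key property is that execution never proceeds from PENDING; the AI can only act once a human has flipped the state.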
Multi-Distro Linux
One static musl binary runs on every major Linux distribution. No runtime dependencies, no compilation, no package manager needed.
┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐
│ Ubuntu │ │ Debian │ │ Fedora │ │ Rocky │ │ Manjaro │
│ 24.04 │ │ 12 │ │ 41 │ │ 9 │ │ (Arch) │
└─────┬─────┘ └─────┬────┘ └─────┬────┘ └─────┬────┘ └─────┬────┘
└──────┬──────┴─────┬──────┴──────┬──────┴─────┬──────┘
│ │ │ │
┌──────────┐ │ ┌──────────┐ ┌──────────┐ │ ┌──────────┐
│ openSUSE │ │ │ Alpine │ │ Amazon │ │ │ Void │
└─────┬────┘ │ └─────┬────┘ └─────┬────┘ │ └─────┬────┘
└──────┴───────┴─────┬──────┴──────┴───────┘
│
┌────────┴────────┐
│ rf-relay │
│ :9092 (host) │
└────────┬────────┘
┌──────┴──────┐
│ rf CLI │
└─────────────┘
| Distribution | Image | Package Manager | libc | Token |
|---|---|---|---|---|
| Ubuntu 24.04 | ubuntu:24.04 | apt (deb) | glibc | ubuntu |
| Debian 12 | debian:12-slim | apt (deb) | glibc | debian |
| Fedora 41 | fedora:41 | dnf (rpm) | glibc | fedora |
| Rocky Linux 9 | rockylinux:9 | dnf (rpm) | glibc | rocky |
| Manjaro | manjarolinux/base | pacman | glibc | manjaro |
| openSUSE | opensuse/tumbleweed | zypper (rpm) | glibc | opensuse |
| Alpine 3.20 | alpine:3.20 | apk | musl | alpine |
| Amazon Linux 2023 | amazonlinux:2023 | dnf (rpm) | glibc | amazon |
| Void Linux | void-glibc-full | xbps | glibc | void |
Setup
# Start all 9 distro containers + relay
cd demos/multi-distro-linux
./setup.sh

# Query any distro ($RELAY = relay host, set by setup.sh)
rf --relay ws://$RELAY:9092 exec --token ubuntu 'cat /etc/os-release | head -2'
rf --relay ws://$RELAY:9092 exec --token alpine 'cat /etc/os-release | head -2'
rf --relay ws://$RELAY:9092 exec --token fedora 'cat /etc/os-release | head -2'

# Verify all agents respond
./setup.sh verify

# Teardown
./setup.sh teardown
Policy Denial
Same deny-by-default engine works identically on glibc (Ubuntu) and musl (Alpine). Package managers, network tools, and destructive commands are all blocked.
# Run the policy denial scenario
./scenarios/policy-denial.sh

# ALLOWED — read-only commands on Ubuntu (glibc)
rf --relay ws://$RELAY:9092 exec --token ubuntu 'hostname'
# > rf-ubuntu

# DENIED — apt blocked on Ubuntu
rf --relay ws://$RELAY:9092 exec --token ubuntu 'apt install -y nmap'
# > Error: command denied by policy

# ALLOWED — same policy on Alpine (musl-native)
rf --relay ws://$RELAY:9092 exec --token alpine 'hostname'
# > rf-alpine

# DENIED — apk blocked on Alpine
rf --relay ws://$RELAY:9092 exec --token alpine 'apk add nmap'
# > Error: command denied by policy
Audit Trail
Identical structured audit logging across all 9 Linux distributions. Same JSON format regardless of glibc vs musl, apt vs dnf vs apk.
# Run the audit trail scenario
./scenarios/audit-trail.sh

# View audit log on Ubuntu (glibc, apt-based)
docker exec rf-ubuntu tail -2 /var/log/rf-audit.jsonl
# > {"timestamp":"...","command":"hostname","decision":"allowed",...}

# View audit log on Alpine (musl-native, apk-based)
docker exec rf-alpine tail -2 /var/log/rf-audit.jsonl
# > {"timestamp":"...","command":"hostname","decision":"allowed",...}

# Count entries across distros
for d in ubuntu alpine fedora rocky debian; do
  echo "$d: $(docker exec rf-$d wc -l < /var/log/rf-audit.jsonl)"
done
Port Forwarding
Same forwarding mechanism works across all distributions. Tunnel to web servers on Ubuntu, Alpine, or Fedora agents through encrypted channels.
# Run the port forwarding scenario
./scenarios/port-forwarding.sh

# Forward to Ubuntu agent's web server
rf --relay ws://$RELAY:9092 forward --token ubuntu \
  -L $LOCAL:8080 -R $LOCAL:8000

# Forward to Alpine agent (musl-native)
rf --relay ws://$RELAY:9092 forward --token alpine \
  -L $LOCAL:8081 -R $LOCAL:8000

# Forward to Fedora agent (rpm-based)
rf --relay ws://$RELAY:9092 forward --token fedora \
  -L $LOCAL:8082 -R $LOCAL:8000

# All tunnels are encrypted end-to-end, regardless of distro
Dev Mode (Zero-Setup)
The same statically linked binary and the same dev mode work on every distro. No package manager needed — just copy the rf binary and run.
# Run the dev mode scenario
./scenarios/dev-mode.sh

# Works identically on any Linux distribution
#   Ubuntu (glibc): rf dev → ready
#   Alpine (musl):  rf dev → ready
#   Fedora (rpm):   rf dev → ready

# Zero dependencies — static binary, no libraries to install
rf dev
rf exec --token dev 'hostname && uname -r'

# Dev mode:    ~5 MB memory, < 1 second startup
# Docker demo: ~500 MB, 30-60 seconds startup
Fleet Orchestration
One playbook deploys across all distributions. No per-distro agent packages — the static binary and orchestration engine work identically on glibc, musl, rpm, or deb.
# Run the fleet orchestration scenario
./scenarios/fleet-orchestration.sh

# Fleet inventory across distros
for distro in ubuntu alpine fedora debian rocky; do
  rf --relay ws://$RELAY:9092 exec --token $distro 'hostname && uname -r'
done

# Deploy to all distros in parallel
for distro in ubuntu alpine fedora; do
  rf --relay ws://$RELAY:9092 exec --token $distro \
    'mkdir -p /opt/app && echo v2.0 > /opt/app/version.txt'
done

# Same playbook, any distro — no apt vs dnf differences
Human Approval for AI Agents
Same MCP server binary, same approval gate on every distro. AI agents get identical human-in-the-loop protection regardless of glibc, musl, or package manager.
# Run the human approval scenario
./scenarios/human-approval.sh

# MCP server is a static binary — works on any distro
#   Ubuntu (glibc): rf-mcp-server → approval gate
#   Alpine (musl):  rf-mcp-server → approval gate
#   Fedora (rpm):   rf-mcp-server → approval gate

# RBAC: different AI agents get different permissions
#   --callers config.toml maps tokens to policy profiles

# Rate limiting: 60 req/min per session (configurable)
Kubernetes + CloudNativePG
Access a CloudNativePG PostgreSQL cluster through an encrypted tunnel. The agent runs as a Kubernetes Deployment with database credentials auto-injected from CNPG secrets.
┌─── Kubernetes ───────────────────────────┐
│ namespace: ravenfabric │
│ │
│ ┌──────────────┐ ┌─────────────────┐ │
│ │ CNPG Cluster │ │ rf-agent │ │
│ │ │◄──│ Deployment │ │
│ │ pg-cluster-1 │ │ │ │
│ │ (primary) │ │ postgres:17 │ │
│ │ │ │ + rf-agent │ │
│ │ pg-cluster-2 │ │ │ │
│ │ (replica) │ │ Token: cnpg │ │
│ └──────────────┘ └────────┬────────┘ │
│ ▲ pg-cluster-rw │ ws:// │
└──────┼───────────────────────┼───────────┘
│ ┌─────────────▼──────────┐
│ │ rf-relay (Docker) │
│ │ :9093 (host) │
│ └─────────────┬──────────┘
│ │
│ ┌─────────────▼──────────┐
│ │ rf CLI (your Mac) │
│ └────────────────────────┘
| Resource | Type | Description |
|---|---|---|
| rf-relay | Docker container | Relay broker (Ubuntu 24.04, port 9093) |
| ravenfabric | K8s Namespace | Isolated namespace for demo resources |
| pg-cluster | CNPG Cluster | 2-instance PostgreSQL (primary + replica) |
| pg-cluster-rw | K8s Service | Read-write endpoint (connects to primary) |
| rf-agent | K8s Deployment | RavenFabric agent with psql client |
| rf-agent-policy | K8s ConfigMap | Policy allowing all commands (demo-only) |
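The demo's manifests aren't reproduced on this page, but based on the resource table above, the agent Deployment plausibly looks something like the following. This is a hedged sketch only: the image choice, Secret name, and mount paths are assumptions (CNPG conventionally generates a `<cluster>-app` credentials Secret, hence `pg-cluster-app`), not the demo's actual manifest.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rf-agent
  namespace: ravenfabric
spec:
  replicas: 1
  selector:
    matchLabels: { app: rf-agent }
  template:
    metadata:
      labels: { app: rf-agent }
    spec:
      containers:
        - name: rf-agent
          image: postgres:17            # provides the psql client
          # the rf-agent binary is added by the demo's setup (assumption)
          env:
            - name: POSTGRES_PASSWORD   # auto-injected from the CNPG secret
              valueFrom:
                secretKeyRef:
                  name: pg-cluster-app  # assumed CNPG-generated Secret name
                  key: password
          volumeMounts:
            - name: policy
              mountPath: /etc/rf        # assumed policy mount path
      volumes:
        - name: policy
          configMap:
            name: rf-agent-policy       # from the table above
```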
Setup
# Install CNPG operator (if not already installed)
helm repo add cnpg https://cloudnative-pg.github.io/charts
helm upgrade --install cnpg-operator cnpg/cloudnative-pg \
  --namespace cnpg-system --create-namespace --wait

# Deploy everything (relay + CNPG cluster + rf-agent)
cd demos/kubernetes-cnpg
./setup.sh

# Query PostgreSQL through the encrypted tunnel ($RELAY = K8s node IP)
rf --relay ws://$RELAY:9093 exec --token cnpg 'psql -c "SELECT version();"'

# Check replication
rf --relay ws://$RELAY:9093 exec --token cnpg \
  'psql -c "SELECT client_addr, state FROM pg_stat_replication;"'

# Teardown
./setup.sh teardown
Policy Denial
Restrict the agent to read-only SQL queries. SELECT passes; DROP, DELETE, and system commands are blocked. Policy is stored as a Kubernetes ConfigMap.
# Run the policy denial scenario
./scenarios/policy-denial.sh

# ALLOWED — SELECT queries pass
rf --relay ws://$RELAY:9093 exec --token cnpg 'psql -c "SELECT version();"'
# > PostgreSQL 17.x on aarch64-unknown-linux-gnu

# DENIED — DROP TABLE blocked by policy
rf --relay ws://$RELAY:9093 exec --token cnpg 'psql -c "DROP TABLE demo;"'
# > Error: command denied by policy

# DENIED — curl blocked
rf --relay ws://$RELAY:9093 exec --token cnpg 'curl http://example.com'
# > Error: command denied by policy

# Audit log shows every denial
rf --relay ws://$RELAY:9093 exec --token cnpg 'cat /tmp/rf-audit.jsonl | tail -3'
Audit Trail
Every SQL query and system command executed through the tunnel is audited. Accessible via RavenFabric or kubectl.
# Run the audit trail scenario
./scenarios/audit-trail.sh

# View audit log via the encrypted tunnel
rf --relay ws://$RELAY:9093 exec --token cnpg 'tail -3 /tmp/rf-audit.jsonl'
# > {"timestamp":"...","command":"psql -c \"SELECT version();\"","decision":"allowed",...}

# Or view directly via kubectl
kubectl exec -n ravenfabric deploy/rf-agent -c rf-agent -- tail -3 /tmp/rf-audit.jsonl

# Count total audited actions
rf --relay ws://$RELAY:9093 exec --token cnpg 'wc -l < /tmp/rf-audit.jsonl'
Port Forwarding
Forward PostgreSQL ports through encrypted tunnels. Access the database directly from your Mac — no kubectl, no kubeconfig, works through NAT.
# Run the port forwarding scenario
./scenarios/port-forwarding.sh

# Forward local port to PostgreSQL (read-write primary)
rf --relay ws://$RELAY:9093 forward --token cnpg \
  -L $LOCAL:5432 -R pg-cluster-rw:5432

# Then connect directly with psql from your Mac
# psql -h $LOCAL -p 5432 -U postgres -d app

# Forward to read-only replica for reporting
rf --relay ws://$RELAY:9093 forward --token cnpg \
  -L $LOCAL:5433 -R pg-cluster-ro:5432

# vs kubectl: works through NAT, E2E encrypted, audited
Dev Mode (Zero-Setup)
Prototype rf commands locally before deploying to Kubernetes. Same syntax works in dev mode and against a real cluster.
# Run the dev mode scenario
./scenarios/dev-mode.sh

# Prototype locally — no cluster required
rf dev
rf exec --token dev 'echo "SELECT 1" | psql ...'

# Same command against real K8s — just change relay + token
# rf --relay ws://relay.example.com exec --token cnpg 'psql ...'

# Workflow: dev mode → prototype → deploy to K8s
#   1. rf dev                      (instant local env)
#   2. rf exec --token dev '...'   (test commands)
#   3. Deploy to K8s + real relay  (production)
Fleet Orchestration
Coordinate database operations across pods with playbooks. Canary deploys, rolling maintenance, and automatic rollback — without kubectl scripting.
# Run the fleet orchestration scenario
./scenarios/fleet-orchestration.sh

# Database health check via playbook
rf --relay ws://$RELAY:9093 exec --token cnpg \
  'PGPASSWORD=$POSTGRES_PASSWORD psql -h pg-cluster-rw \
   -U postgres -d app -c "SELECT version();"'

# Coordinated maintenance (sequential strategy)
#   command: "psql ... -c 'VACUUM ANALYZE;'"
#   strategy: sequential
#   on_failure: stop_only

# vs kubectl: built-in canary, rollback, audit trail
# Works through NAT — no kubeconfig required
Human Approval for AI Agents
AI DBA assistant can SELECT freely, but schema changes and destructive operations require human approval. Webhook integration with Slack, PagerDuty, or GitOps.
# Run the human approval scenario
./scenarios/human-approval.sh

# AI requests approval for a schema migration
# rf_request_approval(
#   command: "psql -c 'ALTER TABLE users ADD COLUMN role TEXT'"
#   reason: "RBAC feature, ticket DB-1234"
# )

# Operator approves via dashboard / Slack / webhook

# AI executes the approved migration
rf --relay ws://$RELAY:9093 exec --token cnpg \
  'PGPASSWORD=$POSTGRES_PASSWORD psql -h pg-cluster-rw \
   -U postgres -d app -c "ALTER TABLE users ADD COLUMN role TEXT"'

# vs kubectl: no human gate, no per-command audit, no rate limit
What happens when you run a command
1. Connect
CLI connects to the relay via WebSocket. Both sides are anonymous at this point.
2. Handshake
Noise XX mutual authentication. Both CLI and agent prove their identity with static keys.
3. Policy check
The agent checks the command against its local deny-by-default policy. No match = denied.
4. Execute
Command runs with timeout and output limits. Result is encrypted and sent back through the relay.
5. Audit
Every action (allowed or denied) produces a structured JSON audit entry. Append-only.
6. Reconnect
After the session ends, the agent reconnects to the relay with exponential backoff, ready for the next command.
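Steps 3 through 6 can be condensed into a short sketch. This is illustrative Python, not RavenFabric's implementation (the agent is a compiled binary); the timeout, output limit, and backoff constants are assumed values:

```python
import json
import subprocess
from datetime import datetime, timezone

MAX_OUTPUT = 64 * 1024  # output limit (illustrative value)

def execute_audited(command: str, allowed: bool, log_path: str) -> str:
    """Steps 3-5: policy decision, bounded execution, audit entry."""
    decision, output = "denied", ""
    if allowed:
        decision = "allowed"
        result = subprocess.run(command, shell=True, capture_output=True,
                                text=True, timeout=30)  # step 4: timeout
        output = result.stdout[:MAX_OUTPUT]             # step 4: output limit
    entry = {"timestamp": datetime.now(timezone.utc).isoformat(),
             "command": command, "decision": decision}
    with open(log_path, "a") as f:                      # step 5: append-only
        f.write(json.dumps(entry) + "\n")
    return output

def backoff_delays(base: float = 1.0, cap: float = 60.0):
    """Step 6: exponential backoff delays for reconnecting to the relay."""
    delay = base
    while True:
        yield delay
        delay = min(delay * 2, cap)

d = backoff_delays()
print([next(d) for _ in range(7)])  # [1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 60.0]
```

Note that the audit write happens on both branches: a denied command never executes, but it still leaves an entry, which is the property the audit-trail demos above rely on.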