Local Development Setup

This guide walks you through setting up a local development environment for Omnia.

Install the required tools: Docker, kind, Tilt, Helm, and kubectl.

Tilt provides hot-reload development with automatic file syncing. Changes to dashboard source files are reflected instantly without rebuilding Docker images.

First, create a local cluster:

```sh
kind create cluster --name omnia-dev
```

Then start Tilt:

```sh
# Core features only
tilt up

# Or with enterprise features (Arena Fleet, NFS, Redis)
ENABLE_ENTERPRISE=true tilt up
```

This will:

  • Build the operator and dashboard images
  • Deploy them to your local cluster via Helm
  • Set up port forwards automatically
  • Watch for file changes and sync them instantly

Tilt's behavior is controlled by environment variables:

| Variable | Default | Description |
| --- | --- | --- |
| ENABLE_ENTERPRISE | false | Enable enterprise features (Arena controller, NFS, Redis) |
| ENABLE_DEMO | false | Enable demo mode with Ollama + OPA |
| ENABLE_OBSERVABILITY | true | Enable Prometheus/Grafana |
| ENABLE_FULL_STACK | false | Enable Istio, Loki, Alloy |
| ENABLE_LANGCHAIN | false | Enable LangChain runtime demos |
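Multiple flags can presumably be combined on a single invocation (e.g. `ENABLE_ENTERPRISE=true ENABLE_DEMO=true tilt up`). The defaults above follow ordinary shell fallback expansion, which you can sanity-check without Tilt installed:

```shell
# Mirror the table's defaults with standard shell fallback expansion:
# each variable keeps any value exported by the caller, else the default.
ENABLE_ENTERPRISE="${ENABLE_ENTERPRISE:-false}"
ENABLE_DEMO="${ENABLE_DEMO:-false}"
ENABLE_OBSERVABILITY="${ENABLE_OBSERVABILITY:-true}"

echo "enterprise=$ENABLE_ENTERPRISE demo=$ENABLE_DEMO observability=$ENABLE_OBSERVABILITY"
```

With nothing exported this prints `enterprise=false demo=false observability=true`, matching the table; exporting a variable before `tilt up` overrides it the same way.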

When ENABLE_ENTERPRISE=true, the following additional components are deployed:

  • Arena Controller: Manages ArenaSource and ArenaJob resources
  • Arena Worker: Executes evaluation jobs
  • NFS Server: Shared workspace storage (development only)
  • Redis: Work queue for Arena job distribution
  • VS Code Server: Browse/edit workspace content at http://localhost:8888

Edit files in dashboard/src/ - changes sync instantly and Next.js hot-reloads. No Docker rebuilds needed!

For Go operator changes, Tilt rebuilds the image automatically (Go doesn’t support hot reload).

Press s in the Tilt UI to stream logs, or visit http://localhost:10350 for the web UI.

When you're done, tear everything down:

```sh
tilt down
```

The Tiltfile at the project root configures:

  1. Dashboard hot-reload: Uses live_update to sync source files directly into the running container. The Next.js dev server detects changes and hot-reloads.

  2. Operator rebuild: Watches Go source files and rebuilds the image when they change.

  3. Helm deployment: Deploys the chart with development-specific values from charts/omnia/values-dev.yaml.
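Concretely, those three pieces might look like the following Tiltfile fragment (an illustrative sketch in Tilt's Starlark dialect; the image names, context paths, and sync targets here are assumptions — the actual Tiltfile at the project root is authoritative):

```python
# Illustrative sketch -- names and paths are assumptions, not the real Tiltfile.

# 1. Dashboard hot-reload: sync source straight into the running container;
#    the Next.js dev server notices the change and hot-reloads.
docker_build(
    'omnia-dashboard',
    context='./dashboard',
    live_update=[
        sync('./dashboard/src', '/app/src'),
    ],
)

# 2. Operator rebuild: Go has no hot reload, so changed sources trigger
#    a full image rebuild and redeploy.
docker_build('omnia-operator', context='.')

# 3. Helm deployment with development-specific values.
k8s_yaml(helm('charts/omnia', values=['charts/omnia/values-dev.yaml']))
```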

```
┌─────────────────────────────────────────────────────────────┐
│                         Your Editor                         │
│   ┌─────────────────┐           ┌─────────────────┐         │
│   │ dashboard/src/  │           │    internal/    │         │
│   │  (TypeScript)   │           │      (Go)       │         │
│   └────────┬────────┘           └────────┬────────┘         │
│            │                             │                  │
└────────────┼─────────────────────────────┼──────────────────┘
             │                             │
             │ live_update                 │ docker_build
             │ (instant sync)              │ (rebuild image)
             ▼                             ▼
┌─────────────────────────────────────────────────────────────┐
│                  Kubernetes Cluster (kind)                  │
│   ┌─────────────────┐           ┌─────────────────┐         │
│   │  Dashboard Pod  │           │  Operator Pod   │         │
│   │  (npm run dev)  │──────────▶│  (controller)   │         │
│   │      :3000      │           │      :8082      │         │
│   └─────────────────┘           └─────────────────┘         │
└─────────────────────────────────────────────────────────────┘
             │                             │
             │ port-forward                │ port-forward
             ▼                             ▼
      localhost:3000                localhost:8082
```

If you prefer not to use Tilt, you can set up manually:

Create a kind cluster with port forwarding:

```sh
cat <<EOF | kind create cluster --name omnia-dev --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30080
    hostPort: 8080
    protocol: TCP
EOF
```

Build the images and load them into the cluster:

```sh
# Build operator
make docker-build IMG=omnia-operator:dev

# Build dashboard
docker build -t omnia-dashboard:dev ./dashboard

# Load into kind
kind load docker-image omnia-operator:dev --name omnia-dev
kind load docker-image omnia-dashboard:dev --name omnia-dev
```

Install the chart with the locally built images:

```sh
helm install omnia charts/omnia -n omnia-system --create-namespace \
  --set image.repository=omnia-operator \
  --set image.tag=dev \
  --set image.pullPolicy=Never \
  --set dashboard.enabled=true \
  --set dashboard.image.repository=omnia-dashboard \
  --set dashboard.image.tag=dev \
  --set dashboard.image.pullPolicy=Never
```
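The same overrides can live in a values file instead of repeated `--set` flags (a direct transcription of the flags above; the filename is arbitrary):

```yaml
# dev-values.yaml -- the --set flags above, as a values file
image:
  repository: omnia-operator
  tag: dev
  pullPolicy: Never
dashboard:
  enabled: true
  image:
    repository: omnia-dashboard
    tag: dev
    pullPolicy: Never
```

Install with `helm install omnia charts/omnia -n omnia-system --create-namespace -f dev-values.yaml`.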

For testing with a real local LLM (no API costs), you can enable Ollama with the llava vision model. Resource requirements:

  • RAM: Minimum 8GB, 16GB recommended
  • Disk: ~10GB for the llava:7b model
  • CPU: 4+ cores (GPU optional but significantly faster)
Start Tilt with Ollama enabled:

```sh
# Using make target
make dev-ollama

# Or set environment variable
ENABLE_OLLAMA=true tilt up
```

This will:

  1. Deploy Ollama to the cluster
  2. Create a PersistentVolume for model caching
  3. Pull the llava:7b vision model (first run takes several minutes)
  4. Create a demo vision-capable AgentRuntime

Ollama is then reachable at:

  • From cluster: http://ollama.ollama-system:11434
  • From host: http://localhost:11434 (via port-forward)
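From the host you can smoke-test the endpoint with Ollama's standard generate API (this assumes the port-forward is active and the llava:7b pull has finished):

```sh
# One-off prompt against the local Ollama server; "stream": false
# returns a single JSON object instead of a stream of chunks.
curl http://localhost:11434/api/generate \
  -d '{"model": "llava:7b", "prompt": "Say hello in one word.", "stream": false}'
```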

Once deployed, you can test the ollama-vision-agent through the dashboard. It supports:

  • Text conversations
  • Image analysis (upload images for vision capabilities)
  • No API keys required

For faster inference:

  • NVIDIA GPUs: Install nvidia-docker2 and configure Docker/kind with GPU access
  • Apple Silicon: Use Docker Desktop with “Use Rosetta” disabled

For production-like demos, deploy both the main Omnia chart and the separate demos chart:

```sh
# Install the Omnia operator
helm install omnia charts/omnia -n omnia-system --create-namespace

# Install the demo agents (Ollama + vision/tools demos)
helm install omnia-demos charts/omnia-demos -n omnia-demo --create-namespace
```

This deploys Ollama with pre-configured vision and tools demo agents. The demos chart is separate from the main chart for cleaner production deployments.

For session persistence testing:

```sh
kubectl create namespace redis
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install redis bitnami/redis -n redis \
  --set auth.enabled=false \
  --set architecture=standalone
```

Apply sample manifests to create test agents:

```sh
kubectl apply -f config/samples/
```

For local development without LLM API costs, use the demo or echo handler:

```yaml
apiVersion: omnia.altairalabs.ai/v1alpha1
kind: AgentRuntime
metadata:
  name: test-agent
spec:
  promptPackRef:
    name: test-prompts
  facade:
    type: websocket
    handler: demo  # Use 'echo' for simple connectivity testing
  session:
    type: memory
```

The demo handler provides:

  • Streaming responses that simulate real LLM output
  • Simulated tool calls for testing
  • No API key required
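With the Redis install from the session-persistence step above, session state can presumably be externalized by switching the session block. The field names below are assumptions for illustration only — check the AgentRuntime CRD reference for the actual schema:

```yaml
# Hypothetical sketch -- the fields under `session` are assumptions.
session:
  type: redis
  redis:
    # Default in-cluster address for the Bitnami standalone install above
    address: redis-master.redis.svc.cluster.local:6379
```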

Changes not syncing? Make sure you're editing files in the correct directory (dashboard/src/ for the dashboard) and check the Tilt UI for sync status.

Hot reload not working? The dashboard runs npm run dev, which uses Next.js Fast Refresh; check the browser console for errors.

To inspect the operator, stream its logs:

```sh
kubectl logs -n omnia-system deployment/omnia-controller-manager
```

Seeing ErrImagePull or ImagePullBackOff? Ensure image.pullPolicy=Never is set and that the image was loaded into kind:

```sh
kind load docker-image <image>:<tag> --name omnia-dev
```

Agent not reachable? Ensure the agent's service has ready endpoints:

```sh
kubectl get endpoints <agent-name>
```
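If the endpoints list is empty, the Service selector is not matching any Ready pods. A generic kubectl drill-down (nothing Omnia-specific; substitute your own names):

```sh
# Are the agent's pods Running and Ready?
kubectl get pods -o wide

# Does the Service selector match the pod labels?
kubectl describe service <agent-name>
```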