
Getting Started

This tutorial walks you through deploying your first AI agent using Omnia. By the end, you’ll have a working agent accessible via WebSocket.

Before you begin, ensure you have:

  • A Kubernetes cluster (kind, minikube, or a cloud provider)
  • kubectl configured to access your cluster
  • helm v3 installed
  • An LLM provider API key (OpenAI, Anthropic, etc.)

Add the Omnia Helm repository and install the operator:

helm repo add omnia https://altairalabs.github.io/omnia/charts
helm repo update
kubectl create namespace omnia-system
helm install omnia omnia/omnia -n omnia-system

Verify the operator is running:

kubectl get pods -n omnia-system

You should see the operator pod in a Running state.

A PromptPack defines the prompts your agent will use. PromptPacks follow the PromptPack specification, a structured JSON format for packaging multi-prompt conversational systems.

First, create a ConfigMap containing your compiled PromptPack JSON:

apiVersion: v1
kind: ConfigMap
metadata:
  name: assistant-prompts
  namespace: default
data:
  # Compiled PromptPack JSON (use `packc` to compile from YAML source)
  promptpack.json: |
    {
      "$schema": "https://promptpack.org/schema/v1/promptpack.schema.json",
      "id": "assistant",
      "name": "Assistant",
      "version": "1.0.0",
      "template_engine": {
        "version": "v1",
        "syntax": "{{variable}}"
      },
      "prompts": {
        "main": {
          "id": "main",
          "name": "Main Assistant",
          "version": "1.0.0",
          "system_template": "You are a helpful AI assistant. Be concise and accurate in your responses. Always be polite and professional.",
          "parameters": {
            "temperature": 0.7,
            "max_tokens": 4096
          }
        }
      }
    }

Then create the PromptPack resource that references the ConfigMap:

apiVersion: omnia.altairalabs.ai/v1alpha1
kind: PromptPack
metadata:
  name: assistant-pack
  namespace: default
spec:
  version: "1.0.0"
  rollout:
    type: immediate
  source:
    configMapRef:
      name: assistant-prompts

Apply both:

kubectl apply -f configmap.yaml
kubectl apply -f promptpack.yaml

Verify the PromptPack is ready:

kubectl get promptpack assistant-pack

Tip: Author PromptPacks in YAML and compile them to JSON using packc for validation and optimization:

packc compile prompts.yaml -o promptpack.json
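If you want a quick sanity check on a compiled PromptPack before embedding it in a ConfigMap, the structural checks can be sketched in Python. This is illustrative only: the field names are taken from the example above, and it is not a substitute for the schema validation `packc` performs.

```python
import json

# Top-level fields present in the example PromptPack above.
# This is a sketch, NOT the official schema validation done by packc.
REQUIRED_TOP_LEVEL = ("id", "name", "version", "prompts")

def check_promptpack(text: str) -> list[str]:
    """Return a list of structural problems found in a PromptPack JSON string."""
    problems = []
    doc = json.loads(text)
    for key in REQUIRED_TOP_LEVEL:
        if key not in doc:
            problems.append(f"missing top-level field: {key}")
    for pid, prompt in doc.get("prompts", {}).items():
        if "system_template" not in prompt:
            problems.append(f"prompt {pid!r} has no system_template")
    return problems

sample = '{"id": "assistant", "name": "Assistant", "version": "1.0.0", "prompts": {"main": {"system_template": "hi"}}}'
print(check_promptpack(sample))  # → []
```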

Create a Secret with your LLM provider API key, then create a Provider resource:

apiVersion: v1
kind: Secret
metadata:
  name: llm-credentials
  namespace: default
type: Opaque
stringData:
  ANTHROPIC_API_KEY: "sk-ant-..." # Or OPENAI_API_KEY for OpenAI
---
apiVersion: omnia.altairalabs.ai/v1alpha1
kind: Provider
metadata:
  name: my-provider
  namespace: default
spec:
  type: claude # Or "openai", "gemini"
  model: claude-sonnet-4-20250514
  secretRef:
    name: llm-credentials
Apply both resources:
kubectl apply -f provider.yaml

Verify the Provider is ready:

kubectl get provider my-provider
# Should show: my-provider claude claude-sonnet-4-20250514 Ready ...

Tip: Don’t have an API key yet? Use handler: demo in your AgentRuntime to test with simulated responses. See Handler Modes for details.

Now create an AgentRuntime to deploy your agent:

apiVersion: omnia.altairalabs.ai/v1alpha1
kind: AgentRuntime
metadata:
  name: my-assistant
  namespace: default
spec:
  promptPackRef:
    name: assistant-pack
  providerRef:
    name: my-provider
  facade:
    type: websocket
    port: 8080
    handler: demo # Use "demo" for testing without an API key
  session:
    type: memory
    ttl: "1h"

Note: The handler: demo setting provides simulated streaming responses for testing. For production with a real LLM, change to handler: runtime (the default).
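The `session` block means conversation state is held in memory per session and dropped after the TTL elapses. Conceptually, a session store with TTL expiry looks like the sketch below (an illustration of the idea only, not Omnia's actual implementation):

```python
import time

class MemorySessionStore:
    """Toy in-memory session store with TTL expiry.
    Illustrative sketch only; not Omnia's implementation."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._sessions = {}  # session_id -> (last_seen, history)

    def append(self, session_id: str, message: str) -> None:
        # Record a message and refresh the session's last-seen time.
        _, history = self._sessions.get(session_id, (0.0, []))
        history.append(message)
        self._sessions[session_id] = (time.monotonic(), history)

    def history(self, session_id: str) -> list:
        # Return the session history, or [] if unknown or expired.
        entry = self._sessions.get(session_id)
        if entry is None:
            return []
        last_seen, history = entry
        if time.monotonic() - last_seen > self.ttl:
            del self._sessions[session_id]  # expired: drop state
            return []
        return history
```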

kubectl apply -f agentruntime.yaml

Check that all resources are ready:

# Check the AgentRuntime status
kubectl get agentruntime my-assistant
# Check the pods
kubectl get pods -l app.kubernetes.io/instance=my-assistant
# Check the service
kubectl get svc my-assistant

Port-forward to access the agent:

kubectl port-forward svc/my-assistant 8080:8080

Now you can connect with any WebSocket client. For example, with websocat:

# Interactive mode - type messages directly
websocat "ws://localhost:8080/ws?agent=my-assistant"

Send a JSON message (the ?agent= parameter is required):

{"type": "message", "content": "Hello, who are you?"}

You should see responses like:

{"type":"connected","session_id":"abc123...","timestamp":"..."}
{"type":"chunk","session_id":"abc123...","content":"Hello","timestamp":"..."}
{"type":"chunk","session_id":"abc123...","content":"!","timestamp":"..."}
{"type":"done","session_id":"abc123...","content":"","timestamp":"..."}
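A client reassembles the streamed reply by concatenating the `content` of `chunk` messages until a `done` message arrives. A minimal sketch of that protocol handling, using the message shapes shown above:

```python
import json

def assemble_reply(raw_messages):
    """Concatenate 'chunk' contents until a 'done' message arrives.
    Message shapes follow the protocol examples above."""
    parts = []
    for raw in raw_messages:
        msg = json.loads(raw)
        if msg["type"] == "chunk":
            parts.append(msg["content"])
        elif msg["type"] == "done":
            break
    return "".join(parts)

# Messages as they might arrive over the WebSocket (timestamps omitted).
stream = [
    '{"type":"connected","session_id":"abc123"}',
    '{"type":"chunk","session_id":"abc123","content":"Hello"}',
    '{"type":"chunk","session_id":"abc123","content":"!"}',
    '{"type":"done","session_id":"abc123","content":""}',
]
print(assemble_reply(stream))  # → Hello!
```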

Tip: To send a single test message programmatically:

echo '{"type":"message","content":"Hello!"}' | websocat "ws://localhost:8080/ws?agent=my-assistant"

Congratulations! You’ve deployed your first AI agent with Omnia.