# Getting Started with Omnia
This tutorial walks you through deploying your first AI agent using Omnia. By the end, you’ll have a working agent accessible via WebSocket.
## Prerequisites
Before you begin, ensure you have:
- A Kubernetes cluster (kind, minikube, or a cloud provider)
- `kubectl` configured to access your cluster
- `helm` v3 installed
- An LLM provider API key (OpenAI, Anthropic, etc.)
## Step 1: Install the Operator
Add the Omnia Helm repository and install the operator:
```bash
# Add the Helm repository
helm repo add omnia https://altairalabs.github.io/omnia/charts
helm repo update

# Create namespace and install
kubectl create namespace omnia-system
helm install omnia omnia/omnia -n omnia-system
```
Verify the operator is running:
```bash
kubectl get pods -n omnia-system
```
You should see the operator pod in the `Running` state.
## Step 2: Create a PromptPack
A PromptPack defines the prompts your agent will use. PromptPacks follow the PromptPack specification, a structured YAML/JSON format for packaging multi-prompt conversational systems.
First, create a ConfigMap containing a compiled PromptPack:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: assistant-prompts
  namespace: default
data:
  promptpack.json: |
    {
      "id": "assistant-pack",
      "name": "AI Assistant",
      "version": "1.0.0",
      "template_engine": {
        "version": "v1",
        "syntax": "{{variable}}"
      },
      "prompts": {
        "assistant": {
          "id": "assistant",
          "name": "General Assistant",
          "version": "1.0.0",
          "system_template": "You are a helpful AI assistant. Be concise and accurate in your responses. Always be polite and professional.",
          "parameters": {
            "temperature": 0.7,
            "max_tokens": 1024
          }
        }
      },
      "metadata": {
        "domain": "general",
        "language": "en"
      }
    }
```
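For intuition, the `{{variable}}` syntax declared under `template_engine` is plain placeholder substitution. The sketch below mimics how such a template might render; `render` is a hypothetical helper for illustration, not Omnia's actual engine:

```python
import re

def render(template: str, variables: dict) -> str:
    """Replace each {{name}} placeholder with its value from `variables`.

    Placeholders with no matching variable are left untouched.
    """
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        template,
    )

print(render("Hello, {{user}}! Model: {{model}}", {"user": "Ada", "model": "gpt-4"}))
# → Hello, Ada! Model: gpt-4
```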
Then create the PromptPack resource that references the ConfigMap:
```yaml
apiVersion: omnia.altairalabs.ai/v1alpha1
kind: PromptPack
metadata:
  name: assistant-pack
  namespace: default
spec:
  source:
    configMapRef:
      name: assistant-prompts
      key: promptpack.json
```
Apply both:
```bash
kubectl apply -f configmap.yaml
kubectl apply -f promptpack.yaml
```
Tip: You can author PromptPacks in YAML and compile them to JSON using the `packc` compiler for validation and optimization.
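Before applying the ConfigMap, it can help to sanity-check that the embedded JSON parses and carries the fields this tutorial relies on. This is a minimal sketch: the required-field set below is an assumption for illustration, not the full PromptPack schema.

```python
import json

# Fields this tutorial's example uses; the real specification may require more.
REQUIRED_TOP = {"id", "name", "version", "prompts"}

def check_promptpack(raw: str) -> dict:
    """Parse a compiled PromptPack and verify a few structural basics."""
    pack = json.loads(raw)
    missing = REQUIRED_TOP - pack.keys()
    if missing:
        raise ValueError(f"missing top-level fields: {sorted(missing)}")
    for pid, prompt in pack["prompts"].items():
        if "system_template" not in prompt:
            raise ValueError(f"prompt {pid!r} has no system_template")
    return pack

pack = check_promptpack(
    '{"id": "p", "name": "P", "version": "1.0.0", '
    '"prompts": {"a": {"system_template": "hi"}}}'
)
print(pack["id"])
# → p
```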
## Step 3: Configure Provider Credentials
Create a Secret with your LLM provider API key:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: llm-credentials
  namespace: default
type: Opaque
stringData:
  api-key: "your-api-key-here"
```
```bash
kubectl apply -f secret.yaml
```
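Note that `stringData` accepts a plaintext value, which Kubernetes base64-encodes on write; the alternative `data` field expects the value already encoded. The encoding is plain base64, as a quick sketch shows:

```python
import base64

# What Kubernetes would store under `data` for the `stringData` value above.
encoded = base64.b64encode(b"your-api-key-here").decode()
print(encoded)

# Decoding recovers the original plaintext.
decoded = base64.b64decode(encoded).decode()
assert decoded == "your-api-key-here"
```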
## Step 4: Deploy the Agent
Now create an AgentRuntime to deploy your agent:
```yaml
apiVersion: omnia.altairalabs.ai/v1alpha1
kind: AgentRuntime
metadata:
  name: my-assistant
  namespace: default
spec:
  replicas: 1
  provider:
    name: openai
    model: gpt-4
    apiKeySecretRef:
      name: llm-credentials
      key: api-key
  promptPackRef:
    name: assistant-pack
  facade:
    type: websocket
    port: 8080
```
```bash
kubectl apply -f agentruntime.yaml
```
## Step 5: Verify the Deployment
Check that all resources are ready:
```bash
# Check the AgentRuntime status
kubectl get agentruntime my-assistant

# Check the pods
kubectl get pods -l app.kubernetes.io/instance=my-assistant

# Check the service
kubectl get svc my-assistant
```
## Step 6: Connect to the Agent
Port-forward to access the agent:
```bash
kubectl port-forward svc/my-assistant 8080:8080
```
Now you can connect with any WebSocket client; for example, with `websocat` (the URL is quoted so the shell doesn't interpret the `?`):

```bash
websocat "ws://localhost:8080?agent=my-assistant"
```
Send a message:
```json
{"type": "message", "content": "Hello, who are you?"}
```
You’ll receive a response with the agent’s reply.
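The same exchange can be driven from a script. The helper below only builds the message frame shown above; actually sending it would need a WebSocket library (e.g. the third-party `websockets` package), which is an assumption not covered by this tutorial:

```python
import json

def make_message(content: str) -> str:
    """Build the JSON message frame the agent facade expects (shape from the tutorial)."""
    return json.dumps({"type": "message", "content": content})

# A client would send this string over the open WebSocket, e.g.:
#   await ws.send(make_message("Hello, who are you?"))
print(make_message("Hello, who are you?"))
# → {"type": "message", "content": "Hello, who are you?"}
```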
## Next Steps
- Learn about PromptPack configuration
- Explore ToolRegistry to give your agent capabilities
- Read about session management for stateful conversations
Congratulations! You’ve deployed your first AI agent with Omnia.