# Getting Started
This tutorial walks you through deploying your first AI agent using Omnia. By the end, you’ll have a working agent accessible via WebSocket.
## Prerequisites

Before you begin, ensure you have:
- A Kubernetes cluster (kind, minikube, or a cloud provider)
- `kubectl` configured to access your cluster
- `helm` v3 installed
- An LLM provider API key (OpenAI, Anthropic, etc.)
## Step 1: Install the Operator

Add the Omnia Helm repository and install the operator:

```sh
helm repo add omnia https://altairalabs.github.io/omnia/charts
helm repo update
kubectl create namespace omnia-system
helm install omnia omnia/omnia -n omnia-system
```

Verify the operator is running:

```sh
kubectl get pods -n omnia-system
```

You should see the operator pod in a `Running` state.
## Step 2: Create a PromptPack

A PromptPack defines the prompts your agent will use. PromptPacks follow the PromptPack specification, a structured JSON format for packaging multi-prompt conversational systems.

First, create a ConfigMap containing your compiled PromptPack JSON:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: assistant-prompts
  namespace: default
data:
  # Compiled PromptPack JSON (use `packc` to compile from YAML source)
  promptpack.json: |
    {
      "$schema": "https://promptpack.org/schema/v1/promptpack.schema.json",
      "id": "assistant",
      "name": "Assistant",
      "version": "1.0.0",
      "template_engine": {
        "version": "v1",
        "syntax": "{{variable}}"
      },
      "prompts": {
        "main": {
          "id": "main",
          "name": "Main Assistant",
          "version": "1.0.0",
          "system_template": "You are a helpful AI assistant. Be concise and accurate in your responses. Always be polite and professional.",
          "parameters": {
            "temperature": 0.7,
            "max_tokens": 4096
          }
        }
      }
    }
```

Then create the PromptPack resource that references the ConfigMap:

```yaml
apiVersion: omnia.altairalabs.ai/v1alpha1
kind: PromptPack
metadata:
  name: assistant-pack
  namespace: default
spec:
  version: "1.0.0"
  rollout:
    type: immediate
  source:
    configMapRef:
      name: assistant-prompts
```

Apply both:

```sh
kubectl apply -f configmap.yaml
kubectl apply -f promptpack.yaml
```

Verify the PromptPack is ready:

```sh
kubectl get promptpack assistant-pack
```

Tip: Author PromptPacks in YAML and compile them to JSON using `packc` for validation and optimization:

```sh
packc compile prompts.yaml -o promptpack.json
```
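Before loading a compiled PromptPack into the ConfigMap, it can be handy to sanity-check it locally. The following Python sketch is illustrative only and is not part of Omnia or `packc`: it checks just the fields used in the example above and renders the declared `{{variable}}` template syntax.

```python
import re

# Hypothetical helper (not an Omnia API): validates the fields used in the
# example PromptPack above and renders its {{variable}} placeholders.

REQUIRED_TOP_LEVEL = ("id", "name", "version", "template_engine", "prompts")

def validate_promptpack(doc: dict) -> list[str]:
    """Return a list of problems found; an empty list means the pack looks valid."""
    problems = [f"missing field: {key}" for key in REQUIRED_TOP_LEVEL if key not in doc]
    for pid, prompt in doc.get("prompts", {}).items():
        if "system_template" not in prompt:
            problems.append(f"prompt {pid!r} has no system_template")
    return problems

def render(template: str, variables: dict) -> str:
    """Substitute {{name}} placeholders, leaving unknown placeholders intact."""
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        template,
    )
```

Running `validate_promptpack(json.load(open("promptpack.json")))` before `kubectl apply` catches missing fields earlier than a failed rollout would.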
## Step 3: Configure the LLM Provider

Create a Secret with your LLM provider API key, then create a Provider resource:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: llm-credentials
  namespace: default
type: Opaque
stringData:
  ANTHROPIC_API_KEY: "sk-ant-..." # Or OPENAI_API_KEY for OpenAI
---
apiVersion: omnia.altairalabs.ai/v1alpha1
kind: Provider
metadata:
  name: my-provider
  namespace: default
spec:
  type: claude # Or "openai", "gemini"
  model: claude-sonnet-4-20250514
  secretRef:
    name: llm-credentials
```

Apply it:

```sh
kubectl apply -f provider.yaml
```

Verify the Provider is ready:

```sh
kubectl get provider my-provider
# Should show: my-provider   claude   claude-sonnet-4-20250514   Ready   ...
```

Tip: Don’t have an API key yet? Use `handler: demo` in your AgentRuntime to test with simulated responses. See Handler Modes for details.
## Step 4: Deploy the Agent

Now create an AgentRuntime to deploy your agent:

```yaml
apiVersion: omnia.altairalabs.ai/v1alpha1
kind: AgentRuntime
metadata:
  name: my-assistant
  namespace: default
spec:
  promptPackRef:
    name: assistant-pack
  providerRef:
    name: my-provider
  facade:
    type: websocket
    port: 8080
  handler: demo # Use "demo" for testing without an API key
  session:
    type: memory
    ttl: "1h"
```

Note: The `handler: demo` setting provides simulated streaming responses for testing. For production with a real LLM, change to `handler: runtime` (the default).
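The `session` block controls where conversation history lives and how long idle sessions persist. As a rough mental model only, and not Omnia’s actual implementation, `type: memory` with `ttl: "1h"` behaves like the Python sketch below: state is held in process memory (so it is lost on pod restart), and a session’s history disappears after an hour of inactivity. The class and method names here are hypothetical.

```python
import time

def parse_ttl(ttl: str) -> float:
    """Parse a duration like "1h", "30m", or "45s" into seconds."""
    units = {"s": 1, "m": 60, "h": 3600}
    return float(ttl[:-1]) * units[ttl[-1]]

class MemorySessionStore:
    """Illustrative in-memory session store: history lives in a dict and
    expires after the configured TTL of inactivity."""

    def __init__(self, ttl: str, clock=time.monotonic):
        self.ttl = parse_ttl(ttl)
        self.clock = clock  # injectable for testing
        self._sessions = {}  # session_id -> (last_seen, messages)

    def append(self, session_id: str, message: dict) -> None:
        _, messages = self._sessions.get(session_id, (None, []))
        self._sessions[session_id] = (self.clock(), messages + [message])

    def history(self, session_id: str) -> list[dict]:
        entry = self._sessions.get(session_id)
        if entry is None or self.clock() - entry[0] > self.ttl:
            self._sessions.pop(session_id, None)  # expired or unknown
            return []
        return entry[1]
```

This is also why `type: memory` is fine for a tutorial but a persistent session backend is preferable once restarts matter.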
Apply it:

```sh
kubectl apply -f agentruntime.yaml
```

## Step 5: Verify the Deployment

Check that all resources are ready:

```sh
# Check the AgentRuntime status
kubectl get agentruntime my-assistant

# Check the pods
kubectl get pods -l app.kubernetes.io/instance=my-assistant

# Check the service
kubectl get svc my-assistant
```

## Step 6: Connect to the Agent
Port-forward to access the agent:

```sh
kubectl port-forward svc/my-assistant 8080:8080
```

Now you can connect using any WebSocket client. Using websocat:

```sh
# Interactive mode - type messages directly
websocat "ws://localhost:8080/ws?agent=my-assistant"
```

Send a JSON message (the `?agent=` parameter is required):

```json
{"type": "message", "content": "Hello, who are you?"}
```

You should see responses like:

```json
{"type":"connected","session_id":"abc123...","timestamp":"..."}
{"type":"chunk","session_id":"abc123...","content":"Hello","timestamp":"..."}
{"type":"chunk","session_id":"abc123...","content":"!","timestamp":"..."}
{"type":"done","session_id":"abc123...","content":"","timestamp":"..."}
```

Tip: To send a single test message programmatically:

```sh
echo '{"type":"message","content":"Hello!"}' | websocat "ws://localhost:8080/ws?agent=my-assistant"
```
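If you are scripting a client rather than using websocat, the framing is simple enough to handle with a couple of helpers. This Python sketch assumes only the message shapes shown above (`message`, `connected`, `chunk`, `done`); the helper names are illustrative, not part of Omnia:

```python
import json

def build_message(content: str) -> str:
    """Frame a user turn as the JSON the WebSocket facade expects."""
    return json.dumps({"type": "message", "content": content})

def assemble_reply(events: list[str]) -> str:
    """Concatenate "chunk" contents from a stream of server events,
    stopping at the first "done" event. Other event types (such as
    "connected") are ignored."""
    parts = []
    for raw in events:
        event = json.loads(raw)
        if event["type"] == "chunk":
            parts.append(event["content"])
        elif event["type"] == "done":
            break
    return "".join(parts)
```

Pair these with any WebSocket library: send `build_message("Hello!")` after connecting to `ws://localhost:8080/ws?agent=my-assistant`, collect incoming frames until `done`, and pass them to `assemble_reply`.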
## Next Steps

- Learn about Provider configuration for LLM settings
- Explore ToolRegistry to give your agent capabilities
- Read about session management for stateful conversations
- Set up observability for monitoring
- Configure autoscaling for production workloads
Congratulations! You’ve deployed your first AI agent with Omnia.