Workspace CRD

The Workspace custom resource defines a multi-tenant workspace with isolated namespace, RBAC, and resource quotas in Kubernetes.

```yaml
apiVersion: omnia.altairalabs.ai/v1alpha1
kind: Workspace
```

Workspace is a cluster-scoped resource. It creates and manages resources in its associated namespace.
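A minimal manifest needs only a name plus the required `spec.displayName` and `spec.namespace.name` fields from the tables below (the `demo` names here are illustrative):

```yaml
apiVersion: omnia.altairalabs.ai/v1alpha1
kind: Workspace
metadata:
  name: demo        # cluster-scoped, so no metadata.namespace
spec:
  displayName: "Demo Workspace"
  namespace:
    name: omnia-demo
    create: true    # let the controller create the namespace
```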

Human-readable name for the workspace shown in the dashboard.

| Field | Type | Required |
|-------|------|----------|
| `displayName` | string | Yes |

```yaml
spec:
  displayName: "Customer Support Team"
```

Optional description of the workspace.

| Field | Type | Required |
|-------|------|----------|
| `description` | string | No |

```yaml
spec:
  description: "Team responsible for customer support AI agents"
```

Environment tier for the workspace. Enables environment-based workflows and policies.

| Field | Type | Default | Required |
|-------|------|---------|----------|
| `environment` | string | `development` | No |

Environment types:

| Value | Description |
|-------|-------------|
| `development` | Development workspaces for testing and iteration |
| `staging` | Staging environment for pre-production testing |
| `production` | Production workspaces with stricter controls |

```yaml
spec:
  environment: production
```

Labels applied to all resources created in this workspace. Used for cost attribution and resource organization.

| Field | Type | Required |
|-------|------|----------|
| `defaultTags` | map[string]string | No |

```yaml
spec:
  defaultTags:
    team: "customer-support"
    cost-center: "CC-1234"
    business-unit: "support-ops"
```

Kubernetes namespace configuration for the workspace.

| Field | Type | Required |
|-------|------|----------|
| `namespace.name` | string | Yes |
| `namespace.create` | boolean | No (default: `false`) |
| `namespace.labels` | map[string]string | No |
| `namespace.annotations` | map[string]string | No |

```yaml
spec:
  namespace:
    name: omnia-customer-support
    create: true
    labels:
      environment: production
    annotations:
      cost-center: "cc-12345"
```

Maps IdP groups and ServiceAccounts to workspace roles. This is the primary mechanism for access control.

| Field | Type | Required |
|-------|------|----------|
| `roleBindings[].groups` | []string | No |
| `roleBindings[].serviceAccounts` | []ServiceAccountRef | No |
| `roleBindings[].role` | string | Yes |

ServiceAccountRef:

| Field | Type | Required |
|-------|------|----------|
| `name` | string | Yes |
| `namespace` | string | Yes |

Available roles:

| Role | Description |
|------|-------------|
| `owner` | Full workspace control including member management |
| `editor` | Create/modify resources but cannot manage members |
| `viewer` | Read-only access to resources |

```yaml
spec:
  roleBindings:
    # Map IdP groups to roles
    - groups:
        - "omnia-admins@acme.com"
      role: owner
    - groups:
        - "omnia-engineers@acme.com"
        - "engineering-team"
      role: editor
    - groups:
        - "contractors@acme.com"
      role: viewer
    # Grant access to ServiceAccounts for CI/CD
    - serviceAccounts:
        - name: github-actions
          namespace: ci-system
        - name: argocd-application-controller
          namespace: argocd
      role: editor
```

Direct user grants for exceptions. Use sparingly; prefer groups for scalability.

| Field | Type | Required |
|-------|------|----------|
| `directGrants[].user` | string | Yes |
| `directGrants[].role` | string | Yes |
| `directGrants[].expires` | string (RFC 3339) | No |

```yaml
spec:
  directGrants:
    - user: emergency-admin@acme.com
      role: owner
      expires: "2026-02-01T00:00:00Z"  # Temporary access
```

Configures access for unauthenticated users.

| Field | Type | Required |
|-------|------|----------|
| `anonymousAccess.enabled` | boolean | Yes |
| `anonymousAccess.role` | string | No (default: `viewer`) |

Warning: Granting editor or owner access allows anonymous users to modify resources. Only use in isolated development environments.

```yaml
spec:
  anonymousAccess:
    enabled: true
    role: viewer  # Read-only for anonymous users
```

Resource quotas for the workspace.

Standard Kubernetes compute resource quotas.

| Field | Type | Description |
|-------|------|-------------|
| `requests.cpu` | string | Total CPU requests (e.g., `"50"`) |
| `requests.memory` | string | Total memory requests (e.g., `"100Gi"`) |
| `limits.cpu` | string | Total CPU limits (e.g., `"100"`) |
| `limits.memory` | string | Total memory limits (e.g., `"200Gi"`) |

```yaml
spec:
  quotas:
    compute:
      requests.cpu: "50"
      requests.memory: "100Gi"
      limits.cpu: "100"
      limits.memory: "200Gi"
```

Object count quotas.

| Field | Type | Description |
|-------|------|-------------|
| `configmaps` | integer | Maximum number of ConfigMaps |
| `secrets` | integer | Maximum number of Secrets |
| `persistentvolumeclaims` | integer | Maximum number of PVCs |

```yaml
spec:
  quotas:
    objects:
      configmaps: 100
      secrets: 50
      persistentvolumeclaims: 20
```

Arena-specific quotas.

| Field | Type | Description |
|-------|------|-------------|
| `maxConcurrentJobs` | integer | Maximum concurrent Arena jobs |
| `maxJobsPerDay` | integer | Maximum Arena jobs per day |
| `maxWorkersPerJob` | integer | Maximum workers per Arena job |

```yaml
spec:
  quotas:
    arena:
      maxConcurrentJobs: 10
      maxJobsPerDay: 100
      maxWorkersPerJob: 50
```

AgentRuntime-specific quotas.

| Field | Type | Description |
|-------|------|-------------|
| `maxAgentRuntimes` | integer | Maximum number of AgentRuntimes |
| `maxReplicasPerAgent` | integer | Maximum replicas per AgentRuntime |

```yaml
spec:
  quotas:
    agents:
      maxAgentRuntimes: 20
      maxReplicasPerAgent: 10
```

Network isolation settings for the workspace. When enabled, automatically generates a Kubernetes NetworkPolicy to restrict traffic.

| Field | Type | Default | Required |
|-------|------|---------|----------|
| `networkPolicy.isolate` | boolean | `false` | No |
| `networkPolicy.allowExternalAPIs` | boolean | `true` | No |
| `networkPolicy.allowSharedNamespaces` | boolean | `true` | No |
| `networkPolicy.allowPrivateNetworks` | boolean | `false` | No |
| `networkPolicy.allowFrom` | []NetworkPolicyRule | `[]` | No |
| `networkPolicy.allowTo` | []NetworkPolicyRule | `[]` | No |

Default rules:

- **DNS**: Always allows egress to `kube-system` on port 53 (UDP/TCP)
- **Same namespace**: Allows all ingress/egress within the workspace namespace
- **Shared namespaces**: Allows ingress/egress to namespaces labeled `omnia.altairalabs.ai/shared: true`
- **External APIs**: Allows egress to `0.0.0.0/0` excluding the RFC 1918 private IP ranges:
  - `10.0.0.0/8` - Class A private network
  - `172.16.0.0/12` - Class B private networks
  - `192.168.0.0/16` - Class C private networks

This allows agents to reach external LLM APIs while blocking access to other tenants' pods and internal cluster services.

```yaml
spec:
  networkPolicy:
    isolate: true
```

This creates a NetworkPolicy named `workspace-{name}-isolation` with the default rules.
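The default rules listed above translate to roughly the following NetworkPolicy. This is an illustrative sketch for a workspace named `demo` with namespace `omnia-demo`; the controller's actual output may differ in naming and rule grouping:

```yaml
# Sketch of the generated policy; not the controller's literal output.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: workspace-demo-isolation
  namespace: omnia-demo
spec:
  podSelector: {}                # applies to every pod in the workspace namespace
  policyTypes: [Ingress, Egress]
  ingress:
    - from:
        - podSelector: {}        # same namespace
        - namespaceSelector:
            matchLabels:
              omnia.altairalabs.ai/shared: "true"
  egress:
    - to:                        # DNS, always allowed
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - { protocol: UDP, port: 53 }
        - { protocol: TCP, port: 53 }
    - to:
        - podSelector: {}        # same namespace
        - namespaceSelector:
            matchLabels:
              omnia.altairalabs.ai/shared: "true"
    - to:                        # external APIs, minus RFC 1918 ranges
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 10.0.0.0/8
              - 172.16.0.0/12
              - 192.168.0.0/16
```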

Disable external API access (blocks internet egress except DNS):

```yaml
spec:
  networkPolicy:
    isolate: true
    allowExternalAPIs: false
```

Allow Private Networks (Local Development)

For local development or when agents need to access services on private networks (e.g., local LLM servers), enable allowPrivateNetworks to remove the RFC 1918 exclusions:

```yaml
spec:
  networkPolicy:
    isolate: true
    allowPrivateNetworks: true  # Allows 10.x, 172.16.x, 192.168.x
```

Allow traffic from specific namespaces (e.g., ingress controller):

```yaml
spec:
  networkPolicy:
    isolate: true
    allowFrom:
      - peers:
          - namespaceSelector:
              matchLabels:
                kubernetes.io/metadata.name: ingress-nginx
```

Allow egress to internal databases:

```yaml
spec:
  networkPolicy:
    isolate: true
    allowTo:
      - peers:
          - ipBlock:
              cidr: 10.0.0.0/8  # Internal network
        ports:
          - protocol: TCP
            port: 5432  # PostgreSQL
          - protocol: TCP
            port: 6379  # Redis
```

NetworkPolicyRule:

| Field | Type | Description |
|-------|------|-------------|
| `peers` | []NetworkPolicyPeer | Sources (ingress) or destinations (egress) |
| `ports` | []NetworkPolicyPort | Ports to allow (optional; all ports if omitted) |

NetworkPolicyPeer:

| Field | Type | Description |
|-------|------|-------------|
| `namespaceSelector.matchLabels` | map[string]string | Select namespaces by label |
| `podSelector.matchLabels` | map[string]string | Select pods by label |
| `ipBlock.cidr` | string | IP block in CIDR notation |
| `ipBlock.except` | []string | CIDRs to exclude from the block |

NetworkPolicyPort:

| Field | Type | Description |
|-------|------|-------------|
| `protocol` | string | TCP, UDP, or SCTP (default: TCP) |
| `port` | integer | Port number |

Budget and cost control settings for the workspace.

| Field | Type | Default | Required |
|-------|------|---------|----------|
| `costControls.dailyBudget` | string | - | No |
| `costControls.monthlyBudget` | string | - | No |
| `costControls.budgetExceededAction` | string | `warn` | No |
| `costControls.alertThresholds` | []CostAlertThreshold | `[]` | No |

Budget values are in USD (e.g., `"100.00"`, `"2000.00"`).

| Value | Description |
|-------|-------------|
| `warn` | Log warnings when budget is exceeded |
| `pauseJobs` | Pause Arena jobs when budget is exceeded |
| `block` | Block new API requests when budget is exceeded |

```yaml
spec:
  costControls:
    dailyBudget: "100.00"
    monthlyBudget: "2000.00"
    budgetExceededAction: pauseJobs
    alertThresholds:
      - percent: 80
        notify:
          - "team-lead@acme.com"
      - percent: 95
        notify:
          - "team-lead@acme.com"
          - "finance@acme.com"
```

Current lifecycle phase of the Workspace.

| Value | Description |
|-------|-------------|
| `Pending` | Workspace is being set up |
| `Ready` | Workspace is ready for use |
| `Suspended` | Workspace is suspended |
| `Error` | Workspace has an error |

Most recent generation observed by the controller.

Namespace status information.

| Field | Description |
|-------|-------------|
| `status.namespace.name` | Name of the created namespace |
| `status.namespace.created` | Whether the namespace was created by the controller |

ServiceAccounts created for this workspace.

| Field | Description |
|-------|-------------|
| `status.serviceAccounts.owner` | Name of the owner ServiceAccount |
| `status.serviceAccounts.editor` | Name of the editor ServiceAccount |
| `status.serviceAccounts.viewer` | Name of the viewer ServiceAccount |

Member count by role.

| Field | Description |
|-------|-------------|
| `status.members.owners` | Count of owner members |
| `status.members.editors` | Count of editor members |
| `status.members.viewers` | Count of viewer members |

NetworkPolicy status information.

| Field | Description |
|-------|-------------|
| `status.networkPolicy.name` | Name of the generated NetworkPolicy |
| `status.networkPolicy.enabled` | Whether network isolation is active |
| `status.networkPolicy.rulesCount` | Total number of ingress and egress rules |

Current cost tracking information.

| Field | Description |
|-------|-------------|
| `status.costUsage.dailySpend` | Current day's spending in USD |
| `status.costUsage.dailyBudget` | Configured daily budget in USD |
| `status.costUsage.monthlySpend` | Current month's spending in USD |
| `status.costUsage.monthlyBudget` | Configured monthly budget in USD |
| `status.costUsage.lastUpdated` | Timestamp of the last cost calculation |
Condition types:

| Type | Description |
|------|-------------|
| `Ready` | Overall workspace readiness |
| `NamespaceReady` | Namespace is created and configured |
| `ServiceAccountsReady` | ServiceAccounts are created |
| `RoleBindingsReady` | RBAC resources are configured |
| `NetworkPolicyReady` | NetworkPolicy is configured (if enabled) |
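Putting the status fields together, the status of a Ready workspace might look like the sketch below. All values (and the ServiceAccount naming scheme) are illustrative, not guaranteed controller output:

```yaml
status:
  phase: Ready
  namespace:
    name: omnia-customer-support
    created: true
  serviceAccounts:                              # names are illustrative
    owner: workspace-customer-support-owner
    editor: workspace-customer-support-editor
    viewer: workspace-customer-support-viewer
  members:
    owners: 2
    editors: 8
    viewers: 3
  networkPolicy:
    name: workspace-customer-support-isolation
    enabled: true
    rulesCount: 6
  costUsage:
    dailySpend: "42.17"
    dailyBudget: "100.00"
    monthlySpend: "812.50"
    monthlyBudget: "2000.00"
    lastUpdated: "2026-01-15T12:00:00Z"
  conditions:
    - type: Ready
      status: "True"
```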
```yaml
apiVersion: omnia.altairalabs.ai/v1alpha1
kind: Workspace
metadata:
  name: customer-support
spec:
  displayName: "Customer Support Team"
  description: "Team responsible for customer support AI agents"
  environment: production
  defaultTags:
    team: "customer-support"
    cost-center: "CC-1234"
  namespace:
    name: omnia-customer-support
    create: true
    labels:
      environment: production
  roleBindings:
    # Owners: Full workspace control
    - groups:
        - "omnia-cs-admins@acme.com"
      role: owner
    # Editors: Create/modify resources
    - groups:
        - "omnia-cs-engineers@acme.com"
      role: editor
    # Viewers: Read-only
    - groups:
        - "omnia-cs-contractors@acme.com"
      role: viewer
    # CI/CD ServiceAccounts
    - serviceAccounts:
        - name: argocd-application-controller
          namespace: argocd
      role: editor
  quotas:
    compute:
      requests.cpu: "50"
      requests.memory: "100Gi"
      limits.cpu: "100"
      limits.memory: "200Gi"
    objects:
      configmaps: 100
      secrets: 50
      persistentvolumeclaims: 20
    arena:
      maxConcurrentJobs: 10
      maxJobsPerDay: 100
      maxWorkersPerJob: 50
    agents:
      maxAgentRuntimes: 20
      maxReplicasPerAgent: 10
  # Network isolation
  networkPolicy:
    isolate: true
    allowFrom:
      - peers:
          - namespaceSelector:
              matchLabels:
                kubernetes.io/metadata.name: ingress-nginx
    allowTo:
      - peers:
          - ipBlock:
              cidr: 10.0.0.0/8
        ports:
          - protocol: TCP
            port: 5432
  # Cost controls
  costControls:
    monthlyBudget: "5000.00"
    budgetExceededAction: warn
    alertThresholds:
      - percent: 80
        notify:
          - "cs-admins@acme.com"
```

The dashboard provides a workspace switcher for managing multiple workspaces. Users see only the workspaces they have access to, based on their IdP group membership.

The dashboard uses workspace-scoped API endpoints:

| Endpoint | Description |
|----------|-------------|
| `GET /api/workspaces` | List accessible workspaces |
| `GET /api/workspaces/{name}` | Get workspace details |
| `GET /api/workspaces/{name}/agents` | List agents in workspace |
| `GET /api/workspaces/{name}/promptpacks` | List prompt packs in workspace |
| `GET /api/workspaces/{name}/agents/{agentName}/logs` | Get agent logs |
| `GET /api/workspaces/{name}/agents/{agentName}/events` | Get agent events |

The dashboard manages ServiceAccount tokens for workspace-scoped K8s API access. Each workspace has three ServiceAccounts (owner, editor, viewer) with corresponding RBAC permissions. The dashboard fetches short-lived tokens to make API calls with the appropriate permission level.

1. The user authenticates via OIDC (Okta, Azure AD, Google)
2. The JWT contains claims: `{ email, groups: ["group1", "group2", ...] }`
3. The dashboard checks which of those groups appear in the workspace `roleBindings`
4. It grants the highest-privilege role found
5. K8s API calls are made using the matching workspace ServiceAccount token
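Step 4 (grant the highest-privilege role found) can be sketched in a few lines. The binding shape mirrors `spec.roleBindings`; `resolve_role` and `ROLE_RANK` are hypothetical helpers for illustration, not part of the product:

```python
# Hypothetical sketch of role resolution; not the dashboard's actual code.
ROLE_RANK = {"viewer": 0, "editor": 1, "owner": 2}

def resolve_role(user_groups, role_bindings):
    """Return the highest-privilege role granted by any group binding, or None."""
    best = None
    groups = set(user_groups)
    for binding in role_bindings:
        # A binding matches if the user belongs to any of its groups
        if groups & set(binding.get("groups", [])):
            role = binding["role"]
            if best is None or ROLE_RANK[role] > ROLE_RANK[best]:
                best = role
    return best

bindings = [
    {"groups": ["omnia-engineers@acme.com"], "role": "editor"},
    {"groups": ["omnia-admins@acme.com"], "role": "owner"},
]
print(resolve_role(["omnia-engineers@acme.com", "omnia-admins@acme.com"], bindings))  # owner
```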

This design keeps the Workspace CRD small (10-20 groups) even with 10,000+ users. User management happens in your IdP, not in Kubernetes.