Deploy on KIND and Kubernetes

This guide walks you through deploying the Open Operator Platform (OOP) on a local KIND (Kubernetes in Docker) cluster using the Helm charts in the repository. The same umbrella chart can be used on other Kubernetes clusters. The kind environment is used here for a reproducible local setup.

Prerequisites

Software

Tool Minimum version Notes
Docker 20.x KIND runs Kubernetes nodes in Docker containers
kind 0.20 Install kind
kubectl 1.25 Install kubectl
Helm v3 Install Helm
Bash 4+ Required by the bootstrap and deploy scripts

Verify your installations:

docker --version
kind --version
kubectl version --client
helm version

Hardware

For a comfortable local run of the full platform:

Resource Minimum Recommended
CPU 4 cores 6+ cores
RAM 12 GiB 16 GiB
Disk 20 GiB free 30+ GiB free

Repository

You need the OOP repository with the helm directory (umbrella chart and scripts). All deploy commands below are run from the helm directory (the one that contains deploy-on-kind.sh):

cd /path/to/oop/helm
ls deploy-on-kind.sh oop-platform-chart environments/kind

Values file order

Helm merges values in order: later files override earlier ones. For the kind deployment the precedence order is:

  1. helm/oop-platform-chart/values.yaml — base defaults (images, resources, ClusterIPs, no hostPath).
  2. helm/environments/kind/values.yaml — kind-specific overrides (NodePorts, hostPath persistence, storageClass, image pullPolicy, config).
  3. helm/environments/kind/secrets.values.yaml — used for secrets (e.g. ai2.secrets.groqApiKey). If absent, the deploy script continues but AI2 will have an empty Groq key.

So: base → kind values → secrets. Anything in environments/kind/values.yaml overrides the base chart, and anything in secrets.values.yaml overrides both.
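To make the precedence concrete, here is a hypothetical excerpt. The base default shown (ClusterIP) is illustrative; the kind-side keys and port come from the parameter table below:

```yaml
# helm/oop-platform-chart/values.yaml — base default (illustrative excerpt)
oeg:
  oegcontroller:
    service:
      type: ClusterIP
---
# helm/environments/kind/values.yaml — later file, so these keys win
oeg:
  oegcontroller:
    service:
      type: NodePort
      nodePort: 32263
```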

Important parameters (kind values)

These are the main overrides in environments/kind/values.yaml that you might tune:

Section Parameter Purpose
SRM srm.srmcontroller.service.nodePort NodePort for SRM (default 32415)
srm.srmcontroller.env.networkAdapterName Network adapter name (e.g. oai)
OEG oeg.oegcontroller.service.nodePort NodePort for OEG API (default 32263)
Federation Manager federationManager.federationManager.service.nodePort NodePort for FM (default 30989)
federationManager.federationManager.config.partner_op Partner OP host/port for federation
federationManager.federationManager.config.edgeCloudPlatform Edge cloud (lite2edge) host/port; prerequisite for testing federation (see below)
federationManager.keycloak.service.nodePort Keycloak NodePort (default 30081)
AI2 ai2.service.mcp.nodePort / ai2.service.aiAgent.nodePort MCP and AI Agent NodePorts
ai2.secrets.groqApiKey Set via secrets.values.yaml (required for AI2)
Portal portal.service.nodePort Portal UI (default 30082)

Changing NodePorts requires re-bootstrapping the cluster

Each NodePort in values.yaml must match the corresponding containerPort entry in environments/kind/cluster.yaml. If you change a NodePort, update both files and re-run ./deploy-on-kind.sh — the KIND cluster must be recreated for the new port mapping to take effect.
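The pairing looks roughly like this (both excerpts are illustrative sketches; check the actual files for the complete node definition):

```yaml
# environments/kind/values.yaml — the service's NodePort
oeg:
  oegcontroller:
    service:
      nodePort: 32263
---
# environments/kind/cluster.yaml — containerPort must equal the NodePort above
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 32263
        hostPort: 32263
```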

Edge cloud platform (lite2edge)

Only required for federation testing

You can skip this section entirely if you are not testing federation.

The chart is configured to use lite2edge via federationManager.config.edgeCloudPlatform:

  • host — default lite2edge.lite2edge.svc.cluster.local (Kubernetes service in namespace lite2edge)
  • port — default 80
  • client_name — default lite2edge
  • flavour_id — default default (override with your lite2edge flavour UUID if required)

Lite2edge is not deployed by this platform chart. To test federation you must deploy lite2edge separately (e.g. in namespace lite2edge). If lite2edge runs elsewhere, override in your values:

federationManager:
  federationManager:
    config:
      edgeCloudPlatform:
        host: "your-lite2edge-host"
        port: "80"
        client_name: "lite2edge"
        flavour_id: "your-flavour-id"

Deploy

From the helm directory, run the single script that does everything — bootstraps the cluster, configures networking, and installs all components via Helm:

cd /path/to/oop/helm
./deploy-on-kind.sh

After a few minutes, all components should be scheduled and starting.
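Rather than polling by hand, you can block until readiness with kubectl wait (a sketch; adjust the timeout to your machine and cluster size):

```shell
# Wait for every pod in the oop namespace to report Ready (up to 10 minutes)
kubectl wait --for=condition=Ready pods --all -n oop --timeout=600s
```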

Add a Groq API key for AI2 (required only for this component)

AI2 requires a Groq API key for LLM calls. Create environments/kind/secrets.values.yaml in the helm directory before running the deploy script:

ai2:
  secrets:
    groqApiKey: "<your-groq-api-key>"

If the file is absent the deploy script will warn and continue, but AI2 will not be able to make LLM calls.

Verify deployment

Check pods and services

kubectl get pods -n oop
kubectl get svc -n oop

All pods should reach Running. If any remain Pending or enter CrashLoopBackOff, use the debugging commands below.

Pod prefix Component
srmcontroller Service Resource Manager
artefact-manager Artefact Manager
oegcontroller Open Exposure Gateway
oegmongo OEG MongoDB
mongosrm SRM MongoDB
mongodb Federation Manager MongoDB
federation-manager Federation Manager
keycloak Keycloak
ai2-mcp AI2 MCP
ai2-ai-agent AI2 AI Agent
portal Portal

Access URLs (via localhost)

The kind config maps NodePorts to the host. After deployment:

Service URL
OEG API (Swagger) http://localhost:32263/oeg/1.0.0/docs/
SRM http://localhost:32415
Artefact Manager http://localhost:30080
Federation Manager http://localhost:30989
Keycloak http://localhost:30081
Keycloak Admin http://localhost:30081/admin
AI2 MCP http://localhost:32004
AI2 AI Agent http://localhost:32013
Portal http://localhost:30082 (username: oop, password: oop)

Quick smoke test:

curl -s http://localhost:32263/oeg/1.0.0/edge-cloud-zones | head -200

You should get a JSON array of edge cloud zones (or an empty array).

What the scripts do

If you want to understand or run the steps individually:

Bootstrap the KIND cluster

./scripts/kind-bootstrap.sh

This script:

  1. Checks that kind and kubectl are installed.
  2. Creates host storage directories: /tmp/kind-oop/mongodb_srm, mongodb_oeg, mongodb_fm.
  3. Creates the KIND cluster oop-cluster from environments/kind/cluster.yaml (port mappings and host mounts).
  4. Creates namespace oop, service account oop-user and a cluster-admin binding for that account.
  5. Waits for nodes to be Ready.

If the cluster already exists, the script asks whether to delete and recreate. To switch context manually:

kubectl config use-context kind-oop-cluster

Deploy with Helm

./scripts/helm-deploy.sh

This script:

  1. Checks for environments/kind/secrets.values.yaml and adds it to the Helm command if present.
  2. Generates a short-lived token for the oop-user service account (used by SRM to talk to the Kubernetes API).
  3. Runs (the secrets values flag is included only if the file exists; a comment cannot follow a line-continuation backslash, so it is noted here instead):
helm upgrade --install oop-platform ./oop-platform-chart \
  -n oop \
  --create-namespace \
  -f environments/kind/values.yaml \
  -f environments/kind/secrets.values.yaml \
  --set srm.srmcontroller.env.kubernetesMasterToken="<token>" \
  --set federationManager.enabled=true

Debugging and checking

Pod not starting or crashing

# List pods and status
kubectl get pods -n oop

# Describe a pod (events, image pull, resource limits)
kubectl describe pod <pod-name> -n oop

# Logs (current)
kubectl logs <pod-name> -n oop

# Logs (follow)
kubectl logs -f <pod-name> -n oop

# Previous container log (after crash)
kubectl logs <pod-name> -n oop --previous

Example for the OEG controller:

kubectl logs deployment/oegcontroller -n oop -f

Helm release status

helm status oop-platform -n oop
helm list -n oop

Node and storage

kubectl get nodes
kubectl get pv
kubectl get pvc -n oop

If MongoDB PVCs stay Pending, check that the host paths exist and that the kind node has the correct mounts (see cluster.yaml and bootstrap script).
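A quick way to check both sides of the mount. This assumes a single-node cluster whose node container follows kind's default <cluster>-control-plane naming, and that cluster.yaml mounts the directories at the same path inside the node:

```shell
# Host side: the bootstrap script should have created these directories
ls -ld /tmp/kind-oop/mongodb_srm /tmp/kind-oop/mongodb_oeg /tmp/kind-oop/mongodb_fm

# Node side: verify the directories are visible inside the kind node container
docker exec oop-cluster-control-plane ls /tmp/kind-oop
```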

SRM ↔ Kubernetes token

SRM needs a token to create resources in the cluster. The deploy script sets srm.srmcontroller.env.kubernetesMasterToken with a token from oop-user. If you upgrade manually, generate a new token:

kubectl -n oop create token oop-user --duration=720h

Then (from the helm directory):

helm upgrade oop-platform ./oop-platform-chart -n oop \
  -f environments/kind/values.yaml \
  --set srm.srmcontroller.env.kubernetesMasterToken="<paste-token-here>"

Image pull issues

If images are from a private registry, ensure imagePullSecrets or node configuration is set. For kind, the base and kind values use pullPolicy: IfNotPresent so locally built images can be used after loading:

kind load docker-image <your-image> --name oop-cluster

Upgrade and re-deploy

To upgrade after changing values or charts (from the helm directory):

TOKEN=$(kubectl -n oop create token oop-user --duration=720h)
helm upgrade oop-platform ./oop-platform-chart -n oop \
  -f environments/kind/values.yaml \
  -f environments/kind/secrets.values.yaml \
  --set srm.srmcontroller.env.kubernetesMasterToken="$TOKEN" \
  --set federationManager.enabled=true

Cleanup

Uninstall the release and delete the cluster:

helm uninstall oop-platform -n oop
kind delete cluster --name oop-cluster

Optional: remove host storage:

sudo rm -rf /tmp/kind-oop

Deploy on an existing Kubernetes cluster

If you already have a Kubernetes cluster (e.g. on-premises, a managed cloud service such as EKS/GKE/AKS, or any other Kubernetes distribution) you can skip the KIND-specific steps entirely and install OOP directly with Helm.

Prerequisites

You need kubectl pointing at your cluster and helm v3 installed. The kind tool and deploy-on-kind.sh script are not required.

1. Create the namespace, service account and RBAC

SRM needs a service account with cluster-admin rights to manage Kubernetes resources. Run these once against your cluster:

kubectl create namespace oop
kubectl create serviceaccount oop-user -n oop
kubectl create clusterrolebinding oop-user-binding \
  --clusterrole=cluster-admin \
  --serviceaccount=oop:oop-user
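If you prefer declarative manifests over imperative commands, the same three resources can be expressed as YAML and applied with kubectl apply -f:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: oop
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: oop-user
  namespace: oop
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oop-user-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: oop-user
    namespace: oop
```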

2. Create a custom values file

The environments/kind/values.yaml file is KIND-specific (hostPath volumes, manual StorageClass, NodePort services). For a real cluster you need your own values file. Create, for example, environments/my-cluster/values.yaml:

srm:
  mongodb:
    persistence:
      storageClass: "standard"   # replace with your cluster's StorageClass
  srmcontroller:
    service:
      type: LoadBalancer          # or ClusterIP + Ingress
    env:
      networkAdapterName: "oai"   # set to your network adapter name

oeg:
  mongodb:
    persistence:
      storageClass: "standard"
  oegcontroller:
    service:
      type: LoadBalancer

federationManager:
  federationManager:
    config:
      partner_op:
        host: ""      # set if using federation
        port: ""
    service:
      type: LoadBalancer
  mongodb:
    persistence:
      storageClass: "standard"
  keycloak:
    service:
      type: LoadBalancer

ai2:
  service:
    mcp:
      type: LoadBalancer
    aiAgent:
      type: LoadBalancer
  secrets:
    groqApiKey: ""  # set via secrets file (see below)

portal:
  service:
    type: LoadBalancer

StorageClass

Run kubectl get storageclass to list available StorageClasses in your cluster. Replace "standard" with the appropriate name.

For the Groq API key, create a separate secrets file (e.g. environments/my-cluster/secrets.values.yaml) and keep it out of version control:

ai2:
  secrets:
    groqApiKey: "<your-groq-api-key>"

3. Generate the SRM token and install

From the helm directory:

cd /path/to/oop/helm

# Generate a long-lived token for the SRM service account
TOKEN=$(kubectl -n oop create token oop-user --duration=720h)

# Install (or upgrade) the platform
helm upgrade --install oop-platform ./oop-platform-chart \
  -n oop \
  --create-namespace \
  -f environments/my-cluster/values.yaml \
  -f environments/my-cluster/secrets.values.yaml \
  --set srm.srmcontroller.env.kubernetesMasterToken="$TOKEN" \
  --set federationManager.enabled=true

4. Verify

kubectl get pods -n oop
kubectl get svc -n oop

Once pods are Running, use the external IPs or hostnames assigned to the LoadBalancer services (visible in kubectl get svc -n oop) to access each component. The port assignments and component names are the same as described in the Verify deployment section above.
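For example, to read the external address of one LoadBalancer service (the service name portal is an assumption here; use kubectl get svc -n oop to find the real names):

```shell
# Print the external IP of the Portal LoadBalancer service
# (some providers populate .hostname instead of .ip)
kubectl get svc portal -n oop \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```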

See also

  • Using the platform — quickstart and workflows once the platform is running.
  • Helm README: README.md in the helm directory of the repository.