Deploy on KIND and Kubernetes¶
This guide walks you through deploying the Open Operator Platform (OOP) on a local KIND (Kubernetes in Docker) cluster using the Helm charts in the repository. The same umbrella chart can be used on other Kubernetes clusters. The kind environment is used here for a reproducible local setup.
Prerequisites¶
Software¶
| Tool | Minimum version | Notes |
|---|---|---|
| Docker | 20.x | KIND runs Kubernetes nodes in Docker containers |
| kind | 0.20 | Install kind |
| kubectl | 1.25 | Install kubectl |
| Helm | v3 | Install Helm |
| Bash | 4+ | Required by the bootstrap and deploy scripts |
Verify your installations:
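A quick way to check all four CLIs at once (version flags are the standard ones for each tool; the loop only warns when a tool is missing):

```bash
# Print each required tool's version; warn instead of aborting when one is missing
for tool in docker kind kubectl helm; do
  if command -v "$tool" >/dev/null 2>&1; then
    case "$tool" in
      kubectl) kubectl version --client ;;
      helm)    helm version --short ;;
      *)       "$tool" --version ;;
    esac
  else
    echo "WARNING: $tool not found in PATH"
  fi
done
```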
Hardware¶
For a comfortable local run of the full platform:
| Resource | Minimum | Recommended |
|---|---|---|
| CPU | 4 cores | 6+ cores |
| RAM | 12 GiB | 16 GiB |
| Disk | 20 GiB free | 30+ GiB free |
Repository¶
You need the OOP repository with the `helm` directory (umbrella chart and scripts). All deploy commands below are run from the `helm` directory (the one that contains `deploy-on-kind.sh`).
Values file order¶
Helm merges values in order: later files override earlier ones. For the kind deployment the order is:

1. `helm/oop-platform-chart/values.yaml` — base defaults (images, resources, ClusterIPs, no hostPath).
2. `helm/environments/kind/values.yaml` — kind-specific overrides (NodePorts, hostPath persistence, storageClass, image pullPolicy, config).
3. `helm/environments/kind/secrets.values.yaml` — used for secrets (e.g. `ai2.secrets.groqApiKey`). If absent, the deploy script continues but AI2 will have an empty Groq key.
So: base → kind values → secrets. Anything in `environments/kind/values.yaml` overrides the base chart; anything in `secrets.values.yaml` overrides both.
Important parameters (kind values)¶
These are the main overrides in environments/kind/values.yaml that you might tune:
| Section | Parameter | Purpose |
|---|---|---|
| SRM | `srm.srmcontroller.service.nodePort` | NodePort for SRM (default 32415) |
| SRM | `srm.srmcontroller.env.networkAdapterName` | Network adapter name (e.g. `oai`) |
| OEG | `oeg.oegcontroller.service.nodePort` | NodePort for OEG API (default 32263) |
| Federation Manager | `federationManager.federationManager.service.nodePort` | NodePort for FM (default 30989) |
| Federation Manager | `federationManager.federationManager.config.partner_op` | Partner OP host/port for federation |
| Federation Manager | `federationManager.federationManager.config.edgeCloudPlatform` | Edge cloud (lite2edge) host/port; prerequisite for testing federation (see below) |
| Federation Manager | `federationManager.keycloak.service.nodePort` | Keycloak NodePort (default 30081) |
| AI2 | `ai2.service.mcp.nodePort` / `ai2.service.aiAgent.nodePort` | MCP and AI Agent NodePorts |
| AI2 | `ai2.secrets.groqApiKey` | Set via `secrets.values.yaml` (required for AI2) |
| Portal | `portal.service.nodePort` | Portal UI (default 30082) |
Changing NodePorts requires re-bootstrapping the cluster
Each NodePort in `values.yaml` must match the corresponding `containerPort` entry in `environments/kind/cluster.yaml`. If you change a NodePort, update both files and re-run `./deploy-on-kind.sh` — the KIND cluster must be recreated for the new port mapping to take effect.
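For reference, the port-mapping section of `environments/kind/cluster.yaml` follows the standard kind config schema. A minimal sketch (the two ports shown are the SRM and OEG defaults from the table above; the repository file will contain one entry per NodePort):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 32263   # must match oeg.oegcontroller.service.nodePort
        hostPort: 32263
      - containerPort: 32415   # must match srm.srmcontroller.service.nodePort
        hostPort: 32415
```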
Edge cloud platform (lite2edge)¶
Only required for federation testing
You can skip this section entirely if you are not testing federation.
The chart is configured to use lite2edge via `federationManager.federationManager.config.edgeCloudPlatform`:
- `host` — default `lite2edge.lite2edge.svc.cluster.local` (Kubernetes service in namespace `lite2edge`)
- `port` — default `80`
- `client_name` — default `lite2edge`
- `flavour_id` — default `default` (override with your lite2edge flavour UUID if required)
Lite2edge is not deployed by this platform chart. To test federation you must deploy lite2edge separately (e.g. in namespace lite2edge). If lite2edge runs elsewhere, override in your values:
```yaml
federationManager:
  federationManager:
    config:
      edgeCloudPlatform:
        host: "your-lite2edge-host"
        port: "80"
        client_name: "lite2edge"
        flavour_id: "your-flavour-id"
```
Deploy¶
From the `helm` directory, run `./deploy-on-kind.sh`. This single script does everything: it bootstraps the cluster, configures networking, and installs all components via Helm.
After a few minutes, all components should be scheduled and starting.
Add a Groq API key for AI2 (required only for this component)¶
AI2 requires a Groq API key for LLM calls. Create environments/kind/secrets.values.yaml in the helm directory before running the deploy script:
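A minimal `secrets.values.yaml`, using the `ai2.secrets.groqApiKey` path from the values table above (the key itself is a placeholder):

```yaml
ai2:
  secrets:
    groqApiKey: "gsk_your-groq-api-key"   # placeholder - use your real key
```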
If the file is absent the deploy script will warn and continue, but AI2 will not be able to make LLM calls.
Verify deployment¶
Check pods and services¶
All pods should reach Running. If any stay Pending or CrashLoopBackOff, use the debugging commands below.
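For example (requires the kubeconfig context created by the bootstrap script; the commands print a hint instead of failing when no cluster is reachable):

```bash
# List pods and services in the oop namespace
kubectl get pods -n oop 2>/dev/null || echo "cluster not reachable - is the kind cluster running?"
kubectl get svc -n oop 2>/dev/null || echo "cluster not reachable"
```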
| Pod prefix | Component |
|---|---|
srmcontroller |
Service Resource Manager |
artefact-manager |
Artefact Manager |
oegcontroller |
Open Exposure Gateway |
oegmongo |
OEG MongoDB |
mongosrm |
SRM MongoDB |
mongodb |
Federation Manager MongoDB |
federation-manager |
Federation Manager |
keycloak |
Keycloak |
ai2-mcp |
AI2 MCP |
ai2-ai-agent |
AI2 AI Agent |
portal |
Portal |
Access URLs (via localhost)¶
The kind config maps NodePorts to the host. After deployment:
| Service | URL |
|---|---|
| OEG API (Swagger) | http://localhost:32263/oeg/1.0.0/docs/ |
| SRM | http://localhost:32415 |
| Artefact Manager | http://localhost:30080 |
| Federation Manager | http://localhost:30989 |
| Keycloak | http://localhost:30081 |
| Keycloak Admin | http://localhost:30081/admin |
| AI2 MCP | http://localhost:32004 |
| AI2 AI Agent | http://localhost:32013 |
| Portal | http://localhost:30082 (username: oop, password: oop) |
Quick smoke test:
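A hedged example against the OEG API — the endpoint path here is a guess, so check the Swagger UI at `/oeg/1.0.0/docs/` for the actual zones route:

```bash
# Endpoint path is hypothetical; replace with the zones route from the OEG Swagger UI
curl -s http://localhost:32263/oeg/1.0.0/zones 2>/dev/null || echo "OEG not reachable on localhost:32263"
```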
You should get a JSON array of edge cloud zones (or an empty array).
What the scripts do¶
If you want to understand or run the steps individually:
Bootstrap the KIND cluster¶
This script:
- Checks that `kind` and `kubectl` are installed.
- Creates host storage directories: `/tmp/kind-oop/mongodb_srm`, `mongodb_oeg`, `mongodb_fm`.
- Creates the KIND cluster `oop-cluster` from `environments/kind/cluster.yaml` (port mappings and host mounts).
- Creates namespace `oop`, service account `oop-user` and a cluster-admin binding for that account.
- Waits for nodes to be Ready.
If the cluster already exists, the script asks whether to delete and recreate. To switch context manually:
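kind prefixes context names with `kind-`, so the cluster `oop-cluster` gets the context `kind-oop-cluster`:

```bash
kubectl config use-context kind-oop-cluster 2>/dev/null || echo "context not found - has the cluster been created?"
```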
Deploy with Helm¶
This script:
- Checks for `environments/kind/secrets.values.yaml` and adds it to the Helm command if present.
- Generates a short-lived token for the `oop-user` service account (used by SRM to talk to the Kubernetes API).
- Runs:
```bash
# (the secrets values file line is included only when the file exists)
helm upgrade --install oop-platform ./oop-platform-chart \
  -n oop \
  --create-namespace \
  -f environments/kind/values.yaml \
  -f environments/kind/secrets.values.yaml \
  --set srm.srmcontroller.env.kubernetesMasterToken="<token>" \
  --set federationManager.enabled=true
```
Debugging and checking¶
Pod not starting or crashing¶
```bash
# List pods and status
kubectl get pods -n oop

# Describe a pod (events, image pull, resource limits)
kubectl describe pod <pod-name> -n oop

# Logs (current)
kubectl logs <pod-name> -n oop

# Logs (follow)
kubectl logs -f <pod-name> -n oop

# Previous container log (after crash)
kubectl logs <pod-name> -n oop --previous
```
Example for the OEG controller:
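One way to resolve the pod name by its `oegcontroller` prefix (the lookup is a sketch; you can equally substitute the exact name from `kubectl get pods -n oop`):

```bash
# Find the first pod whose name starts with oegcontroller, then inspect it
POD=$(kubectl get pods -n oop -o name 2>/dev/null | grep '^pod/oegcontroller' | head -n 1)
if [ -n "$POD" ]; then
  kubectl describe "$POD" -n oop
  kubectl logs "$POD" -n oop
else
  echo "no oegcontroller pod found - is the platform deployed?"
fi
```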
Helm release status¶
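The release is installed as `oop-platform` in the `oop` namespace:

```bash
helm status oop-platform -n oop 2>/dev/null || echo "release not found - is helm installed and the cluster reachable?"
helm list -n oop 2>/dev/null || echo "helm or cluster unavailable"
```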
Node and storage¶
If MongoDB PVCs stay Pending, check that the host paths exist and that the kind node has the correct mounts (see cluster.yaml and bootstrap script).
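To inspect the PVCs and the host directories created by the bootstrap script:

```bash
# Pending PVCs usually point at a missing StorageClass or hostPath mount
kubectl get pvc -n oop 2>/dev/null || echo "cluster not reachable"
kubectl get nodes 2>/dev/null || true
ls /tmp/kind-oop 2>/dev/null || echo "/tmp/kind-oop missing - re-run the bootstrap script"
```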
SRM ↔ Kubernetes token¶
SRM needs a token to create resources in the cluster. The deploy script sets srm.srmcontroller.env.kubernetesMasterToken with a token from oop-user. If you upgrade manually, generate a new token:
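Using the `oop-user` service account created at bootstrap:

```bash
# 720h keeps SRM working for ~30 days; copy the printed token into the helm command below
TOKEN=$(kubectl -n oop create token oop-user --duration=720h 2>/dev/null) || TOKEN=""
echo "${TOKEN:-token generation failed - is the cluster reachable?}"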
Then (from the helm directory):
```bash
helm upgrade oop-platform ./oop-platform-chart -n oop \
  -f environments/kind/values.yaml \
  --set srm.srmcontroller.env.kubernetesMasterToken="<paste-token-here>"
```
Image pull issues¶
If images are from a private registry, ensure imagePullSecrets or node configuration is set. For kind, the base and kind values use pullPolicy: IfNotPresent so locally built images can be used after loading:
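For example, loading a locally built image into the kind node so `IfNotPresent` can find it (the image name is illustrative):

```bash
# Copies a local Docker image into the oop-cluster nodes
kind load docker-image oop/srmcontroller:dev --name oop-cluster 2>/dev/null \
  || echo "kind cluster not found or image not built"
```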
Upgrade and re-deploy¶
To upgrade after changing values or charts (from the helm directory):
```bash
TOKEN=$(kubectl -n oop create token oop-user --duration=720h)
helm upgrade oop-platform ./oop-platform-chart -n oop \
  -f environments/kind/values.yaml \
  -f environments/kind/secrets.values.yaml \
  --set srm.srmcontroller.env.kubernetesMasterToken="$TOKEN" \
  --set federationManager.enabled=true
```
Cleanup¶
Uninstall the release and delete the cluster:
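Release and cluster names as created by the scripts:

```bash
helm uninstall oop-platform -n oop 2>/dev/null || echo "release not found"
kind delete cluster --name oop-cluster 2>/dev/null || echo "cluster not found"
```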
Optional: remove host storage:
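The host paths are the ones created by the bootstrap script:

```bash
# Deletes the MongoDB hostPath data under /tmp/kind-oop
rm -rf /tmp/kind-oop
```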
Deploy on an existing Kubernetes cluster¶
If you already have a Kubernetes cluster (e.g. on-premises, a managed cloud service such as EKS/GKE/AKS, or any other Kubernetes distribution) you can skip the KIND-specific steps entirely and install OOP directly with Helm.
Prerequisites¶
You need `kubectl` pointing at your cluster and `helm` v3 installed. The `kind` tool and `deploy-on-kind.sh` script are not required.
1. Create the namespace, service account and RBAC¶
SRM needs a service account with cluster-admin rights to manage Kubernetes resources. Run these once against your cluster:
```bash
kubectl create namespace oop
kubectl create serviceaccount oop-user -n oop
kubectl create clusterrolebinding oop-user-binding \
  --clusterrole=cluster-admin \
  --serviceaccount=oop:oop-user
```
2. Create a custom values file¶
The environments/kind/values.yaml file is KIND-specific (hostPath volumes, manual StorageClass, NodePort services). For a real cluster you need your own values file. Create, for example, environments/my-cluster/values.yaml:
```yaml
srm:
  mongodb:
    persistence:
      storageClass: "standard"   # replace with your cluster's StorageClass
  srmcontroller:
    service:
      type: LoadBalancer         # or ClusterIP + Ingress
    env:
      networkAdapterName: "oai"  # set to your network adapter name

oeg:
  mongodb:
    persistence:
      storageClass: "standard"
  oegcontroller:
    service:
      type: LoadBalancer

federationManager:
  federationManager:
    config:
      partner_op:
        host: ""                 # set if using federation
        port: ""
    service:
      type: LoadBalancer
  mongodb:
    persistence:
      storageClass: "standard"
  keycloak:
    service:
      type: LoadBalancer

ai2:
  service:
    mcp:
      type: LoadBalancer
    aiAgent:
      type: LoadBalancer
  secrets:
    groqApiKey: ""               # set via secrets file (see below)

portal:
  service:
    type: LoadBalancer
```
StorageClass
Run kubectl get storageclass to list available StorageClasses in your cluster. Replace "standard" with the appropriate name.
For the Groq API key, create a separate secrets file (e.g. environments/my-cluster/secrets.values.yaml) and keep it out of version control:
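Same shape as the kind secrets file — only the `ai2.secrets.groqApiKey` path is needed (the key shown is a placeholder):

```yaml
# environments/my-cluster/secrets.values.yaml - add this path to .gitignore
ai2:
  secrets:
    groqApiKey: "gsk_your-groq-api-key"
```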
3. Generate the SRM token and install¶
From the helm directory:
```bash
cd /path/to/oop/helm

# Generate a long-lived token for the SRM service account
TOKEN=$(kubectl -n oop create token oop-user --duration=720h)

# Install (or upgrade) the platform
helm upgrade --install oop-platform ./oop-platform-chart \
  -n oop \
  --create-namespace \
  -f environments/my-cluster/values.yaml \
  -f environments/my-cluster/secrets.values.yaml \
  --set srm.srmcontroller.env.kubernetesMasterToken="$TOKEN" \
  --set federationManager.enabled=true
```
4. Verify¶
Once pods are Running, use the external IPs or hostnames assigned to the LoadBalancer services (visible in kubectl get svc -n oop) to access each component. The port assignments and component names are the same as described in the Verify deployment section above.
See also¶
- Using the platform — quickstart and workflows once the platform is running.
- Helm README: `README.md` in the `helm` directory of the repository.