EURECOM NEF & Core Simulator

This page explains how to deploy the EURECOM open-exposure NEF & Core Simulator stack and connect it to a running OOP instance.

OOP must already be deployed

This guide assumes you have a working OOP deployment on KIND (or another Kubernetes cluster). See Deploy on KIND and Kubernetes if you need to set that up first.


Tested component versions

All services in the stack are built from the develop branch of the open-exposure repositories. The table below records the exact commit each container was built from at the time this integration was validated.

| Container | Image | Git commit (develop branch) | Image built |
|---|---|---|---|
| 3gpp-monitoring-event | openexposure/monitoring-event:develop | 64d7ee3661e044eb8b08d130e1c25705150b0945 | 2026-02-18 |
| 3gpp-ueid | openexposure/ue-id:develop | 201321a10ff3449154daf0134a91d2c40d86c7af | 2026-01-29 |
| 3gpp-as-session-with-qos | openexposure/as-session-with-qos:develop | ae60c8db252bd4ddc429f7ff73cc5ad2cbb1a532 | 2026-01-29 |
| 3gpp-traffic-influence | openexposure/traffic-influence:develop | 0b2d54f2f310f00df3a2af2999b79d1021be4a01 | 2026-01-29 |
| 3gpp-ue-address | openexposure/ue-address:develop | d1c06031b7be7fdd592f218168b33f03143f6099 | 2026-01-29 |
| core-simulator | openexposure/core-simulator:develop | 723b1b4f9b540f70fe0f60aedc68d8fffe31d2f5 | 2026-01-30 |
| core-network-service | openexposure/core-network-service:develop | (not logged) | 2026-02-18 |
| ue-profile-service | openexposure/ue-profile-service:develop | 8e9fbb5383e183ce409b4013980f09c61bda591b | 2025-09-29 |
| ue-identity-service | openexposure/ue-identity-service:develop | 6eaabbfae7f4f7dd866608f0a5355236bb80f179 | 2025-09-29 |
| dataset-exporter | openexposure/dataset-exporter:develop | 8ae3a2f054da1f271fdbd2b05f9625b75916b497 | 2025-09-29 |

Prerequisites

| Tool | Minimum version | Purpose |
|---|---|---|
| Docker | 20.x | Container runtime for the NEF stack |
| Docker Compose | v2.0+ | Orchestration for NEF services |
| kind (or another K8s cluster) | 0.20+ | The cluster where OOP is running |

Check your installed versions with:

```shell
docker --version
docker compose version
```

1 — Deploy the NEF & Core Simulator stack

Clone the repository

```shell
git clone https://gitlab.eurecom.fr/open-exposure/nef/deployment.git
cd deployment/nef-coresim-compose
```

Review configuration (optional)

Key files you may want to inspect or customise before starting:

| File | Purpose |
|---|---|
| docker-compose.yaml | Service definitions and networking |
| config/coreSimulator.yaml | Core simulator settings (PLMN, DNN, slices, UE count) |
| config/*.yaml | Service-specific configurations |

The core simulator defaults are usually sufficient for a first run. If you want to change them:

```yaml
# config/coreSimulator.yaml
simulationProfile:
  plmn:
    mcc: "001"
    mnc: "01"
  dnn: "internet"
  slice:
    sst: 1
    sd: "FFFFFF"
  numOfUe: 20
  numOfgNB: 10
  arrivalRate: 2
```
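After editing, a quick sanity check catches YAML mistakes before anything starts. A small sketch: `docker compose config` validates docker-compose.yaml (and the files it references), though not necessarily every file under config/, so still watch the service logs after startup.

```shell
# Validate the compose project after editing; `docker compose config`
# exits non-zero on YAML or schema errors.
check_compose() {
  docker compose config --quiet && echo "compose configuration OK"
}

# Run from deployment/nef-coresim-compose:
# check_compose
```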

Configure QoS profiles

By default the NEF ships with only one CAMARA QoS profile (qos-e). To use all four profiles supported by the OOP adapter (qos-e, qos-s, qos-m, qos-l), update config/asSessionWithQos.yaml so the qosConfig section reads:

```yaml
# config/asSessionWithQos.yaml
qosConfig:
  qos-e:
    marBwDl: 120000
    marBwUl: 120000
    mediaType: CONTROL
  qos-s:
    marBwDl: 240000
    marBwUl: 240000
    mediaType: CONTROL
  qos-m:
    marBwDl: 480000
    marBwUl: 480000
    mediaType: CONTROL
  qos-l:
    marBwDl: 960000
    marBwUl: 960000
    mediaType: VIDEO
```

Start the stack

```shell
docker compose up -d
docker compose ps
```

All containers should reach Up (healthy):

```
NAME                      STATUS          PORTS
3gpp-as-session-with-qos  Up (healthy)
3gpp-monitoring-event     Up (healthy)
3gpp-traffic-influence    Up (healthy)
3gpp-ue-address           Up (healthy)
3gpp-ueid                 Up (healthy)
core-network-service      Up
core-simulator            Up (healthy)    0.0.0.0:8080->8080/tcp
dataset-exporter          Up (healthy)
ingress-controller        Up              0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp
redis                     Up              0.0.0.0:6379->6379/tcp
redisinsight              Up              0.0.0.0:8001->5540/tcp
ue-identity-service       Up (healthy)
ue-profile-service        Up (healthy)
grafana                   Up              0.0.0.0:3000->3000/tcp
prometheus                Up
```
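If you script the bring-up, a small polling helper avoids racing ahead of the Docker health checks. A sketch; the ~60 s timeout is an arbitrary choice:

```shell
# Poll a container until Docker reports it healthy, or give up after ~60s.
wait_healthy() {
  name=$1; tries=${2:-30}
  while [ "$tries" -gt 0 ]; do
    status=$(docker inspect --format '{{.State.Health.Status}}' "$name" 2>/dev/null)
    if [ "$status" = "healthy" ]; then echo "$name is healthy"; return 0; fi
    tries=$((tries - 1)); sleep 2
  done
  echo "$name did not become healthy in time" >&2; return 1
}

# Example: wait_healthy core-simulator && wait_healthy 3gpp-as-session-with-qos
```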

Access points

| Service | URL |
|---|---|
| Core Simulator API | http://localhost:8080 |
| Grafana dashboard | http://localhost:3000 (anonymous access) |
| Redis Insight | http://localhost:8001 |
| Ingress (northbound APIs) | http://localhost:80 |

Initialise the core simulator

Start the simulation so that UEs register with the core network:

```shell
# From the deployment/nef-coresim-compose directory
docker compose exec core-simulator /bin/sh -c "cnsim-cli"
```

At the CLI prompt, run:

```
start
```

For more documentation on the open-exposure NEF & Core Simulator see https://gitlab.eurecom.fr/open-exposure.


2 — Connect the NEF to the OOP Kubernetes network

The NEF runs in Docker Compose while OOP runs inside a KIND (Kubernetes-in-Docker) cluster. They are on separate Docker networks by default. The NEF stack includes an nginx ingress controller that routes requests to the individual NEF services by path prefix (/3gpp-as-session-with-qos, /3gpp-traffic-influence, /3gpp-monitoring-event, etc.). By connecting this single container to the KIND network, all NEF APIs become reachable from OOP.

Attach the ingress controller

```shell
docker network connect kind ingress-controller
```

Obtain the ingress IP on the KIND network

```shell
docker inspect ingress-controller \
  --format='{{range $name, $net := .NetworkSettings.Networks}}{{$name}} {{$net.IPAddress}}{{"\n"}}{{end}}' \
  | grep kind
```

You will see output like `kind 172.18.0.4`. Note the IP address; it is needed in the next step.

A one-liner to capture it into a variable:

```shell
NEF_IP=$(docker inspect ingress-controller \
  --format='{{(index .NetworkSettings.Networks "kind").IPAddress}}')
echo "NEF ingress IP on KIND network: $NEF_IP"
```
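With `NEF_IP` set, you can confirm reachability from inside the cluster before touching any OOP configuration. A sketch using a throwaway curl pod; the `oop` namespace and the `curlimages/curl` image are assumptions:

```shell
# Curl the NEF ingress from a one-shot pod inside the KIND cluster.
# Any HTTP status code (even 404) proves network-level reachability.
nef_reachable() {
  kubectl run nef-check --rm -i --restart=Never -n oop \
    --image=curlimages/curl -- \
    curl -s -o /dev/null -w '%{http_code}\n' "http://$1/"
}

# Example: nef_reachable "$NEF_IP"
```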

3 — Configure OOP to use the NEF

The SRM (Service Resource Manager) is the OOP component that talks to southbound adapters. You must tell it where the NEF lives and which SCS/AS identifier to use.

Update Helm values

Open helm/environments/kind/values.yaml and set the following three keys under srm.srmcontroller.env, replacing the IP with the NEF_IP obtained above:

```yaml
srm:
  srmcontroller:
    env:
      networkAdapterName: "oai"
      networkAdapterBaseUrl: "http://<NEF_IP>:80"   # e.g. http://172.18.0.4:80
      scsAsId: "scs-test"
```

networkAdapterName is already set to "oai" in the KIND values file. You only need to fill in networkAdapterBaseUrl and scsAsId.

Verify the SRM deployment template

The SRM deployment template must map these values to environment variables. Confirm the following entries exist in oop-platform-chart/charts/srm/templates/srmcontroller-deployment.yaml:

```yaml
env:
  - name: NETWORK_ADAPTER_NAME
    value: {{ .Values.srmcontroller.env.networkAdapterName | quote }}
  - name: NETWORK_ADAPTER_BASE_URL
    value: {{ .Values.srmcontroller.env.networkAdapterBaseUrl | quote }}
  - name: SCS_AS_ID
    value: {{ .Values.srmcontroller.env.scsAsId | quote }}
```
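Rather than reading the template by eye, you can render the chart locally and grep for the wired-through variables. A sketch; run it from the helm directory, and adjust the `--show-only` path if your chart layout differs:

```shell
# Render only the SRM controller deployment and print the env wiring,
# without touching the cluster.
check_srm_env() {
  helm template oop-platform ./oop-platform-chart \
    -f environments/kind/values.yaml \
    --show-only charts/srm/templates/srmcontroller-deployment.yaml \
    | grep -E -A1 'NETWORK_ADAPTER_NAME|NETWORK_ADAPTER_BASE_URL|SCS_AS_ID'
}

# Run from the helm directory:
# check_srm_env
```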

Apply the changes

From the helm directory, upgrade the release so the SRM picks up the new values. The SRM needs a Kubernetes service-account token to talk to the cluster API — generate one and pass it in the same command:

```shell
cd /path/to/oop/helm
TOKEN=$(kubectl -n oop create token oop-user --duration=720h)
helm upgrade oop-platform ./oop-platform-chart \
  -n oop \
  -f environments/kind/values.yaml \
  -f environments/kind/secrets.values.yaml \
  --set federationManager.enabled=true \
  --set srm.srmcontroller.env.kubernetesMasterToken="$TOKEN"
```

Omit -f environments/kind/secrets.values.yaml if you have not created that file.


4 — Verify the integration

Check SRM environment variables

```shell
kubectl exec -n oop deployment/srmcontroller -- printenv | grep -E "NETWORK_ADAPTER|SCS_AS_ID"
```

Expected output:

```
NETWORK_ADAPTER_NAME=oai
NETWORK_ADAPTER_BASE_URL=http://172.18.0.4:80
SCS_AS_ID=scs-test
```

Confirm UEs are registered

```shell
docker logs core-simulator 2>&1 | grep "PDU Session.*established" | tail -5
```

You should see lines like:

```
PDU Session 1 established (dnn=internet, snssai={Sst:1 Sd:0xc00022c00}, ip=12.1.0.20)
```

Once you see registered UEs the full chain is ready. Head over to Using the platform — QoD sessions via the OEG to create, retrieve and delete QoD sessions.


Request flow

When a CAMARA API call (QoD, traffic influence, location, etc.) is made through the OEG:

  1. The OEG receives the CAMARA request and forwards it to the SRM.
  2. The SRM selects the configured southbound adapter (oai).
  3. The TF-SDK translates the CAMARA request into the adapter-specific model and builds the URL using the networkAdapterBaseUrl (e.g. http://<IP>:80/3gpp-as-session-with-qos/v1/scs-test/subscriptions for QoD).
  4. The request hits the NEF ingress controller, which routes it by path prefix to the correct NEF service.
  5. The NEF service processes the request and interacts with the Core Simulator.
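For intuition, the southbound call in step 3 looks roughly like the sketch below. Illustrative only: the field names follow the 3GPP TS 29.122 AsSessionWithQoS model, and the NEF IP, UE IP, and callback URL are placeholders; your NEF build may expect a slightly different request body.

```shell
# Hypothetical helper mirroring the QoD subscription request the SRM builds.
qod_subscribe() {
  nef_ip=$1; ue_ip=$2; profile=$3
  curl -s -X POST \
    "http://${nef_ip}:80/3gpp-as-session-with-qos/v1/scs-test/subscriptions" \
    -H 'Content-Type: application/json' \
    -d "{\"ueIpv4Addr\":\"${ue_ip}\",\"qosReference\":\"${profile}\",\"notificationDestination\":\"http://oeg-callback.example/notifications\"}"
}

# Example: qod_subscribe 172.18.0.4 12.1.0.20 qos-e
```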

Troubleshooting

NEF unreachable from Kubernetes

Symptom: SRM returns "Connection refused" or "Internal Server Error" when calling NEF APIs.

```shell
# Verify the ingress controller is on the kind network
docker inspect ingress-controller | grep -A 10 '"kind"'

# Re-attach if needed
docker network connect kind ingress-controller
```

Then update networkAdapterBaseUrl in environments/kind/values.yaml with the correct IP and re-run the upgrade command from step 3.

SCS_AS_ID is None or empty

Symptom: SRM logs show a URL containing /None/ or scs_as_id: ''.

```shell
kubectl exec -n oop deployment/srmcontroller -- printenv | grep SCS_AS_ID
```

If the variable is missing, set scsAsId in environments/kind/values.yaml (see step 3) and re-run the upgrade command from step 3.

UE not connected to the network

Symptom: NEF returns 404 Not Found — requested UE is not connected to the network.

Use actual UE IPs from the core simulator logs (12.1.0.x range), not placeholder addresses like 203.0.113.15.

```shell
docker logs core-simulator 2>&1 | grep "PDU Session.*established"
```
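A small helper to pull just the assigned addresses out of those log lines, assuming the `ip=12.1.0.20` format shown above:

```shell
# Extract the unique UE IP addresses from core-simulator log lines on stdin.
extract_ue_ips() {
  grep -oE 'ip=[0-9]+(\.[0-9]+){3}' | cut -d= -f2 | sort -u
}

# Example: docker logs core-simulator 2>&1 | extract_ue_ips
```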

Wrong QoS profile format

Symptom: QoS profile 'QOS_E' not supported.

Profile names must be lowercase and hyphenated: qos-e, qos-s, qos-m, qos-l.

Quick reference

| Issue | Cause | Fix |
|---|---|---|
| Connection refused to NEF | Containers on different Docker networks | `docker network connect kind ingress-controller` |
| URL contains /None/ | Missing SCS_AS_ID env var | Add the env var to the SRM deployment template |
| UE not connected | Using a fake IP address | Use real UE IPs from `docker logs core-simulator` |
| QoS profile not supported | Wrong case | Use lowercase: `qos-e`, not `QOS_E` |

Cleanup

To tear down only the NEF & Core Simulator stack (this does not affect OOP):

```shell
cd deployment/nef-coresim-compose

# Stop services (keeps data)
docker compose stop

# Stop and remove containers (keeps volumes)
docker compose down

# Remove everything including volumes
docker compose down -v
```