The Integration project provides the following artifacts:

  • Heat template to deploy the virtual resources needed for the ONAP deployment
  • Test suites and tools to check the various ONAP components based on Robot Framework
  • Artifacts and documentation for the use-case deployments

Integration Environment Installation

ONAP is deployed on top of Kubernetes through the OOM installer. Kubernetes can be installed on bare metal or on different environments such as OpenStack (private or public cloud), Azure, AWS, etc.

The integration team maintains a Heat template to install ONAP on OpenStack. This template creates the needed resources (VMs, networks, security groups, …) to support an HA Kubernetes cluster and then a full ONAP installation.

Sample OpenStack RC (credential) files, environment files, and deployment scripts are provided; they correspond to the files used on the Wind River environment. This environment is used by the integration team to validate the installation, perform tests, and troubleshoot.

If you intend to deploy your own environment, these files can be used as a reference but must be adapted to your context.

Heat Template Description

The ONAP Integration Project provides a sample HEAT template that fully automates the deployment of ONAP using OOM as described in ONAP Operations Manager (OOM) over Kubernetes.

The ONAP OOM HEAT template deploys the entire ONAP platform. It spins up an HA-enabled Kubernetes cluster, and deploys ONAP using OOM onto this cluster.

  • 1 Shared NFS server (called Rancher VM for legacy reasons)
  • 3 orch VMs for Kubernetes HA controller and etcd roles
  • 12 k8s VMs for Kubernetes HA worker roles

See OOM documentation for details.
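Given the flavor sizes listed later in this document (8 GB for the NFS/Rancher VM, 4 GB for the orch VMs, 16 GB for the k8s workers), a back-of-the-envelope check of the RAM your OpenStack tenant must provide can be sketched as follows. The numbers are illustrative minimums derived from those flavors, not an official quota requirement:

```shell
# Minimum RAM for the 16-VM cluster described above:
# 1 NFS/Rancher VM + 3 orch VMs + 12 k8s worker VMs.
nfs_gb=8      # m1.large
orch_gb=4     # m1.medium
k8s_gb=16     # m1.xlarge
total=$(( 1 * nfs_gb + 3 * orch_gb + 12 * k8s_gb ))
echo "${total} GB RAM minimum"   # → 212 GB RAM minimum
```

Disk and vCPU quotas should be checked the same way against the flavors defined in your cloud.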

Quick Start

Using the Wind River lab configuration as an example, here is what you need to do to deploy ONAP:

git clone https://git.onap.org/integration
cd integration/deployment/heat/onap-rke/
source ./env/windriver/Integration-SB-00-openrc
./scripts/deploy.sh ./env/windriver/onap-oom.env
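Sourcing the RC file exports the OS_* credentials that the OpenStack clients and the deploy script read. A minimal sketch of the effect, with placeholder values instead of real Wind River credentials:

```shell
# What sourcing an OpenStack RC file does, in essence: it exports OS_*
# variables (the values below are placeholders, not real credentials).
export OS_AUTH_URL="https://keystone.example.org:5000/v3"
export OS_PROJECT_NAME="Integration-SB-00"
export OS_USERNAME="demo"
# Fail fast if the credentials are missing before launching the deploy script.
: "${OS_AUTH_URL:?source your OpenStack RC file first}"
echo "RC loaded for project ${OS_PROJECT_NAME}"   # → RC loaded for project Integration-SB-00
```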

Environment and RC files

Before deploying ONAP to your own environment, it is necessary to customize the environment and RC files. You should make a copy of the sample RC and environment files shown above and customize the values for your specific OpenStack environment.

The environment file contains a block called integration_override_yaml.

The content of this block is used by OOM to override some of the installation parameters used in its helm charts.

This file may deal with:

  • Cloud adaptation (use the defined flavors, available images)
  • Proxies (apt, docker,..)
  • Pre-defined resources for use cases (networks, tenant references)
  • Performance tuning (initialization timers)

Performance tuning reflects the adaptation to the hardware at a given time. The lab may evolve, and the timers must evolve with it.

Be sure to customize the necessary values within this block to match your OpenStack environment as well.
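As a hypothetical illustration, a lab-specific copy of the environment file might be prepared as below. The parameter names mirror those discussed in this document, but the path and override values are placeholders to adapt, not the real Wind River content:

```shell
# Create a lab-specific environment file containing a minimal
# integration_override_yaml block; every value must be adapted to your cloud.
mkdir -p env/mylab
cat > env/mylab/onap-oom.env <<'EOF'
parameters:
  key_name: onap_key
  apt_proxy: ""
  docker_proxy: ""
  integration_override_yaml: |
    global:
      flavor: small
EOF
grep -c 'integration_override_yaml' env/mylab/onap-oom.env   # → 1
```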

Notes on select parameters


rancher_vm_flavor: m1.large
k8s_vm_flavor: m1.xlarge
etcd_vm_flavor: m1.medium # not currently used
orch_vm_flavor: m1.medium

key_name: onap_key

helm_deploy_delay: 2.5m

It is recommended that you set up an apt proxy and a docker proxy local to your lab. If you do not wish to use such proxies, you can set the apt_proxy and docker_proxy parameters to the empty string “”.

rancher_vm_flavor needs to have 8 GB of RAM. k8s_vm_flavor needs to have at least 16 GB of RAM. orch_vm_flavor needs to have 4 GB of RAM. By default the template assumes that you have already imported a keypair named “onap_key” into your OpenStack environment. If the desired keypair has a different name, change the key_name parameter.

The helm_deploy_delay parameter introduces a delay between the deployments of each ONAP helm subchart to help alleviate system load or contention issues caused by trying to spin up too many pods simultaneously. The value of this parameter is passed to the Linux “sleep” command. Adjust this parameter based on the performance and load characteristics of your OpenStack environment.
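The mechanism can be sketched as follows; the chart names are illustrative stand-ins, since the real installer iterates over the actual OOM subcharts:

```shell
# Sketch: the helm_deploy_delay value is handed verbatim to "sleep"
# between subchart deployments (GNU sleep accepts suffixed values like "2.5m").
helm_deploy_delay="2.5m"
for chart in aai sdc so; do        # illustrative subchart names
  echo "deploying ${chart}"
  # sleep "${helm_deploy_delay}"   # uncommented in a real deployment loop
done
```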

Exploring the Rancher VM

The Rancher VM that is spun up by this HEAT template serves the following key roles:

  • Hosts the /dockerdata-nfs/ NFS export shared by all the k8s VMs for persistent storage
  • git clones the oom repo into /root/oom
  • git clones the integration repo into /root/integration
  • Creates the helm override file at /root/integration-override.yaml
  • Deploys ONAP using helm and OOM

Integration Continuous Integration Guide

Continuous Integration is key due to the complexity of the ONAP projects. Several chains have been created:

  • Daily stable chain
  • Daily master chain
  • Gating: On demand deployment of a full ONAP solution to validate patchsets

They are run on different environments (Orange labs, DT labs, Azure Cloud).

This document details these chains, how you could set up similar chains, and how to provide test results to the community.

Integration CI Ecosystem


The global ecosystem can be described as follows:


Several chains are run in ONAP. The CI chains are triggered from different CI systems (Jenkins or gitlab-ci) (1) on different target environments hosted on community labs (Wind River, Orange, DT, Ericsson) or Azure clouds. Jobs (installation, tests) are executed on these labs (2). At the end, the results are pushed through the OPNFV test API (3) to a test database (4) hosted by the Linux Foundation. Results can be reported in different web pages hosted on the LF or elsewhere (5).

Daily Chains

CI daily chains (Master and latest Stable) are run on Orange and DT labs using gitlab-ci jobs, and on Ericsson using Jenkins jobs.


Gating

OOM gating has been introduced for El Alto. It consists of a deployment followed by a set of tests on patchsets submitted to the OOM repository.

The CI part is managed with gitlab-ci, and the deployment is executed on the ONAP Orange lab and Azure clouds. The goal is to provide feedback - and ultimately to vote - on a code change prior to merge, to consolidate the OOM Master branch.

The developer can evaluate the consequences of his/her patchset on a fresh installation.

The gating is triggered in 2 scenarios:

  • a new patchset is submitted in OOM
  • a comment with the magic word oom_redeploy is posted in the Gerrit comment section

The procedure to submit a new feature in CI is done in 3 steps, as described in the figure below:


Visualization of the CI pipelines

As the CI chains are triggered from different systems, several web interfaces can be used to visualize them.

A web site has been created to centralize the links.


For the gating and gitlab-ci based CI chains, the pipelines consist of pipelines of pipelines managed through the chaining of .gitlab-ci.yml files thanks to an open source project called chained-ci. A visualization tool is available to list all your chains, as described in the figure below:


If you click on any element of the chain, you will open a new window:


In order to provide the logs to the developer, an additional web page has been created to summarize the tests and grant access to their associated logs:


Additionally, for the daily chain, another page displays the results as time series, allowing you to see the evolution of the tests over time.


Setup Your Own CI Chains

If you want to set up a gitlab-ci based CI chain and want to use chained-ci, you can follow the chained-ci tutorial.

You should be able to chain your automation projects:

  • Creation of the resources
  • Deployment of Kubernetes
  • Test of your Kubernetes cluster (using the OPNFV functest-k8s tests)
  • Deployment of ONAP (you can use your own automatic installation procedure or an existing one)
  • Test of ONAP thanks to the different ONAP xtesting dockers covering infrastructure healthcheck, component healthcheck tests, end-to-end tests, and security tests

If you want to report your results to the community, do not hesitate to contact the integration team. The Test database is public but the pods must be declared to be allowed to report results from third party labs.
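As a sketch, a result record pushed through the test API is a simple JSON document. The field names below follow the OPNFV test API conventions, but the pod name and endpoint are placeholders you would replace with the values agreed with the integration team:

```shell
# Hypothetical result payload for the test database
# (your pod must be declared before any push is accepted).
payload='{"project_name":"integration","case_name":"onap-k8s","pod_name":"my-lab","criteria":"PASS"}'
echo "${payload}"
# A real push would look like this (TEST_DB_URL is a placeholder):
# curl -X POST "${TEST_DB_URL}/api/v1/results" \
#      -H "Content-Type: application/json" -d "${payload}"
```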

ONAP Integration Testing Gate

The following categories have been defined for the ONAP integration testing gate:

  • infrastructure healthcheck: verifies ONAP from a Kubernetes perspective. It includes 2 tests: onap-k8s (all the deployments, jobs, statefulsets, … must be OK at the end of an installation) and onap-helm (all the helm charts must be completed at the end of the installation)
  • healthcheck: the traditional Robot tests run from the cluster to perform tests on the different components
  • smoke-usecases: end-to-end tests
  • candidate-usecases: new end-to-end tests introduced in the automation chain for the release
  • security tests: security of Kubernetes (CVE, CIS tests) and of ONAP (exposed ports, checking whether containers run as root, …)
  • benchmarking (robustness, stress tests): not yet available

All these tests have been packaged thanks to the OPNFV open source tool xtesting. Xtesting is a Python package that unifies the way tests are declared and run. It also ensures a consistent way to get the test results whatever test framework is used (Python, Robot Framework, bash, …). It includes a mechanism to automatically push the results to the test database using the test API, which simplifies the integration in CI.

The xtesting package is available on PyPI.

The different ONAP xtesting dockers are publicly available as Docker images.

As an illustration, you can run the infrastructure healthcheck by typing the following command:

docker run -v <the kube config>:/root/.kube/config -v <result directory>:/var/lib/xtesting/results <infrastructure healthcheck image>

(/var/lib/xtesting/results is the default xtesting output directory; replace <infrastructure healthcheck image> with the ONAP infra-healthcheck image reference for your release.)

All the xtesting tests are included in the Daily and gating chains. Please note that you can build your own onap-xtesting docker if you want to include your tests; see the xtesting documentation for details.