OOM Quick Start Guide


Once a Kubernetes environment is available (follow the instructions in the OOM Cloud Setup Guide if you don't have one), use the following instructions to deploy ONAP.

Step 1. Clone the OOM repository from ONAP gerrit:

> git clone -b <BRANCH> http://gerrit.onap.org/r/oom --recurse-submodules
> cd oom/kubernetes

where <BRANCH> can be an official release tag, such as 4.0.0-ONAP for Dublin or 5.0.1-ONAP for El Alto

Step 2. Install Helm Plugins required to deploy ONAP:

> sudo cp -R ~/oom/kubernetes/helm/plugins/ ~/.helm

Step 3. Customize the Helm charts, such as oom/kubernetes/onap/values.yaml, or an override file such as onap-all.yaml, onap-vfw.yaml or openstack.yaml, to suit your deployment with items like the OpenStack tenant information.

Note

Standard and example override files (e.g. onap-all.yaml, openstack.yaml) can be found in the oom/kubernetes/onap/resources/overrides/ directory.
  1. You may want to selectively enable or disable ONAP components by changing the enabled: true/false flags.
  2. Encrypt the OpenStack password using the shell tool for robot and put it in the robot Helm charts or the robot section of openstack.yaml.
  3. Encrypt the OpenStack password using the Java-based script for the SO Helm charts or the SO section of openstack.yaml.
  4. Update the OpenStack parameters that will be used by the robot, SO and APPC Helm charts, or use an override file to replace them.

a. Enabling/Disabling Components: Here is an example of the nominal entries that need to be provided. Different values files are available for different contexts.

# Copyright © 2019 Amdocs, Bell Canada
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#       http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

#################################################################
# Global configuration overrides.
#
# These overrides will affect all helm charts (ie. applications)
# that are listed below and are 'enabled'.
#################################################################
global:
  # Change to an unused port prefix range to prevent port conflicts
  # with other instances running within the same k8s cluster
  nodePortPrefix: 302
  nodePortPrefixExt: 304

  # ONAP Repository
  # Uncomment the following to enable the use of a single docker
  # repository but ONLY if your repository mirrors all ONAP
  # docker images. This includes all images from dockerhub and
  # any other repository that hosts images for ONAP components.
  #repository: nexus3.onap.org:10001
  repositoryCred:
    user: docker
    password: docker

  # readiness check - temporary repo until images migrated to nexus3
  readinessRepository: oomk8s
  # logging agent - temporary repo until images migrated to nexus3
  loggingRepository: docker.elastic.co

  # image pull policy
  pullPolicy: Always

  # default mount path root directory referenced
  # by persistent volumes and log files
  persistence:
    mountPath: /dockerdata-nfs
    enableDefaultStorageclass: false
    parameters: {}
    storageclassProvisioner: kubernetes.io/no-provisioner
    volumeReclaimPolicy: Retain

  # override default resource limit flavor for all charts
  flavor: unlimited

  # flag to enable debugging - application support required
  debugEnabled: false

  #Global ingress configuration
  ingress:
    enabled: false
    virtualhost:
        enabled: true
        baseurl: "simpledemo.onap.org"
#################################################################
# Enable/disable and configure helm charts (ie. applications)
# to customize the ONAP deployment.
#################################################################
aaf:
  enabled: false
aai:
  enabled: false
appc:
  enabled: false
  config:
    openStackType: OpenStackProvider
    openStackName: OpenStack
    openStackKeyStoneUrl: http://localhost:8181/apidoc/explorer/index.html
    openStackServiceTenantName: default
    openStackDomain: default
    openStackUserName: admin
    openStackEncryptedPassword: admin
cassandra:
  enabled: false
cds:
  enabled: false
clamp:
  enabled: false
cli:
  enabled: false
consul:
  enabled: false
contrib:
  enabled: false
dcaegen2:
  enabled: false
pnda:
  enabled: false
dmaap:
  enabled: false
esr:
  enabled: false
log:
  enabled: false
sniro-emulator:
  enabled: false
oof:
  enabled: false
mariadb-galera:
  enabled: false
msb:
  enabled: false
multicloud:
  enabled: false
nbi:
  enabled: false
  config:
    # openstack configuration
    openStackRegion: "Yolo"
    openStackVNFTenantId: "1234"
policy:
  enabled: false
pomba:
  enabled: false
portal:
  enabled: false
robot:
  enabled: false
  config:
    # openStackEncryptedPasswordHere should match the encrypted string used in SO and APPC and overridden per environment
    openStackEncryptedPasswordHere: "c124921a3a0efbe579782cde8227681e"
sdc:
  enabled: false
sdnc:
  enabled: false

  replicaCount: 1

  mysql:
    replicaCount: 1
so:
  enabled: false

  replicaCount: 1

  liveness:
    # necessary to disable liveness probe when setting breakpoints
    # in debugger so K8s doesn't restart unresponsive container
    enabled: false

  # so server configuration
  config:
    # message router configuration
    dmaapTopic: "AUTO"
    # openstack configuration
    openStackUserName: "vnf_user"
    openStackRegion: "RegionOne"
    openStackKeyStoneUrl: "http://1.2.3.4:5000"
    openStackServiceTenantName: "service"
    openStackEncryptedPasswordHere: "c124921a3a0efbe579782cde8227681e"

  # configure embedded mariadb
  mariadb:
    config:
      mariadbRootPassword: password
uui:
  enabled: false
vfc:
  enabled: false
vid:
  enabled: false
vnfsdk:
  enabled: false
modeling:
  enabled: false

b. Generating ROBOT Encrypted Password: The ROBOT encrypted password uses the same encryption.key as SO, but with an openssl algorithm that works with the Python-based Robot Framework.

Note

To generate ROBOT openStackEncryptedPasswordHere:

cd ~/oom/kubernetes/so/resources/config/mso
echo -n "<openstack tenant password>" | openssl aes-128-ecb -e -K `cat encryption.key` -nosalt | xxd -c 256 -p
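As a self-contained illustration, the same pipeline can be run with dummy values (the key and password below are placeholders; in a real deployment substitute `cat encryption.key` and your tenant password):

```shell
# Dummy 128-bit key in hex, for illustration only; use the real encryption.key in practice
KEY=aa3871669d893c7fb8abbcda31b88b4f
# Encrypt a sample password; the resulting hex string is the value for
# openStackEncryptedPasswordHere in the robot section of openstack.yaml
echo -n "sample-tenant-password" | openssl aes-128-ecb -e -K "$KEY" -nosalt | xxd -c 256 -p
```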

c. Generating SO Encrypted Password: The SO encrypted password uses a Java-based encryption utility, since the Java encryption library is not easy to integrate with the openssl/Python approach that ROBOT uses in Dublin.

Note

To generate SO openStackEncryptedPasswordHere and openStackSoEncryptedPassword ensure default-jdk is installed:

apt-get update; apt-get install default-jdk

Then execute:

SO_ENCRYPTION_KEY=`cat ~/oom/kubernetes/so/resources/config/mso/encryption.key`
OS_PASSWORD=XXXX_OS_CLEARTESTPASSWORD_XXXX

git clone http://gerrit.onap.org/r/integration
cd integration/deployment/heat/onap-rke/scripts

javac Crypto.java
java Crypto "$OS_PASSWORD" "$SO_ENCRYPTION_KEY"
d. Updating the OpenStack parameters:

There are assumptions in the demonstration VNF heat templates about the networking available in the environment. To get the most value out of these templates and the automation that can help confirm the setup is correct, please observe the following constraints.

openStackPublicNetId:
This network should allow heat templates to add interfaces. It need not be an external network; floating IPs can be assigned to the ports on the VMs that are created by the heat template, but it is important that neutron allows ports to be created on them.
openStackPrivateNetCidr: "10.0.0.0/16"
This IP address block is used to assign OA&M addresses to VNFs to allow ONAP connectivity. The demonstration heat templates assume that the 10.0 prefix can be used by the VNFs, and the demonstration IP addressing plan embodied in the preload template prevents conflicts when instantiating the various VNFs. If you need to change this, you will need to modify the preload data in the robot Helm chart, such as integration_preload_parameters.py, and the demo/heat/preload_data in the robot container. The size of the CIDR should be sufficient for ONAP and the VMs you expect to create.
openStackOamNetworkCidrPrefix: "10.0"
This IP prefix must match openStackPrivateNetCidr and is a helper variable for some of the robot demonstration scripts. A production deployment need not worry about this setting, but for the demonstration VNFs the IP assignment strategy assumes the 10.0 prefix.

Example Keystone v2.0

global:
  repository: 10.12.5.2:5000
  pullPolicy: IfNotPresent
#################################################################
# This override file configures openstack parameters for ONAP
#################################################################
appc:
  config:
    enableClustering: false
    openStackType: "OpenStackProvider"
    openStackName: "OpenStack"
    openStackKeyStoneUrl: "http://10.12.25.2:5000/v2.0"
    openStackServiceTenantName: "OPENSTACK_TENANTNAME_HERE"
    openStackDomain: "Default"
    openStackUserName: "OPENSTACK_USERNAME_HERE"
    openStackEncryptedPassword: "XXXXXXXXXXXXXXXXXXXXXXXX_OPENSTACK_PASSWORD_HERE_XXXXXXXXXXXXXXXX"
robot:
  appcUsername: "appc@appc.onap.org"
  appcPassword: "demo123456!"
  openStackKeyStoneUrl: "http://10.12.25.2:5000"
  openStackPublicNetId: "971040b2-7059-49dc-b220-4fab50cb2ad4"
  openStackTenantId: "09d8566ea45e43aa974cf447ed591d77"
  openStackUserName: "OPENSTACK_USERNAME_HERE"
  ubuntu14Image: "ubuntu-14-04-cloud-amd64"
  ubuntu16Image: "ubuntu-16-04-cloud-amd64"
  openStackPrivateNetId: "c7824f00-bef7-4864-81b9-f6c3afabd313"
  openStackPrivateSubnetId: "2a0e8888-f93e-4615-8d28-fc3d4d087fc3"
  openStackPrivateNetCidr: "10.0.0.0/16"
  openStackSecurityGroup: "3a7a1e7e-6d15-4264-835d-fab1ae81e8b0"
  openStackOamNetworkCidrPrefix: "10.0"
  dcaeCollectorIp: "10.12.6.88"
  vnfPubKey: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDKXDgoo3+WOqcUG8/5uUbk81+yczgwC4Y8ywTmuQqbNxlY1oQ0YxdMUqUnhitSXs5S/yRuAVOYHwGg2mCs20oAINrP+mxBI544AMIb9itPjCtgqtE2EWo6MmnFGbHB4Sx3XioE7F4VPsh7japsIwzOjbrQe+Mua1TGQ5d4nfEOQaaglXLLPFfuc7WbhbJbK6Q7rHqZfRcOwAMXgDoBqlyqKeiKwnumddo2RyNT8ljYmvB6buz7KnMinzo7qB0uktVT05FH9Rg0CTWH5norlG5qXgP2aukL0gk1ph8iAt7uYLf1ktp+LJI2gaF6L0/qli9EmVCSLr1uJ38Q8CBflhkh"
  demoArtifactsVersion: "1.4.0-SNAPSHOT"
  demoArtifactsRepoUrl: "https://nexus.onap.org/content/repositories/releases"
  scriptVersion: "1.4.0-SNAPSHOT"
  rancherIpAddress: "10.12.5.127"
  config:
    # openStackEncryptedPasswordHere should match the encrypted string used in SO and APPC and overridden per environment
    openStackEncryptedPasswordHere: "XXXXXXXXXXXXXXXXXXXXXXXX_OPENSTACK_ENCRYPTED_PASSWORD_HERE_XXXXXXXXXXXXXXXX"
so:
  # so server configuration
  so-catalog-db-adapter:
    config:
      openStackUserName: "OPENSTACK_USERNAME_HERE"
      openStackKeyStoneUrl: "http://10.12.25.2:5000/v2.0"
      openStackEncryptedPasswordHere: "XXXXXXXXXXXXXXXXXXXXXXXX_OPENSTACK_ENCRYPTED_PASSWORD_HERE_XXXXXXXXXXXXXXXX"

Example Keystone v3 (required for Rocky and later releases)

global:
  repository: 10.12.5.2:5000
  pullPolicy: IfNotPresent
#################################################################
# This override file configures openstack parameters for ONAP
#################################################################
robot:
  enabled: true
  flavor: large
  appcUsername: "appc@appc.onap.org"
  appcPassword: "demo123456!"
  # KEYSTONE Version 3  Required for Rocky and beyond
  openStackKeystoneAPIVersion: "v3"
  # OS_AUTH_URL without the /v3 from the openstack .RC file
  openStackKeyStoneUrl: "http://10.12.25.2:5000"
  # OS_PROJECT_ID from the openstack .RC file
  openStackTenantId: "09d8566ea45e43aa974cf447ed591d77"
  # OS_USERNAME from the openstack .RC file
  openStackUserName: "OS_USERNAME_HERE"
  #  OS_PROJECT_DOMAIN_ID from the openstack .RC file
  #  in some environments it is a string but in other environments it may be numeric
  openStackDomainId:  "default"
  #  OS_USER_DOMAIN_NAME from the openstack .RC file
  openStackUserDomain:  "Default"
  openStackProjectName: "OPENSTACK_PROJECT_NAME_HERE"
  ubuntu14Image: "ubuntu-14-04-cloud-amd64"
  ubuntu16Image: "ubuntu-16-04-cloud-amd64"
  openStackPublicNetId: "971040b2-7059-49dc-b220-4fab50cb2ad4"
  openStackPrivateNetId: "83c84b68-80be-4990-8d7f-0220e3c6e5c8"
  openStackPrivateSubnetId: "e571c1d1-8ac0-4744-9b40-c3218d0a53a0"
  openStackPrivateNetCidr: "10.0.0.0/16"
  openStackOamNetworkCidrPrefix: "10.0"
  openStackSecurityGroup: "bbe028dc-b64f-4f11-a10f-5c6d8d26dc89"
  dcaeCollectorIp: "10.12.6.109"
  vnfPubKey: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDKXDgoo3+WOqcUG8/5uUbk81+yczgwC4Y8ywTmuQqbNxlY1oQ0YxdMUqUnhitSXs5S/yRuAVOYHwGg2mCs20oAINrP+mxBI544AMIb9itPjCtgqtE2EWo6MmnFGbHB4Sx3XioE7F4VPsh7japsIwzOjbrQe+Mua1TGQ5d4nfEOQaaglXLLPFfuc7WbhbJbK6Q7rHqZfRcOwAMXgDoBqlyqKeiKwnumddo2RyNT8ljYmvB6buz7KnMinzo7qB0uktVT05FH9Rg0CTWH5norlG5qXgP2aukL0gk1ph8iAt7uYLf1ktp+LJI2gaF6L0/qli9EmVCSLr1uJ38Q8CBflhkh"
  demoArtifactsVersion: "1.4.0"
  demoArtifactsRepoUrl: "https://nexus.onap.org/content/repositories/releases"
  scriptVersion: "1.4.0"
  rancherIpAddress: "10.12.6.160"
  config:
    # use the python utility to encrypt the OS_PASSWORD for the OS_USERNAME
    openStackEncryptedPasswordHere: "XXXXXXXXXXXXXXXXXXXXXXXX_OPENSTACK_PYTHON_PASSWORD_HERE_XXXXXXXXXXXXXXXX"
    openStackSoEncryptedPassword:  "YYYYYYYYYYYYYYYYYYYYYYYY_OPENSTACK_JAVA_PASSWORD_HERE_YYYYYYYYYYYYYYYY"
so:
  enabled: true
  so-catalog-db-adapter:
    config:
      openStackUserName: "OS_USERNAME_HERE"
      # OS_AUTH_URL (keep the /v3) from the openstack .RC file
      openStackKeyStoneUrl: "http://10.12.25.2:5000/v3"
      # use the SO Java utility to encrypt the OS_PASSWORD for the OS_USERNAME
      openStackEncryptedPasswordHere: "YYYYYYYYYYYYYYYYYYYYYYYY_OPENSTACK_JAVA_PASSWORD_HERE_YYYYYYYYYYYYYYYY"
appc:
  enabled: true
  replicaCount: 3
  config:
    enableClustering: true
    openStackType: "OpenStackProvider"
    openStackName: "OpenStack"
    openStackKeyStoneUrl: "http://10.12.25.2:5000/v3"
    openStackServiceTenantName: "OPENSTACK_PROJECT_NAME_HERE"
    openStackDomain: "OPEN_STACK_DOMAIN_NAME_HERE"
    openStackUserName: "OS_USER_NAME_HERE"
    openStackEncryptedPassword: "OPENSTACK_CLEAR_TEXT_PASSWORD_HERE"

Step 4. To set up a local Helm server to serve the ONAP charts:

> helm serve &

Note the port number that is listed and use it in the Helm repo add command as follows:

> helm repo add local http://127.0.0.1:8879

Step 5. Verify your Helm repository setup with:

> helm repo list
NAME   URL
local  http://127.0.0.1:8879

Step 6. Build a local Helm repository (from the kubernetes directory):

> make all; make onap

Step 7. Display the ONAP charts that are available to be deployed:

> helm search onap -l
NAME                	CHART VERSION	APP VERSION	DESCRIPTION                                 
local/onap                	5.0.0        	Dublin  Open Network Automation Platform (ONAP)
local/aaf                 	5.0.0        	        ONAP Application Authorization Framework
local/aai                 	5.0.0        	        ONAP Active and Available Inventory
local/appc                	5.0.0        	        Application Controller
local/cassandra           	5.0.0        	        ONAP cassandra
local/cds                 	5.0.0        	        ONAP Controller Design Studio (CDS)
local/clamp               	5.0.0        	        ONAP Clamp
local/cli                 	5.0.0        	        ONAP Command Line Interface
local/common              	5.0.0        	        Common templates for inclusion in other charts
local/consul              	5.0.0        	        ONAP Consul Agent
local/contrib             	5.0.0        	        ONAP optional tools
local/dcaegen2            	5.0.0        	        ONAP DCAE Gen2
local/dgbuilder           	5.0.0        	        D.G. Builder application
local/dmaap               	5.0.0        	        ONAP DMaaP components
local/esr                 	5.0.0        	        ONAP External System Register
local/log                 	5.0.0        	        ONAP Logging ElasticStack
local/mariadb-galera      	5.0.0        	        Chart for MariaDB Galera cluster
local/mongo               	5.0.0        	        MongoDB Server
local/msb                 	5.0.0        	        ONAP MicroServices Bus
local/multicloud          	5.0.0        	        ONAP multicloud broker
local/music               	5.0.0        	        MUSIC - Multi-site State Coordination Service
local/mysql               	5.0.0        	        MySQL Server
local/nbi                 	5.0.0        	        ONAP Northbound Interface
local/network-name-gen    	5.0.0        	        Name Generation Micro Service
local/nfs-provisioner     	5.0.0        	        NFS provisioner
local/oof                 	5.0.0        	        ONAP Optimization Framework
local/pnda                	5.0.0        	        ONAP DCAE PNDA
local/policy              	5.0.0        	        ONAP Policy Administration Point
local/pomba               	5.0.0        	        ONAP Post Orchestration Model Based Audit
local/portal              	5.0.0        	        ONAP Web Portal
local/postgres            	5.0.0        	        ONAP Postgres Server
local/robot               	5.0.0        	        A helm Chart for kubernetes-ONAP Robot
local/sdc                 	5.0.0        	        Service Design and Creation Umbrella Helm charts
local/sdnc                	5.0.0        	        SDN Controller
local/sdnc-prom           	5.0.0        	        ONAP SDNC Policy Driven Ownership Management
local/sniro-emulator      	5.0.0        	        ONAP Mock Sniro Emulator
local/so                  	5.0.0        	        ONAP Service Orchestrator
local/uui                 	5.0.0        	        ONAP uui
local/vfc                 	5.0.0        	        ONAP Virtual Function Controller (VF-C)
local/vid                 	5.0.0        	        ONAP Virtual Infrastructure Deployment
local/vnfsdk              	5.0.0        	        ONAP VNF SDK

Note

The setup of the Helm repository is a one-time activity. If you make changes to your deployment charts or values, be sure to use make to update your local Helm repository.
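For example, after editing a single chart's values, only that chart and the umbrella chart need to be rebuilt (a sketch; the per-chart make targets are assumed to follow the chart directory names):

```shell
cd ~/oom/kubernetes
# Rebuild the chart that was modified, then re-package the onap umbrella chart
make robot
make onap
```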

Step 8. Once the repo is set up, installation of ONAP can be done with a single command.

Note

The --timeout 900 is currently required in Dublin to address long-running initialization tasks for DMaaP and SO. Without this timeout value, both applications may fail to deploy.

To deploy all ONAP applications use this command:

> cd oom/kubernetes
> helm deploy dev local/onap --namespace onap -f onap/resources/overrides/onap-all.yaml -f onap/resources/overrides/environment.yaml -f onap/resources/overrides/openstack.yaml --timeout 900

All override files may be customized (or replaced by other overrides) as needed.

onap-all.yaml
Enables the modules in the ONAP deployment. As ONAP is very modular, it is possible to customize ONAP and disable some components through this configuration file.
environment.yaml

Includes configuration values specific to the deployment environment.

Example: adapt readiness and liveness timers to the level of performance of your infrastructure

openstack.yaml
Includes all the OpenStack-related information for the default target tenant you want to use to deploy VNFs from ONAP, and/or additional parameters for the embedded tests.
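As a sketch, a small personal override (a hypothetical my-overrides.yaml, passed as one more -f argument to helm deploy) can layer on top of these to trim the deployment further; the component names follow the enabled flags shown in Step 3:

```
# my-overrides.yaml -- hypothetical minimal override
global:
  pullPolicy: IfNotPresent
aai:
  enabled: true
robot:
  enabled: true
so:
  enabled: true
```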

Step 9. Verify ONAP installation

Use the following to monitor your deployment and determine when ONAP is ready for use:

> kubectl get pods -n onap -o=wide

Note

While all pods may be in a Running state, this is not a guarantee that all components are functioning correctly.
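As a quick filter (not a full health check), pods that have not reached the Running phase can be listed with the following; Succeeded covers completed one-shot jobs:

```shell
kubectl get pods -n onap --field-selector=status.phase!=Running,status.phase!=Succeeded
```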

Launch the healthcheck tests using Robot to verify that the components are healthy:

> ~/oom/kubernetes/robot/ete-k8s.sh onap health

Step 10. Undeploy ONAP:

> helm undeploy dev --purge

More examples of using the deploy and undeploy plugins can be found here: https://wiki.onap.org/display/DW/OOM+Helm+%28un%29Deploy+plugins
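The deploy plugin creates one Helm release per enabled component, named <release>-<chart> (e.g. dev-robot), so a single component can also be removed or upgraded on its own; a sketch, assuming that naming scheme:

```shell
# Remove only the robot component from the dev deployment
helm undeploy dev-robot --purge
# Re-deploy/upgrade just that component after a values change
helm deploy dev-robot local/onap --namespace onap -f onap/resources/overrides/openstack.yaml
```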