Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1643711

Summary: Installation failed with glusterfs_heketi_route undefined during installation with CNS
Product: OpenShift Container Platform
Component: Installer
Version: 3.9.0
Hardware: x86_64
OS: Linux
Status: CLOSED DUPLICATE
Severity: medium
Priority: unspecified
Reporter: Mike Calizo <mcalizo>
Assignee: Scott Dodson <sdodson>
QA Contact: Johnny Liu <jialiu>
Docs Contact:
CC: aos-bugs, jokerman, mmccomas
Target Milestone: ---
Target Release: ---
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-10-29 20:35:14 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Mike Calizo 2018-10-27 17:28:46 UTC
Description of problem:
Installation of CNS failed every time with this error:

TASK [openshift_storage_glusterfs : Generate GlusterFS StorageClass file] ***********************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/tasks/glusterfs_common.yml:286
fatal: [10.74.177.104]: FAILED! => {"changed": false, "failed": true, "msg": "AnsibleUndefinedVariable: 'glusterfs_heketi_route' is undefined"}
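
For context, this is the standard Ansible failure mode when a task or template references a variable that was never set. The sketch below is a hypothetical stand-alone playbook that reproduces the same error class; it is not the actual role task from glusterfs_common.yml. The second, guarded task only illustrates that a default() filter avoids the failure:

cat > /tmp/undefined-var-demo.yml <<'EOF'
# Hypothetical demo playbook; not part of openshift-ansible.
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Reference a variable that was never defined (fails the same way the role task does)
      debug:
        msg: "{{ glusterfs_heketi_route }}"
      ignore_errors: true

    - name: The same reference guarded with default() does not fail
      debug:
        msg: "{{ glusterfs_heketi_route | default('not set') }}"
EOF
ansible-playbook /tmp/undefined-var-demo.yml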

The workaround I found is to downgrade the openshift-ansible packages to 3.9.41 (see step 2 under Steps to Reproduce).
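
A sketch of the downgrade, assuming the 3.9.41 builds are still available in the enabled OCP 3.9 repositories (the exact package NEVRs may differ):

yum downgrade openshift-ansible-3.9.41 \
              openshift-ansible-roles-3.9.41 \
              openshift-ansible-docs-3.9.41 \
              openshift-ansible-playbooks-3.9.41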


Version-Release number of the following components:
rpm -q openshift-ansible
openshift-ansible-3.9.43-1.git.0.d0bc600.el7.noarch
openshift-ansible-roles-3.9.43-1.git.0.d0bc600.el7.noarch
openshift-ansible-docs-3.9.43-1.git.0.d0bc600.el7.noarch
openshift-ansible-playbooks-3.9.43-1.git.0.d0bc600.el7.noarch

rpm -q ansible
ansible-2.4.6.0-1.el7ae.noarch

ansible --version

ansible 2.4.6.0
  config file = /home/quicklab/ansible.cfg
  configured module search path = [u'/home/quicklab/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, May 31 2018, 09:41:32) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]


How reproducible:
100%

Steps to Reproduce:
1. Run the GlusterFS config playbook with the inventory file shown under Additional info:
   /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/config.yml

2. Downgrade the openshift-ansible packages (openshift-ansible, openshift-ansible-roles, openshift-ansible-docs, openshift-ansible-playbooks) to 3.9.41.

3. Re-run the playbook; it completes successfully (an example invocation is sketched below).
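
For reference, a sketch of the playbook invocation used in steps 1 and 3; the inventory path here is illustrative, the actual inventory is the one shown under Additional info:

ansible-playbook -i /home/quicklab/hosts \
    /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/config.yml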



Actual results:
TASK [openshift_storage_glusterfs : Generate GlusterFS StorageClass file] ***********************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/tasks/glusterfs_common.yml:286
fatal: [10.74.177.104]: FAILED! => {"changed": false, "failed": true, "msg": "AnsibleUndefinedVariable: 'glusterfs_heketi_route' is undefined"}

Expected results: CNS deployment successful.
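
One way to confirm the expected result after a successful run is to check the heketi route and the GlusterFS StorageClass. This sketch assumes the role's default CNS project name "glusterfs", which the inventory below does not override:

oc get pods,routes -n glusterfs
oc get storageclass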

Additional info:

Inventory File:

[OSEv3:children]
masters
nodes
etcd
lb
nfs
glusterfs

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=quicklab
ansible_become=yes
debug_level=2
openshift_deployment_type=openshift-enterprise
openshift_release=v3.9
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
openshift_master_htpasswd_file=~/htpasswd
openshift_master_cluster_method=native
openshift_master_cluster_hostname=openshift.internal.ocpmike39gluster.lab.pnq2.cee.redhat.com
openshift_master_cluster_public_hostname=openshift.ocpmike39gluster.lab.pnq2.cee.redhat.com
openshift_master_default_subdomain=apps.ocpmike39gluster.lab.pnq2.cee.redhat.com
openshift_hosted_registry_replicas=1
openshift_master_api_port=443
openshift_master_console_port=443
openshift_override_hostname_check=true
openshift_disable_check=memory_availability,disk_availability,docker_storage,package_version,docker_image_availability
openshift_storage_glusterfs_storageclass_default=true

# Add Prometheus Metrics:
openshift_hosted_prometheus_deploy=true
openshift_prometheus_node_selector={"region":"infra"}
openshift_prometheus_namespace=openshift-metrics

## Prometheus
openshift_prometheus_storage_kind=glusterfs
openshift_prometheus_storage_access_modes=['ReadWriteOnce']
openshift_prometheus_storage_volume_name=prometheus
openshift_prometheus_storage_volume_size=10Gi
openshift_prometheus_storage_labels={'storage': 'prometheus'}
openshift_prometheus_storage_type='pvc'

# For prometheus-alertmanager
openshift_prometheus_alertmanager_storage_kind=glusterfs
openshift_prometheus_alertmanager_storage_access_modes=['ReadWriteOnce']
openshift_prometheus_alertmanager_storage_volume_name=prometheus-alertmanager
openshift_prometheus_alertmanager_storage_volume_size=10Gi
openshift_prometheus_alertmanager_storage_labels={'storage': 'prometheus-alertmanager'}
openshift_prometheus_alertmanager_storage_type='pvc'

### For prometheus-alertbuffer
openshift_prometheus_alertbuffer_storage_kind=glusterfs
openshift_prometheus_alertbuffer_storage_access_modes=['ReadWriteOnce']
openshift_prometheus_alertbuffer_storage_volume_name=prometheus-alertbuffer
openshift_prometheus_alertbuffer_storage_volume_size=10Gi
openshift_prometheus_alertbuffer_storage_labels={'storage': 'prometheus-alertbuffer'}
openshift_prometheus_alertbuffer_storage_type='pvc'

# host group for masters
[masters]
10.74.177.104
10.74.177.232
10.74.177.60

[etcd]
10.74.177.104
10.74.177.232
10.74.177.60

[nfs]

[lb]
10.74.177.110

[nodes]
10.74.177.104
10.74.177.232
10.74.177.60
10.74.177.235 openshift_node_labels="{'region': 'infra', 'zone': 'default'}" openshift_schedulable=True
10.74.177.228
10.74.177.220 openshift_node_labels="{'region': 'infra', 'zone': 'default'}" openshift_schedulable=True
10.74.177.121
10.74.177.230 openshift_node_labels="{'region': 'infra', 'zone': 'default'}" openshift_schedulable=True

[glusterfs]
10.74.177.235 glusterfs_devices='[ "/dev/vdc", "/dev/vde", "/dev/vdd" ]'
10.74.177.220 glusterfs_devices='[ "/dev/vdc", "/dev/vde", "/dev/vdd" ]'
10.74.177.230 glusterfs_devices='[ "/dev/vdc", "/dev/vde", "/dev/vdd" ]'

Comment 1 Scott Dodson 2018-10-29 20:35:14 UTC

*** This bug has been marked as a duplicate of bug 1634244 ***