Bug 1464393 - CNS installation using the advanced installer will fail if the namespace for the CNS pods is not the "default" namespace
Summary: CNS installation using the advanced installer will fail if the namespace for the CNS pods is not the "default" namespace
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.6.1
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 3.6.z
Assignee: Jose A. Rivera
QA Contact: Wenkai Shi
URL:
Whiteboard: aos-scalability-36
Depends On:
Blocks:
 
Reported: 2017-06-23 10:40 UTC by Elvir Kuric
Modified: 2017-12-14 21:01 UTC
CC: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Use proper namespace for heketi command and service account.
Clone Of:
Environment:
Last Closed: 2017-12-14 21:01:55 UTC
Target Upstream Version:
Embargoed:




Links:
Red Hat Product Errata RHBA-2017:3438 (normal, SHIPPED_LIVE): OpenShift Container Platform 3.6 and 3.5 bug fix and enhancement update. Last updated: 2017-12-15 01:58:11 UTC.

Description Elvir Kuric 2017-06-23 10:40:21 UTC
Description of problem:

When using the Ansible installer to set up a CNS cluster, the installation fails if the namespace for the CNS pods is different from the "default" namespace.

Version-Release number of selected component (if applicable):

OCP 3.6 and the latest git pull of openshift-ansible

How reproducible:
2 out of 2 tries 

Steps to Reproduce:
Try to set up CNS where

openshift_storage_glusterfs_namespace=

is set to a non-empty value (a minimal inventory sketch follows).
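For illustration, a minimal inventory excerpt that reproduces this setup. Only the variable name and the "glusterfs" namespace value (used during verification below) come from this report; the host group layout and the glusterfs_devices entry are assumptions added to make the sketch self-contained:

[OSEv3:vars]
# Deploy the CNS/heketi pods into a non-default namespace
openshift_storage_glusterfs_namespace=glusterfs

[glusterfs]
# glusterfs_devices is assumed here for completeness of the example
node1.example.com glusterfs_devices='[ "/dev/xvdb" ]'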


Actual results:
The Ansible playbook fails with the following error:

TASK [openshift_storage_glusterfs : Set heketi-cli command] **********************************************************************************************************************************
ok: [ip-172-31-59-4.us-west-2.compute.internal]

TASK [openshift_storage_glusterfs : Verify heketi service] ***********************************************************************************************************************************
fatal: [ip-172-31-59-4.us-west-2.compute.internal]: FAILED! => {
    "changed": false, 
    "cmd": [
        "oc", 
        "rsh", 
        "deploy-heketi-storage-1-t4j17", 
        "heketi-cli", 
        "-s", 
        "http://localhost:8080", 
        "--user", 
        "admin", 
        "--secret", 
        "SEj42AlrU9TReWd5jAaOFvZb+ko0K958gGaNBSke3EM=", 
        "cluster", 
        "list"
    ], 
    "delta": "0:00:00.232417", 
    "end": "2017-06-23 04:17:24.120037", 
    "failed": true, 
    "rc": 1, 
    "start": "2017-06-23 04:17:23.887620"
}

STDERR:

Error from server (NotFound): pods "deploy-heketi-storage-1-t4j17" not found

	to retry, use: --limit @/root/openshift-ansible/playbooks/byo/config.retry

PLAY RECAP ***********************************************************************************************************************************************************************************
ip-172-31-17-225.us-west-2.compute.internal : ok=248  changed=41   unreachable=0    failed=0   
ip-172-31-3-127.us-west-2.compute.internal : ok=233  changed=40   unreachable=0    failed=0   
ip-172-31-3-234.us-west-2.compute.internal : ok=233  changed=40   unreachable=0    failed=0   
ip-172-31-30-118.us-west-2.compute.internal : ok=233  changed=40   unreachable=0    failed=0   
ip-172-31-37-245.us-west-2.compute.internal : ok=248  changed=41   unreachable=0    failed=0   
ip-172-31-54-57.us-west-2.compute.internal : ok=233  changed=40   unreachable=0    failed=0   
ip-172-31-59-4.us-west-2.compute.internal : ok=578  changed=143  unreachable=0    failed=1   
ip-172-31-61-211.us-west-2.compute.internal : ok=248  changed=41   unreachable=0    failed=0   
localhost                  : ok=12   changed=0    unreachable=0    failed=0   


Failure summary:

  1. Host:     ip-172-31-59-4.us-west-2.compute.internal
     Play:     Configure GlusterFS
     Task:     openshift_storage_glusterfs : Verify heketi service
     Message:  ???




Expected results:
The Ansible playbook finishes successfully.


Additional info:
The issue does not occur if openshift_storage_glusterfs_namespace= is left unpopulated. Note that the failing command in the log above runs "oc rsh" without a namespace argument, so the heketi pod is looked up in the "default" project and is not found; a sketch of a corrected invocation follows.
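A minimal sketch of the namespace-aware check, assuming the pods were deployed to the "glusterfs" namespace; the pod name is taken from the log above and <admin-secret> is a placeholder:

# Fails with NotFound when the pod lives outside the current project:
oc rsh deploy-heketi-storage-1-t4j17 heketi-cli -s http://localhost:8080 --user admin --secret <admin-secret> cluster list

# Works: pass the namespace explicitly to oc rsh
oc rsh -n glusterfs deploy-heketi-storage-1-t4j17 heketi-cli -s http://localhost:8080 --user admin --secret <admin-secret> cluster list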

Comment 1 Jose A. Rivera 2017-06-26 17:42:29 UTC
This will be fixed by https://github.com/openshift/openshift-ansible/pull/4534 once it merges.

Comment 2 Jose A. Rivera 2017-06-27 15:01:34 UTC
PR merged.

Comment 3 Wenkai Shi 2017-06-28 09:43:00 UTC
Verified with version openshift-ansible-3.6.126.0-1.git.0.f9c47bf.el7; the installation succeeded.

# oc get po
NAME                       READY     STATUS    RESTARTS   AGE
docker-registry-1-12z27    1/1       Running   0          5m
registry-console-1-rdvsq   1/1       Running   0          4m
router-1-q3pwh             1/1       Running   0          6m

# oc get po -n glusterfs
NAME                      READY     STATUS    RESTARTS   AGE
glusterfs-storage-5hkh1   1/1       Running   0          10m
glusterfs-storage-xccjh   1/1       Running   0          10m
glusterfs-storage-xfvz4   1/1       Running   0          10m
heketi-storage-1-q9zr7    1/1       Running   0          7m
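
The same check that previously failed can also be run by hand against the heketi pod in the custom namespace; a sketch using the heketi-storage-1-q9zr7 pod listed above, with <admin-secret> as a placeholder:

oc rsh -n glusterfs heketi-storage-1-q9zr7 heketi-cli -s http://localhost:8080 --user admin --secret <admin-secret> cluster list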

Comment 6 errata-xmlrpc 2017-12-14 21:01:55 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:3438

