Bug 1668316 - Failed to deploy CNS3.9 on OCP3.11.z, lvm commands fail
Summary: Failed to deploy CNS3.9 on OCP3.11.z, lvm commands fail
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: cns-ansible
Version: cns-3.9
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: OCS 3.11.z Batch Update 2
Assignee: Jose A. Rivera
QA Contact: Prasanth
URL:
Whiteboard:
Depends On: 1669080
Blocks: 1669979
 
Reported: 2019-01-22 12:40 UTC by Apeksha
Modified: 2019-03-27 06:44 UTC
CC: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-03-27 06:44:36 UTC
Embargoed:


Links
Red Hat Product Errata RHBA-2019:0670 (last updated 2019-03-27 06:44:39 UTC)

Description Apeksha 2019-01-22 12:40:40 UTC
Description of problem:
Failed to deploy OCS3.9 on OCP3.11.z, lvm commands fail

Version-Release number of selected component (if applicable):
atomic-openshift-3.11.69-1.git.0.7478b86.el7.x86_64
heketi-client-7.0.0-15.el7rhgs.x86_64
rhgs3/rhgs-server-rhel7:v3.9

How reproducible: Twice


Steps to Reproduce:
CI for OCP3.11.z+OCS3.9 failed twice

Fails at the step:
TASK [openshift_storage_glusterfs : Create heketi DB volume]

fatal: [apu1-v311z-ocs-v39-master-0]: FAILED! => {
    "changed": true, 
    "cmd": [
        "oc", 
        "--config=/tmp/openshift-glusterfs-ansible-exsTxk/admin.kubeconfig", 
        "rsh", 
        "--namespace=storage", 
        "deploy-heketi-storage-1-5npz8", 
        "heketi-cli", 
        "-s", 
        "http://localhost:8080", 
        "--user", 
        "admin", 
        "--secret", 
        "admin", 
        "setup-openshift-heketi-storage", 
        "--image", 
        "rhgs3/rhgs-volmanager-rhel7:v3.9", 
        "--listfile", 
        "/tmp/heketi-storage.json"
    ], 
    "delta": "0:01:20.772013", 
    "end": "2019-01-22 10:51:24.019110", 
    "invocation": {
        "module_args": {
            "_raw_params": "oc --config=/tmp/openshift-glusterfs-ansible-exsTxk/admin.kubeconfig rsh --namespace=storage deploy-heketi-storage-1-5npz8 heketi-cli -s http://localhost:8080 --user admin  --secret 'admin' setup-openshift-heketi-storage --image rhgs3/rhgs-volmanager-rhel7:v3.9 --listfile /tmp/heketi-storage.json", 
            "_uses_shell": false, 
            "argv": null, 
            "chdir": null, 
            "creates": null, 
            "executable": null, 
            "removes": null, 
            "stdin": null, 
            "warn": true
        }
    }, 
    "msg": "non-zero return code", 
    "rc": 255, 
    "start": "2019-01-22 10:50:03.247097", 
    "stderr": "Error: Unable to execute command on glusterfs-storage-mjcqr:   /dev/vg_50add4fedc4344a89b21b1d9dfd2981d/lvol0: not found: device not cleared\n  Aborting. Failed to wipe start of new LV.\ncommand terminated with exit code 255", 
    "stderr_lines": [
        "Error: Unable to execute command on glusterfs-storage-mjcqr:   /dev/vg_50add4fedc4344a89b21b1d9dfd2981d/lvol0: not found: device not cleared", 
        "  Aborting. Failed to wipe start of new LV.", 
        "command terminated with exit code 255"
    ], 
    "stdout": "", 
    "stdout_lines": []
}
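
For manual triage, the failing call and the LVM state can be checked by hand. A minimal sketch, assuming the pod names from the log above (deploy-heketi-storage-1-5npz8 and glusterfs-storage-mjcqr) are still running:

    # Re-run the exact heketi-cli call that the playbook issued:
    oc --config=/tmp/openshift-glusterfs-ansible-exsTxk/admin.kubeconfig \
      rsh --namespace=storage deploy-heketi-storage-1-5npz8 \
      heketi-cli -s http://localhost:8080 --user admin --secret admin \
      setup-openshift-heketi-storage --image rhgs3/rhgs-volmanager-rhel7:v3.9 \
      --listfile /tmp/heketi-storage.json

    # Inspect the VG named in the stderr for a half-created lvol0:
    oc rsh --namespace=storage glusterfs-storage-mjcqr vgs
    oc rsh --namespace=storage glusterfs-storage-mjcqr \
      lvs vg_50add4fedc4344a89b21b1d9dfd2981d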



Expected results:
Install succeeds.
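
A quick way to confirm that the heketi DB volume was created (a sketch; run against whichever heketi pod is live, here the deploy pod from the log above):

    # List pods in the storage namespace and query heketi for its volumes:
    oc get pods --namespace=storage
    oc rsh --namespace=storage deploy-heketi-storage-1-5npz8 \
      heketi-cli -s http://localhost:8080 --user admin --secret admin volume list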

Comment 2 Apeksha 2019-01-22 13:05:47 UTC
The OCS ansible log file and the output of the central CI Jenkins job can be found here - http://rhsqe-repo.lab.eng.blr.redhat.com/cns/bugs/BZ-1668316/

Comment 3 Apeksha 2019-01-22 13:56:14 UTC
Additional info:
openshift-ansible-3.11.70-1

sosreports from all nodes are available here - http://rhsqe-repo.lab.eng.blr.redhat.com/cns/bugs/BZ-1668316/

Comment 4 Niels de Vos 2019-01-24 10:48:15 UTC
Is there an openshift-ansible bug for this?

https://github.com/openshift/openshift-ansible/pull/11068 should address the problem.
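
For context, "device not cleared ... Failed to wipe start of new LV" is the classic lvcreate symptom when LVM inside a container waits on udev events that never arrive. A hedged diagnostic (this framing of the root cause is an assumption, not a description of what the PR changes):

    # Check the udev settings LVM sees inside the gluster container:
    oc rsh --namespace=storage glusterfs-storage-mjcqr \
      grep -E 'udev_sync|udev_rules' /etc/lvm/lvm.conf

    # A commonly used workaround in containerized LVM setups (verify against
    # the PR above before relying on it) is to disable udev interaction:
    #   udev_sync = 0
    #   udev_rules = 0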

Comment 14 errata-xmlrpc 2019-03-27 06:44:36 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0670

