Description of problem:
Failed to deploy OCS 3.10 on OCP 3.11.z; lvm commands fail.

Version-Release number of selected component (if applicable):
atomic-openshift-3.11.69-1.git.0.7478b86.el7.x86_64
heketi-client-7.0.0-15.el7rhgs.x86_64
rhgs3/rhgs-server-rhel7:v3.10
openshift-ansible-3.11.70-1

How reproducible:
Twice

Steps to Reproduce:
CI for OCP 3.11.z + OCS 3.10 failed twice. It fails at this step:

TASK [openshift_storage_glusterfs : Create heketi DB volume]
fatal: [apu1-v311z-ocs-v310-master-0]: FAILED! => {
    "changed": true,
    "cmd": [
        "oc",
        "--config=/tmp/openshift-glusterfs-ansible-fHBHOU/admin.kubeconfig",
        "rsh",
        "--namespace=storage",
        "deploy-heketi-storage-1-ccdqs",
        "heketi-cli",
        "-s",
        "http://localhost:8080",
        "--user",
        "admin",
        "--secret",
        "admin",
        "setup-openshift-heketi-storage",
        "--image",
        "rhgs3/rhgs-volmanager-rhel7:v3.10",
        "--listfile",
        "/tmp/heketi-storage.json"
    ],
    "delta": "0:01:02.884930",
    "end": "2019-01-22 12:23:45.129174",
    "invocation": {
        "module_args": {
            "_raw_params": "oc --config=/tmp/openshift-glusterfs-ansible-fHBHOU/admin.kubeconfig rsh --namespace=storage deploy-heketi-storage-1-ccdqs heketi-cli -s http://localhost:8080 --user admin --secret 'admin' setup-openshift-heketi-storage --image rhgs3/rhgs-volmanager-rhel7:v3.10 --listfile /tmp/heketi-storage.json",
            "_uses_shell": false,
            "argv": null,
            "chdir": null,
            "creates": null,
            "executable": null,
            "removes": null,
            "stdin": null,
            "warn": true
        }
    },
    "msg": "non-zero return code",
    "rc": 255,
    "start": "2019-01-22 12:22:42.244244",
    "stderr": "Error: WARNING: This metadata update is NOT backed up.\n  /dev/vg_cd2fa2da8c597b187b27ef1e86fc1036/lvol0: not found: device not cleared\n  Aborting. Failed to wipe start of new LV.\ncommand terminated with exit code 255",
    "stderr_lines": [
        "Error: WARNING: This metadata update is NOT backed up.",
        "  /dev/vg_cd2fa2da8c597b187b27ef1e86fc1036/lvol0: not found: device not cleared",
        "  Aborting. Failed to wipe start of new LV.",
        "command terminated with exit code 255"
    ],
    "stdout": "",
    "stdout_lines": []
}

Expected results:
Install succeeds.
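For quick triage of future CI runs, the distinctive stderr line above can be grepped for in the captured ansible output. A minimal sketch (the sample text is copied verbatim from the failure above; the variable name and the echoed message are illustrative, not part of any tool):

```shell
#!/bin/sh
# Illustrative triage helper: flag the LVM "device not cleared" failure
# seen during "Create heketi DB volume". In practice you would grep the
# saved ansible log file instead of this inline sample.
stderr='Error: WARNING: This metadata update is NOT backed up.
  /dev/vg_cd2fa2da8c597b187b27ef1e86fc1036/lvol0: not found: device not cleared
  Aborting. Failed to wipe start of new LV.
command terminated with exit code 255'

if printf '%s\n' "$stderr" | grep -q 'not found: device not cleared'; then
    echo "heketi DB volume creation hit the LVM wipe failure"
fi
```

Matching on "not found: device not cleared" rather than the volume-group name keeps the check stable, since the vg_* name is derived from a per-deployment hash.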
The OCS ansible log file, the output of the central CI Jenkins job, and sosreports from all nodes are available here: http://rhsqe-repo.lab.eng.blr.redhat.com/cns/bugs/BZ-1668335/
Is there an openshift-ansible bug for this?

https://github.com/openshift/openshift-ansible/pull/11068 should address the problem.
Hello Niels and John, could you please provide bug fix doc text (CCFR format) and change the doc type accordingly?
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:0670