Created attachment 1489064 [details]
error

I've noticed CNS being busted on 3.11 puddles for a while now. This is v3.11.18. It looks like the VG gets created, but not the LVs. Screenshot and ansible inventory attached.

TASK [openshift_storage_glusterfs : Create heketi DB volume] *********
fatal: [master01.ocp.nicknach.net]: FAILED! => {
    "changed": true,
    "cmd": ["oc", "--config=/tmp/openshift-glusterfs-ansible-xMRphY/admin.kubeconfig", "rsh", "--namespace=glusterfs", "deploy-heketi-storage-1-dtkf9", "heketi-cli", "-s", "http://localhost:8080", "--user", "admin", "--secret", "F/rdBQL2+q+sUReEiamoJ7bPq0w/+flBmZmQekzaLxU=", "setup-openshift-heketi-storage", "--image", "repo.home.nicknach.net/rhgs3/rhgs-volmanager-rhel7:latest", "--listfile", "/tmp/heketi-storage.json"],
    "delta": "0:01:03.272923",
    "end": "2018-10-01 09:42:27.400457",
    "msg": "non-zero return code",
    "rc": 255,
    "start": "2018-10-01 09:41:24.127534",
    "stderr": "Error: WARNING: This metadata update is NOT backed up.\n /dev/mapper/vg_02fcef822844a049c1a01fcc78da8342-lvol0: open failed: No such file or directory\n /dev/vg_02fcef822844a049c1a01fcc78da8342/lvol0: not found: device not cleared\n Aborting. Failed to wipe start of new LV.\ncommand terminated with exit code 255",
    "stderr_lines": ["Error: WARNING: This metadata update is NOT backed up.", " /dev/mapper/vg_02fcef822844a049c1a01fcc78da8342-lvol0: open failed: No such file or directory", " /dev/vg_02fcef822844a049c1a01fcc78da8342/lvol0: not found: device not cleared", " Aborting. Failed to wipe start of new LV.", "command terminated with exit code 255"],
    "stdout": "",
    "stdout_lines": []
}
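For anyone triaging, a quick way to confirm the "VG exists but no LVs" state from one of the gluster pods (a sketch; the pod name is taken from the heketi log further down and will differ per environment):

  # list the gluster pods in the glusterfs namespace
  oc get pods -n glusterfs
  # the VG shows up, but it holds no LVs
  oc rsh -n glusterfs glusterfs-storage-n46w7 vgs
  oc rsh -n glusterfs glusterfs-storage-n46w7 lvs
  # compare what device-mapper knows about vs. what the container sees in /dev
  oc rsh -n glusterfs glusterfs-storage-n46w7 dmsetup ls
  oc rsh -n glusterfs glusterfs-storage-n46w7 ls -l /dev/mapper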
Created attachment 1489065 [details]
notfound

Created attachment 1489066 [details]
containerlist

Created attachment 1489069 [details]
ansibleinventory
Here's the full error from the deploy pod:

[kubeexec] ERROR 2018/10/01 14:41:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_02fcef822844a049c1a01fcc78da8342/tp_fed65e0f23af57d2b539502addc85599 --virtualsize 2097152K --name brick_fed65e0f23af57d2b539502addc85599] on glusterfs-storage-n46w7: Err[command terminated with exit code 5]: Stdout [ Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
]: Stderr [ WARNING: This metadata update is NOT backed up.
 /dev/mapper/vg_02fcef822844a049c1a01fcc78da8342-lvol0: open failed: No such file or directory
 /dev/vg_02fcef822844a049c1a01fcc78da8342/lvol0: not found: device not cleared
 Aborting. Failed to wipe start of new LV.
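Note the Stdout line about the 256 KiB chunk size is only a warning; the actual failure is the wipe step. lvcreate zeroes the start of a new LV by opening its node under /dev, and that open fails with "No such file or directory", i.e. the device node never appears inside the container. A minimal way to reproduce this outside of heketi (a sketch; the LV name is made up, and -Zn disables the zeroing step, so if the second command succeeds, the missing device node is the only problem):

  # inside a gluster pod on the affected node
  oc rsh -n glusterfs glusterfs-storage-n46w7
  # expected to fail the same way: the new LV's node can't be opened for wiping
  lvcreate --size 16M --name testlv vg_02fcef822844a049c1a01fcc78da8342
  # -Zn skips zeroing the start of the LV, so no device node needs to be opened
  lvcreate -Zn --size 16M --name testlv vg_02fcef822844a049c1a01fcc78da8342
  # clean up the test LV afterwards
  lvremove -f vg_02fcef822844a049c1a01fcc78da8342/testlv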
Is this on CRI-O?
Oops, missed the attachment. It is on CRI-O. This is likely a duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1634454 .
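For reference, the runtime can also be checked without the inventory (a sketch; the CONTAINER-RUNTIME column shows cri-o vs. docker):

  oc get nodes -o wide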
This is the duplicate. Closing appropriately.

*** This bug has been marked as a duplicate of bug 1634454 ***