Bug 1634763

Summary: Failed to wipe start of new LV
Product: OpenShift Container Platform
Component: Storage
Version: 3.11.0
Status: CLOSED DUPLICATE
Severity: unspecified
Priority: unspecified
Reporter: Nicholas Schuetz <nick>
Assignee: Bradley Childs <bchilds>
QA Contact: Jianwei Hou <jhou>
CC: aos-bugs, aos-storage-staff, jarrpa, nick
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Target Release: ---
Type: Bug
Regression: ---
Last Closed: 2018-10-01 18:48:42 UTC
Attachments:
  Description        Flags
  error              none
  notfound           none
  containerlist      none
  ansibleinventory   none

Description Nicholas Schuetz 2018-10-01 14:49:52 UTC
Created attachment 1489064 [details]
error

I've noticed CNS being busted on the 3.11 puddles for a while now. This is v3.11.18. It looks like the VG gets created but the LVs are not.

Screenshot and ansible inventory attached.

TASK [openshift_storage_glusterfs : Create heketi DB volume] *********

fatal: [master01.ocp.nicknach.net]: FAILED! => {"changed": true, "cmd": ["oc", "--config=/tmp/openshift-glusterfs-ansible-xMRphY/admin.kubeconfig", "rsh", "--namespace=glusterfs", "deploy-heketi-storage-1-dtkf9", "heketi-cli", "-s", "http://localhost:8080", "--user", "admin", "--secret", "F/rdBQL2+q+sUReEiamoJ7bPq0w/+flBmZmQekzaLxU=", "setup-openshift-heketi-storage", "--image", "repo.home.nicknach.net/rhgs3/rhgs-volmanager-rhel7:latest", "--listfile", "/tmp/heketi-storage.json"], "delta": "0:01:03.272923", "end": "2018-10-01 09:42:27.400457", "msg": "non-zero return code", "rc": 255, "start": "2018-10-01 09:41:24.127534", "stderr": "Error: WARNING: This metadata update is NOT backed up.\n  /dev/mapper/vg_02fcef822844a049c1a01fcc78da8342-lvol0: open failed: No such file or directory\n  /dev/vg_02fcef822844a049c1a01fcc78da8342/lvol0: not found: device not cleared\n  Aborting. Failed to wipe start of new LV.\ncommand terminated with exit code 255", "stderr_lines": ["Error: WARNING: This metadata update is NOT backed up.", "  /dev/mapper/vg_02fcef822844a049c1a01fcc78da8342-lvol0: open failed: No such file or directory", "  /dev/vg_02fcef822844a049c1a01fcc78da8342/lvol0: not found: device not cleared", "  Aborting. Failed to wipe start of new LV.", "command terminated with exit code 255"], "stdout": "", "stdout_lines": []}
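
For anyone reproducing this by hand, the failing call can be re-run directly against the deploy-heketi pod. A minimal sketch, using the pod name, secret, and image exactly as they appear in the failure output above (all of these are deployment-specific):

  # Re-run the heketi-cli call that the Ansible task wraps; values are
  # copied from the log above and will differ in other clusters.
  oc rsh --namespace=glusterfs deploy-heketi-storage-1-dtkf9 \
    heketi-cli -s http://localhost:8080 --user admin \
    --secret 'F/rdBQL2+q+sUReEiamoJ7bPq0w/+flBmZmQekzaLxU=' \
    setup-openshift-heketi-storage \
    --image repo.home.nicknach.net/rhgs3/rhgs-volmanager-rhel7:latest \
    --listfile /tmp/heketi-storage.json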

Comment 1 Nicholas Schuetz 2018-10-01 14:50:24 UTC
Created attachment 1489065 [details]
notfound

Comment 2 Nicholas Schuetz 2018-10-01 14:51:57 UTC
Created attachment 1489066 [details]
containerlist

Comment 3 Nicholas Schuetz 2018-10-01 14:55:24 UTC
Created attachment 1489069 [details]
ansibleinventory

Comment 4 Nicholas Schuetz 2018-10-01 15:00:19 UTC
Here's the full error from the deploy pod:


[kubeexec] ERROR 2018/10/01 14:41:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_02fcef822844a049c1a01fcc78da8342/tp_fed65e0f23af57d2b539502addc85599 --virtualsize 2097152K --name brick_fed65e0f23af57d2b539502addc85599] on glusterfs-storage-n46w7: Err[command terminated with exit code 5]: Stdout [  Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
]: Stderr [  WARNING: This metadata update is NOT backed up.
  /dev/mapper/vg_02fcef822844a049c1a01fcc78da8342-lvol0: open failed: No such file or directory
  /dev/vg_02fcef822844a049c1a01fcc78da8342/lvol0: not found: device not cleared
  Aborting. Failed to wipe start of new LV.
]
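
The stderr suggests the device node for the new LV never appeared under /dev/mapper inside the container, so lvcreate could not zero its start. A quick check, sketched here assuming the pod name from the log (glusterfs-storage-n46w7) and that rsh access works:

  # Compare LVM's view of the VG with the device nodes actually visible
  # in the pod; a missing /dev/mapper entry points at the runtime setup.
  oc rsh --namespace=glusterfs glusterfs-storage-n46w7 \
    bash -c 'lvs vg_02fcef822844a049c1a01fcc78da8342; dmsetup ls; ls -l /dev/mapper/'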

Comment 5 Jose A. Rivera 2018-10-01 18:19:56 UTC
Is this on CRI-O?

Comment 6 Jose A. Rivera 2018-10-01 18:21:23 UTC
Oops, missed the attachment. It is on CRI-O. This is likely a duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1634454.
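
The runtime can also be confirmed without the attachment; two standard checks (the inventory path below is a placeholder):

  # Per-node runtime: the CONTAINER-RUNTIME column shows cri-o://... or docker://...
  oc get nodes -o wide

  # Or grep the openshift-ansible inventory for the CRI-O switch:
  grep openshift_use_crio /path/to/inventory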

Comment 7 Jose A. Rivera 2018-10-01 18:48:42 UTC
This is the duplicate. Closing appropriately.

*** This bug has been marked as a duplicate of bug 1634454 ***