Bug 1360461
| Field | Value |
|---|---|
| Summary: | gdeploy hangs if device has a filesystem signature |
| Product: | [Red Hat Storage] Red Hat Gluster Storage |
| Reporter: | Cal Calhoun <ccalhoun> |
| Component: | gdeploy |
| Assignee: | Sachidananda Urs <surs> |
| Status: | CLOSED ERRATA |
| QA Contact: | Manisha Saini <msaini> |
| Severity: | low |
| Priority: | unspecified |
| Version: | rhgs-3.1 |
| CC: | jliedy, rcyriac, rhinduja, smohan |
| Target Milestone: | --- |
| Keywords: | ZStream |
| Target Release: | RHGS 3.1.3 Async |
| Hardware: | Unspecified |
| OS: | Linux |
| Fixed In Version: | gdeploy-2.0.1-1 |
| Doc Type: | If docs needed, set a value |
| Last Closed: | 2017-02-07 11:34:07 UTC |
| Type: | Bug |
| Bug Blocks: | 1351522 |
Description
Cal Calhoun
2016-07-26 19:56:50 UTC

This bug is based on Red Hat support case #01674530. https://github.com/gluster/gdeploy/commit/b36d268 fixes the issue.

PV creation fails in gdeploy when the device is already formatted with a btrfs filesystem.

Steps:

1. Format the device with a btrfs filesystem:

```
# mkfs.btrfs /dev/sdb /dev/sdc
# btrfs filesystem show
Label: none  uuid: 7860bda1-5e7a-479c-b0af-f481ba8a14ff
        Total devices 2 FS bytes used 112.00KiB
        devid    1 size 5.00GiB used 1.53GiB path /dev/sdb
        devid    2 size 5.00GiB used 1.51GiB path /dev/sdc
```

2. Run the gdeploy script to create the PV and VG.

Observation: PV creation fails with the error:

```
failed: [10.70.37.97] (item=/dev/sdb) => {"failed": true, "failed_when_result": true, "item": "/dev/sdb", "msg": "WARNING: btrfs signature detected on /dev/sdb at offset 65600. Wipe it? [y/n]: n\n  Aborted wiping of btrfs.\n  1 existing signature left on the device.\n  Aborting pvcreate on /dev/sdb.\n", "rc": 5}
```

```
# rpm -qa | grep gluster
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
glusterfs-client-xlators-3.8.4-2.el7rhgs.x86_64
glusterfs-server-3.8.4-2.el7rhgs.x86_64
vdsm-gluster-4.17.33-1.el7rhgs.noarch
gluster-nagios-addons-0.2.8-1.el7rhgs.x86_64
glusterfs-3.8.4-2.el7rhgs.x86_64
glusterfs-api-3.8.4-2.el7rhgs.x86_64
glusterfs-cli-3.8.4-2.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-2.el7rhgs.x86_64
glusterfs-libs-3.8.4-2.el7rhgs.x86_64
glusterfs-fuse-3.8.4-2.el7rhgs.x86_64
python-gluster-3.8.4-2.el7rhgs.noarch
# rpm -qa | grep gdeploy
gdeploy-2.0.1-2.el7rhgs.noarch
```

Attaching the gdeploy conf file and its output.

Created attachment 1211302 [details]
Gdeploy conf file
Created attachment 1211303 [details]
Gdeploy output
Manisha, since wiping a filesystem signature is a risky operation, gdeploy does not wipe it by default. wipefs=yes should be set in the [backend-setup] or [pv] section. For example:

```
[hosts]
10.70.42.166
10.70.41.241

[backend-setup]
devices=vdb
wipefs=yes
```

This should be used when the devices carry a btrfs filesystem. If `wipefs' is left out, it is taken as `no'. Your configuration should be:

```
[hosts]
10.70.37.202
10.70.37.97

[backend-setup]
devices=sdb,sdc
vgs=vg1,vg2
pools=pool1,pool2
lvs=lv1,lv2
wipefs=yes
mountpoints=/mnt/data1,/mnt/data2
brick_dirs=/mnt/data1/1,/mnt/data2/2

[volume]
action=create
volname=vol1
replica=yes
replica_count=2
force=yes

[clients]
action=mount
volname=vol1
hosts=10.70.37.137
fstype=glusterfs
client_mount_points=/mnt/gg1/
```

With wipefs=yes set in the config file, the btrfs filesystem signature is wiped and PV creation succeeds. Hence marking this bug as Verified.

```
# rpm -qa | grep gluster
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
glusterfs-client-xlators-3.8.4-2.el7rhgs.x86_64
glusterfs-server-3.8.4-2.el7rhgs.x86_64
vdsm-gluster-4.17.33-1.el7rhgs.noarch
gluster-nagios-addons-0.2.8-1.el7rhgs.x86_64
glusterfs-3.8.4-2.el7rhgs.x86_64
glusterfs-api-3.8.4-2.el7rhgs.x86_64
glusterfs-cli-3.8.4-2.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-2.el7rhgs.x86_64
glusterfs-libs-3.8.4-2.el7rhgs.x86_64
glusterfs-fuse-3.8.4-2.el7rhgs.x86_64
python-gluster-3.8.4-2.el7rhgs.noarch
# rpm -qa | grep gdeploy
gdeploy-2.0.1-2.el7rhgs.noarch
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0260.html