Description of problem:
Device formatted with btrfs. Without using wipefs, gdeploy hangs indefinitely at PV creation with no error. Once wipefs was used to remove the filesystem signature, gdeploy worked without issue.

Version-Release number of selected component (if applicable):
RHEL 7, gdeploy 2.0

How reproducible:
Consistently
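For reference, the workaround used here was to clear the stale btrfs signature before re-running gdeploy. A minimal sketch, assuming /dev/sdb and /dev/sdc are the affected devices (wipefs -a erases all recognized signatures, so confirm the devices hold no needed data first):

# wipefs -a /dev/sdb /dev/sdc
# wipefs /dev/sdb

The second command, run without options, lists any remaining signatures and should now print nothing.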
This bug is based on Red Hat support case #01674530
https://github.com/gluster/gdeploy/commit/b36d268 fixes the issue.
PV creation is failing with gdeploy when the device is formatted with a btrfs filesystem.

Steps:
1. Format the device with a btrfs filesystem:

# mkfs.btrfs /dev/sdb /dev/sdc
# btrfs filesystem show
Label: none  uuid: 7860bda1-5e7a-479c-b0af-f481ba8a14ff
        Total devices 2 FS bytes used 112.00KiB
        devid    1 size 5.00GiB used 1.53GiB path /dev/sdb
        devid    2 size 5.00GiB used 1.51GiB path /dev/sdc

2. Run the gdeploy script to create the PV and VG.

Observation:
PV creation fails with the error:

failed: [10.70.37.97] (item=/dev/sdb) => {"failed": true, "failed_when_result": true, "item": "/dev/sdb", "msg": "WARNING: btrfs signature detected on /dev/sdb at offset 65600. Wipe it? [y/n]: n\n  Aborted wiping of btrfs.\n  1 existing signature left on the device.\n  Aborting pvcreate on /dev/sdb.\n", "rc": 5}

# rpm -qa | grep gluster
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
glusterfs-client-xlators-3.8.4-2.el7rhgs.x86_64
glusterfs-server-3.8.4-2.el7rhgs.x86_64
vdsm-gluster-4.17.33-1.el7rhgs.noarch
gluster-nagios-addons-0.2.8-1.el7rhgs.x86_64
glusterfs-3.8.4-2.el7rhgs.x86_64
glusterfs-api-3.8.4-2.el7rhgs.x86_64
glusterfs-cli-3.8.4-2.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-2.el7rhgs.x86_64
glusterfs-libs-3.8.4-2.el7rhgs.x86_64
glusterfs-fuse-3.8.4-2.el7rhgs.x86_64
python-gluster-3.8.4-2.el7rhgs.noarch

# rpm -qa | grep gdeploy
gdeploy-2.0.1-2.el7rhgs.noarch

Attaching the gdeploy conf file and its output.
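For context, gdeploy is driven by the conf file attached below and is usually invoked with the -c option; the filename here is a placeholder for the attached conf:

# gdeploy -c backend-setup.conf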
Created attachment 1211302 [details] Gdeploy conf file
Created attachment 1211303 [details] Gdeploy output
Manisha, since wiping a filesystem signature is a risky operation, gdeploy does not wipe it by default. wipefs=yes should be set in the [backend-setup] or [pv] section. For example:

[hosts]
10.70.42.166
10.70.41.241

[backend-setup]
devices=vdb
wipefs=yes

This should be used when the devices carry a btrfs filesystem. If `wipefs' is left out, it is taken as `no'. Your configuration should be:

[hosts]
10.70.37.202
10.70.37.97

[backend-setup]
devices=sdb,sdc
vgs=vg1,vg2
pools=pool1,pool2
lvs=lv1,lv2
wipefs=yes
mountpoints=/mnt/data1,/mnt/data2
brick_dirs=/mnt/data1/1,/mnt/data2/2

[volume]
action=create
volname=vol1
replica=yes
replica_count=2
force=yes

[clients]
action=mount
volname=vol1
hosts=10.70.37.137
fstype=glusterfs
client_mount_points=/mnt/gg1/
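For a setup that drives only the PV step, the same flag should apply in a standalone [pv] section. A minimal sketch along those lines, assuming the same two devices (adapted from the examples above, not taken from a tested config):

[hosts]
10.70.37.202
10.70.37.97

[pv]
action=create
devices=sdb,sdc
wipefs=yes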
With the wipefs=yes option set in the config file, the btrfs filesystem signature is wiped and PV creation is successful. Hence marking this bug as Verified.

# rpm -qa | grep gluster
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
glusterfs-client-xlators-3.8.4-2.el7rhgs.x86_64
glusterfs-server-3.8.4-2.el7rhgs.x86_64
vdsm-gluster-4.17.33-1.el7rhgs.noarch
gluster-nagios-addons-0.2.8-1.el7rhgs.x86_64
glusterfs-3.8.4-2.el7rhgs.x86_64
glusterfs-api-3.8.4-2.el7rhgs.x86_64
glusterfs-cli-3.8.4-2.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-2.el7rhgs.x86_64
glusterfs-libs-3.8.4-2.el7rhgs.x86_64
glusterfs-fuse-3.8.4-2.el7rhgs.x86_64
python-gluster-3.8.4-2.el7rhgs.noarch

# rpm -qa | grep gdeploy
gdeploy-2.0.1-2.el7rhgs.noarch
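As a quick sanity check after the run, the created LVM objects can be listed with the standard LVM tools, using the names from the config above:

# pvs /dev/sdb /dev/sdc
# vgs vg1 vg2

Both should report the PVs and VGs without any btrfs signature warnings.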
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHSA-2017-0260.html