Bug 1360461 - gdeploy hangs if device has a filesystem signature
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: gdeploy
Version: 3.1
Hardware: Unspecified
OS: Linux
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: RHGS 3.1.3 Async
Assigned To: Sachidananda Urs
QA Contact: Manisha Saini
Keywords: ZStream
Depends On:
Blocks: 1351522
 
Reported: 2016-07-26 15:56 EDT by Cal Calhoun
Modified: 2017-03-07 12:42 EST
CC List: 4 users

See Also:
Fixed In Version: gdeploy-2.0.1-1
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-02-07 06:34:07 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
Gdeploy conf file (357 bytes, text/plain) - 2016-10-17 06:39 EDT, Manisha Saini
Gdeploy output (26.01 KB, text/plain) - 2016-10-17 06:39 EDT, Manisha Saini


External Trackers
Tracker ID: Red Hat Product Errata RHSA-2017:0260 | Priority: normal | Status: SHIPPED_LIVE | Summary: Important: ansible and gdeploy security and bug fix update | Last Updated: 2017-02-07 11:32:47 EST

Description Cal Calhoun 2016-07-26 15:56:50 EDT
Description of problem:

The device was formatted with btrfs.

Without using wipefs, gdeploy hangs indefinitely at PV creation with no error.

Once wipefs was used to remove the filesystem signature, gdeploy worked without issue.
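
For reference, the manual workaround looks roughly like this (a sketch assuming the affected device is /dev/sdb, as in the reproduction in comment 5; wipefs with no options only lists the signatures, while -a erases them):

# wipefs /dev/sdb
# wipefs -a /dev/sdb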

Version-Release number of selected component (if applicable):

RHEL7, gdeploy 2.0

How reproducible:

Consistently
Comment 2 Jonathan Liedy 2016-07-27 12:17:57 EDT
This bug is based on Red Hat support case #01674530.
Comment 4 Sachidananda Urs 2016-08-24 12:37:00 EDT
https://github.com/gluster/gdeploy/commit/b36d268 fixes the issue.
Comment 5 Manisha Saini 2016-10-17 06:38:46 EDT
PV creation fails with gdeploy when the device is formatted with a btrfs filesystem.

Steps:
1. Format the device with a btrfs filesystem
 # mkfs.btrfs /dev/sdb /dev/sdc

# btrfs filesystem show
Label: none  uuid: 7860bda1-5e7a-479c-b0af-f481ba8a14ff
	Total devices 2 FS bytes used 112.00KiB
	devid    1 size 5.00GiB used 1.53GiB path /dev/sdb
	devid    2 size 5.00GiB used 1.51GiB path /dev/sdc

2. Run the gdeploy script to create the PV and VG
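
A sketch of the invocation, assuming the attached conf file is saved as gdeploy.conf on the node running gdeploy:

# gdeploy -c gdeploy.conf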

Observation:

PV creation fails with the following error:

failed: [10.70.37.97] (item=/dev/sdb) => {"failed": true, "failed_when_result": true, "item": "/dev/sdb", "msg": "WARNING: btrfs signature detected on /dev/sdb at offset 65600. Wipe it? [y/n]: n\n  Aborted wiping of btrfs.\n  1 existing signature left on the device.\n  Aborting pvcreate on /dev/sdb.\n", "rc": 5}

# rpm -qa | grep gluster
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
glusterfs-client-xlators-3.8.4-2.el7rhgs.x86_64
glusterfs-server-3.8.4-2.el7rhgs.x86_64
vdsm-gluster-4.17.33-1.el7rhgs.noarch
gluster-nagios-addons-0.2.8-1.el7rhgs.x86_64
glusterfs-3.8.4-2.el7rhgs.x86_64
glusterfs-api-3.8.4-2.el7rhgs.x86_64
glusterfs-cli-3.8.4-2.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-2.el7rhgs.x86_64
glusterfs-libs-3.8.4-2.el7rhgs.x86_64
glusterfs-fuse-3.8.4-2.el7rhgs.x86_64
python-gluster-3.8.4-2.el7rhgs.noarch

# rpm -qa | grep gdeploy
gdeploy-2.0.1-2.el7rhgs.noarch

Attaching the gdeploy conf file and its output.
Comment 6 Manisha Saini 2016-10-17 06:39 EDT
Created attachment 1211302 [details]
Gdeploy conf file
Comment 7 Manisha Saini 2016-10-17 06:39 EDT
Created attachment 1211303 [details]
Gdeploy output
Comment 8 Sachidananda Urs 2016-10-18 01:43:37 EDT
Manisha, since wiping a filesystem signature is a risky operation, gdeploy does not wipe the filesystem signature by default.

wipefs=yes should be set in the [backend-setup] or [pv] section.

For example:

[hosts]
10.70.42.166
10.70.41.241

[backend-setup]
devices=vdb
wipefs=yes

This should be used when the device is formatted with btrfs. If `wipefs' is left out, it defaults to `no'.
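
For reference, a minimal sketch of the same option set in a [pv] section instead (assuming the same single device vdb and the standard [pv] keys):

[pv]
action=create
devices=vdb
wipefs=yes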

Your configuration should be:

[hosts]
10.70.37.202
10.70.37.97

[backend-setup]
devices=sdb,sdc
vgs=vg1,vg2
pools=pool1,pool2
lvs=lv1,lv2
wipefs=yes
mountpoints=/mnt/data1,/mnt/data2
brick_dirs=/mnt/data1/1,/mnt/data2/2

[volume]
action=create
volname=vol1
replica=yes
replica_count=2
force=yes

[clients]
action=mount
volname=vol1
hosts=10.70.37.137
fstype=glusterfs
client_mount_points=/mnt/gg1/
Comment 9 Manisha Saini 2016-10-18 02:56:28 EDT
After setting the wipefs=yes option in the config file, the btrfs filesystem signature is wiped and PV creation succeeds.

Hence marking this bug as Verified.
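
A quick way to confirm the result on the storage nodes (standard LVM listing commands, not part of the original verification output):

# pvs
# vgs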

# rpm -qa | grep gluster
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
glusterfs-client-xlators-3.8.4-2.el7rhgs.x86_64
glusterfs-server-3.8.4-2.el7rhgs.x86_64
vdsm-gluster-4.17.33-1.el7rhgs.noarch
gluster-nagios-addons-0.2.8-1.el7rhgs.x86_64
glusterfs-3.8.4-2.el7rhgs.x86_64
glusterfs-api-3.8.4-2.el7rhgs.x86_64
glusterfs-cli-3.8.4-2.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-2.el7rhgs.x86_64
glusterfs-libs-3.8.4-2.el7rhgs.x86_64
glusterfs-fuse-3.8.4-2.el7rhgs.x86_64
python-gluster-3.8.4-2.el7rhgs.noarch

# rpm -qa | grep gdeploy
gdeploy-2.0.1-2.el7rhgs.noarch
Comment 11 errata-xmlrpc 2017-02-07 06:34:07 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0260.html
