Description of problem:
Already used brick is shown in the add bricks dialog with a different mount point after creating a gluster volume snapshot.

Version-Release number of selected component (if applicable):
rhsc-3.1.0-0.58.master.el6

How reproducible:
Always

Steps to Reproduce:
1. Create a brick using RHGSC
2. Create a volume using the brick created in step 1
3. Take a snapshot of the volume
4. Sync storage devices
5. Add a new volume
6. A new brick directory is listed among the available bricks with a mount point like /run/gluster/snaps/*

Actual results:
The already used brick is shown with a different mount point in the add bricks dialog.

Expected results:
The already used brick should not be shown in the add bricks dialog.

Additional info:
This happens because of an issue with blivet: blivet reports the mount point of the LVM snapshot as the mount point of the origin LV.
Please provide fuller details of the bug. In particular, can you give an example with a snapshot volume, the assorted LVs and mount points, and so forth, to demonstrate what you are observing? It is difficult for me to connect the story about the bricks with the story about blivet. Please also attach the blivet log.
Created attachment 1038233 [details] Program to test the mount point issue in blivet
Assume I have a thin LV named brick1 mounted at /gluster-bricks/brick1. I create a snapshot of it named test-snap and mount it at '/snap-mount/test-snap'. Now when I read the device list using blivet, it gives the mount point '/snap-mount/test-snap' for both the LV and the LVM snapshot. See the following example:

[root@dhcp43-53 ~]# df -ah
Filesystem                                                 Size  Used Avail Use% Mounted on
/dev/mapper/vg--brick1-brick1                              5.0G   33M  5.0G   1% /gluster-bricks/brick1
/dev/mapper/vg--brick1-a4af8915264c4130b3f7b7e7f63411d9_0  5.0G   33M  5.0G   1% /snap-mounts/brick1

[root@dhcp43-53 ~]# python blivet-mount-issue-test.py
name : rhel_dhcp43-53-root
type : lvmlv
mount point : /

name : vda1
type : partition
mount point : /boot

name : vg-brick1-a4af8915264c4130b3f7b7e7f63411d9_0
type : lvmthinsnapshot
mount point : /snap-mounts/brick1

name : vg-brick1-brick1
type : lvmthinlv
mount point : /snap-mounts/brick1
[root@dhcp43-53 ~]#

You can see from the above output that although the LV and the snapshot are mounted at different locations, blivet gives the same mount point for both devices.

Note: I have attached the program 'blivet-mount-issue-test.py' for testing purposes.
Created attachment 1038234 [details] Blivet log blivet log
Created attachment 1038235 [details] blivet-mount-issue-test.py
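For readers who do not have the attachment handy, a minimal sketch of what such a listing script could look like (the blivet API usage here is my assumption; the actual attached script may differ):

#!/usr/bin/env python
# Minimal sketch (assumed blivet API; the real attachment may differ):
# list each device known to blivet together with its type and the
# mount point blivet reports for its format.
import blivet

b = blivet.Blivet()
b.reset()  # scan the system and populate the device tree

for device in b.devices:
    fmt = device.format
    mountpoint = getattr(fmt, "mountpoint", None) if fmt else None
    if mountpoint:
        print("name : %s" % device.name)
        print("type : %s" % device.type)
        print("mount point : %s" % mountpoint)
        print("")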
Impact - the used brick's mount point is changed and shown as available (the snapshot mount point is shown as available in the Add Bricks dialog). This may lead to data corruption if the user goes with the UI recommendation.
Please add doc text for this known issue.
(In reply to Ramesh N from comment #6)
> Created attachment 1038235 [details]
> blivet-mount-issue-test.py

Please extend the Python script so that it displays more complete info about the device formats.

print("name: %s" % device.name)
print("format: %r, _mountpoint: %s" % (device.format, device.format._mountpoint))

should do it.

Filesystem formats acknowledge two different sorts of mountpoints: the mountpoint on which they are supposed to be mounted and the mountpoint on which they are actually mounted. _mountpoint is the value on which they are actually mounted. In this case, these should be the same for each LV, but if they are different, that information will be useful.

Also, we want to rule out the possibility that the snapshot and its origin are sharing the same format object. We can find out by looking at the ids of the format objects, which are unique and are displayed by '%r'.

Thanks!
[root@dhcp43-53 ~]# ./blivet-mount-issue-test.py
name: rhel_dhcp43-53-root, type : lvmlv
format: XFS instance (0x2db8490) object id 30--
  type = xfs  name = xfs  status = True
  device = /dev/mapper/rhel_dhcp43--53-root
  uuid = 8da8b8c9-7940-4fa3-b055-7c814201f094  exists = True
  options = rw,seclabel,relatime,attr2,inode64,noquota
  supported = True  formattable = True  resizable = False
  mountpoint = /  mountopts = rw,seclabel,relatime,attr2,inode64,noquota
  label = None  size = 0 B  targetSize = 0 B
, _mountpoint: /

name: vda1, type : partition
format: XFS instance (0x2db84d0) object id 13--
  type = xfs  name = xfs  status = True
  device = /dev/vda1
  uuid = a2a677cb-efd9-442b-a9bd-335e59cf5b70  exists = True
  options = rw,seclabel,relatime,attr2,inode64,noquota
  supported = True  formattable = True  resizable = False
  mountpoint = /boot  mountopts = rw,seclabel,relatime,attr2,inode64,noquota
  label = None  size = 0 B  targetSize = 0 B
, _mountpoint: /boot

name: vg-brick1-a4af8915264c4130b3f7b7e7f63411d9_0, type : lvmthinsnapshot
format: DeviceFormat instance (0x2dd3610) object id 49--
  type = None  name = Unknown  status = False
  device = /dev/mapper/vg--brick1-brick1
  uuid = None  exists = True
  options = None  supported = False  formattable = False  resizable = False
, _mountpoint: /snap-mounts/brick1

name: vg-brick1-brick1, type : lvmthinlv
format: DeviceFormat instance (0x2dd3610) object id 49--
  type = None  name = Unknown  status = False
  device = /dev/mapper/vg--brick1-brick1
  uuid = None  exists = True
  options = None  supported = False  formattable = False  resizable = False
, _mountpoint: /snap-mounts/brick1
[root@dhcp43-53 ~]#
Created attachment 1040265 [details] blivet-mount-issue-test.py

Updated the test script with the required changes.
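For reference, a sketch of the kind of extension requested in comment 9, printing the full format repr next to _mountpoint (again, the blivet calls are an assumption; the actual updated attachment may differ):

#!/usr/bin/env python
# Sketch of the extension asked for in comment 9 (assumed blivet API):
# print the full repr of each device's format, which includes its unique
# object id, alongside the raw _mountpoint attribute.
import blivet

b = blivet.Blivet()
b.reset()

for device in b.devices:
    # %r exposes the format's object id, so a snapshot and its origin
    # sharing a single format object become directly visible.
    print("name: %s, type : %s" % (device.name, device.type))
    print("format: %r, _mountpoint: %s"
          % (device.format, getattr(device.format, "_mountpoint", None)))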
Confirmed that the snapshot and origin are sharing a DeviceFormat object.
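One way that sharing can be spotted programmatically (a sketch using the same assumed blivet API as the test script):

# Sketch (assumed API): flag devices whose .format attribute is the very
# same Python object, which is what the matching object ids in comment 10
# show for the thin LV and its snapshot.
import blivet

b = blivet.Blivet()
b.reset()

seen = {}
for device in b.devices:
    key = id(device.format)
    if key in seen:
        print("%s and %s share one format object: %r"
              % (seen[key], device.name, device.format))
    else:
        seen[key] = device.name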
dlehman, can you suggest a work-around?
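While waiting for a blivet fix, one conceivable consumer-side workaround (purely a sketch of an idea, not the patch that was later attached) would be to resolve the mount point from /proc/mounts using the device path instead of trusting format._mountpoint:

# Hypothetical workaround sketch: look up the mount point directly from
# /proc/mounts keyed on the resolved device node, so an LV and its
# mounted snapshot cannot be confused with each other.
import os

def mountpoint_for(device_path):
    """Return the mount point of device_path, or None if it is not mounted."""
    real = os.path.realpath(device_path)
    with open("/proc/mounts") as mounts:
        for line in mounts:
            dev, mnt = line.split()[:2]
            if dev.startswith("/") and os.path.realpath(dev) == real:
                # /proc/mounts escapes spaces in paths as \040
                return mnt.replace("\\040", " ")
    return None

# Example with the hypothetical device path from comment 4:
# print(mountpoint_for("/dev/mapper/vg--brick1-brick1"))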
Created attachment 1041038 [details] blivet patch
Hi! I've just attached a patch appropriate for the RHEL 6 version of blivet. Please apply the patch to the installed blivet and rerun the previous test. Please upload the logs and show the results of the script as before. Thanks!
I was able to verify the attached patch. It works. Mount points are now returned correctly for LVs that have mounted snapshots.

[root@dhcp43-173 /]# ./blivet-mount-issue-test.py
name: vda1, type : partition
format: Ext4FS instance (0x2583f90) object id 13--
  type = ext4  name = ext4  status = True
  device = /dev/vda1
  uuid = 535763cb-183e-48a7-86b2-068418be774f  exists = True
  options = rw,seclabel,relatime,barrier=1,data=ordered
  supported = True  formattable = True  resizable = False
  mountpoint = /boot  mountopts = rw,seclabel,relatime,barrier=1,data=ordered
  label = None  size = 0 B  targetSize = 0 B
, _mountpoint: /boot

name: vg-brick1-186ee6da072f4edaa2e0c29a554f6715_0, type : lvmthinsnapshot
format: XFS instance (0x25f4a90) object id 53--
  type = xfs  name = xfs  status = True
  device = /dev/mapper/vg--brick1-186ee6da072f4edaa2e0c29a554f6715_0
  uuid = d8dfd734-1b95-450a-9038-8d637fff5ca5  exists = True
  options = rw,seclabel,relatime,nouuid,attr2,delaylog,logbsize=64k,sunit=128,swidth=128,noquota
  supported = True  formattable = True  resizable = False
  mountpoint = /var/run/gluster/snaps/186ee6da072f4edaa2e0c29a554f6715/brick1
  mountopts = rw,seclabel,relatime,nouuid,attr2,delaylog,logbsize=64k,sunit=128,swidth=128,noquota
  label = None  size = 0 B  targetSize = 0 B
, _mountpoint: /var/run/gluster/snaps/186ee6da072f4edaa2e0c29a554f6715/brick1

name: vg-brick1-brick1, type : lvmthinlv
format: XFS instance (0x25f4290) object id 49--
  type = xfs  name = xfs  status = True
  device = /dev/mapper/vg--brick1-brick1
  uuid = d8dfd734-1b95-450a-9038-8d637fff5ca5  exists = True
  options = rw,seclabel,relatime,attr2,delaylog,logbsize=64k,sunit=128,swidth=128,noquota
  supported = True  formattable = True  resizable = False
  mountpoint = /gluster-bricks/brick1
  mountopts = rw,seclabel,relatime,attr2,delaylog,logbsize=64k,sunit=128,swidth=128,noquota
  label = None  size = 0 B  targetSize = 0 B
, _mountpoint: /gluster-bricks/brick1

name: vg_dhcp43173-lv_root, type : lvmlv
format: Ext4FS instance (0x2583fd0) object id 25--
  type = ext4  name = ext4  status = True
  device = /dev/mapper/vg_dhcp43173-lv_root
  uuid = 680602d4-c47a-43d7-9541-113685680d04  exists = True
  options = rw,seclabel,relatime,barrier=1,data=ordered
  supported = True  formattable = True  resizable = False
  mountpoint = /  mountopts = rw,seclabel,relatime,barrier=1,data=ordered
  label = None  size = 0 B  targetSize = 0 B
, _mountpoint: /
[root@dhcp43-173 /]#
*** Bug 1235415 has been marked as a duplicate of this bug. ***
Moving to MODIFIED as the dependent bugs are in the MODIFIED state.
(In reply to Ramesh N from comment #0)
~~~
[...]
Description of problem: Already used brick is shown in the add bricks dialog with different mount point after creating gluster volume snapshot.
[...]
Expected results: Already used brick should not be shown in the add bricks dialog.
[...]
~~~

rhsc-3.1.0-0.62.el6.noarch

On RHEL6
========
python-blivet-1.0.0.2-1.el6rhs.noarch
glusterfs-3.7.1-7.el6rhs.x86_64

After the fix, already used bricks (with snapshots) are shown with the same mount point.

On RHEL7
========
python-blivet-0.61.0.26-1.el7.noarch
glusterfs-3.7.1-7.el7rhgs.x86_64

Nothing was fixed; the already used brick is still shown in the add bricks dialog with a different mount point after creating a gluster volume snapshot.

Please add Fixed In Version.

--> ASSIGNED
It should be fixed with python-blivet-1.0.0.2-1.el6rhs.noarch in RHEL-6. Can you sync the storage devices again and retest? For RHEL-7, we need the RHEL 7.1 Z Stream bz#1236988 to be fixed. Let me keep this in POST until the RHEL 7 bug is also fixed.
(In reply to Ramesh N from comment #22)
> It should be fixed with python-blivet-1.0.0.2-1.el6rhs.noarch in RHEL-6. Can
> you sync the storage devices again and test.

Doesn't help, still the same.

> For RHEL-7, we need the RHEL7.1
> Z Stream bz#1236988 to be fixed. Let me keep it in post until RHEL7 bug also
> fixed.
Moving this bug to ON_QA as the respective RHEL 7 bug is fixed in RHEL 7.2.

Note: after a snapshot restore, the old brick will be shown as a free brick and will be listed in the add bricks dialog. Technically this is correct, as the user can create a new volume with those bricks.
Retested today with the scenario from comment 0.

Issues found during retesting:
Comment 24
Bug 1242129
Bug 1242128

--> VERIFIED
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHEA-2015-1494.html