Bug 1230669 - Used brick is shown in add bricks dialog with different mount point after creating snapshot
Summary: Used brick is shown in add bricks dialog with different mount point after creating snapshot
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhsc
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: ---
Target Release: RHGS 3.1.0
Assignee: Ramesh N
QA Contact: Stanislav Graf
URL:
Whiteboard:
Duplicates: 1235415
Depends On: 1232159 1234454 1236988
Blocks: 1202842
 
Reported: 2015-06-11 10:27 UTC by Ramesh N
Modified: 2015-07-29 05:33 UTC
CC: 13 users

Fixed In Version: python-blivet-1.0.0.2-1.el6rhs.noarch/python-blivet-0.61.15.9-1.el7
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-07-29 05:33:06 UTC
Embargoed:


Attachments
Program to test the mount point issue in blivet (264 bytes, text/plain)
2015-06-13 05:40 UTC, Ramesh N
Blivet log (103.37 KB, text/plain)
2015-06-13 06:15 UTC, Ramesh N
blivet-mount-issue-test.py (319 bytes, text/plain)
2015-06-13 06:16 UTC, Ramesh N
blivet-mount-issue-test.py (1.04 KB, text/x-python)
2015-06-18 06:23 UTC, Ramesh N
blivet patch (5.80 KB, application/mbox)
2015-06-19 17:57 UTC, mulhern


Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1232159 0 high CLOSED Incorrect mountpoint for lv with existing snapshot lv 2021-02-22 00:41:40 UTC
Red Hat Bugzilla 1242128 0 unspecified CLOSED Deleting a volume should delete also fstab entry 2021-02-22 00:41:40 UTC
Red Hat Bugzilla 1242129 1 None None None 2021-01-20 06:05:38 UTC
Red Hat Bugzilla 1242442 0 unspecified CLOSED Restoring volume should change fstab entry 2021-02-22 00:41:40 UTC
Red Hat Bugzilla 1242444 1 None None None 2021-01-20 06:05:38 UTC
Red Hat Product Errata RHEA-2015:1494 0 normal SHIPPED_LIVE Red Hat Gluster Storage Console 3.1 Enhancement and bug fixes 2015-07-29 09:24:02 UTC
oVirt gerrit 42424 0 master ABANDONED gluster: fix mount point issue for snapshot thin lv Never


Description Ramesh N 2015-06-11 10:27:12 UTC
Description of problem:
An already used brick is shown in the add bricks dialog with a different mount point after creating a gluster volume snapshot.

Version-Release number of selected component (if applicable):

rhsc-3.1.0-0.58.master.el6

How reproducible:

Always

Steps to Reproduce:
1. Create a Brick using RHGSC
2. Create a volume using the brick created in step 1
3. Take snapshot of the volume
4. Sync Storage devices
5. Add New Volume
6. A new brick directory is listed among the available bricks with a mount point like /run/gluster/snaps/*

Actual results:

  The already used brick is shown with a different mount point in the add bricks dialog.
  
Expected results:

An already used brick should not be shown in the add bricks dialog.


Additional info:

 This happens because of a bug in blivet: blivet reports the mount point of the lvmsnapshot as the mount point of the origin LV.

Comment 2 mulhern 2015-06-12 13:53:57 UTC
Please provide fuller details of the bug. In particular, can you give an example with a snapshot volume, the associated LVs and mount points, and so forth, to demonstrate what you are observing? It is difficult for me to connect the story about the bricks with the story about blivet.

Please also attach the blivet log.

Comment 3 Ramesh N 2015-06-13 05:40:37 UTC
Created attachment 1038233 [details]
Program to test the mount point issue in blivet

Comment 4 Ramesh N 2015-06-13 05:41:50 UTC
Assume I have a thin LV named brick1 mounted at /gluster-bricks/brick1. I create a snapshot of it named test-snap and mount it at '/snap-mount/test-snap'. Now when I read the device list using blivet, it gives the mount point '/snap-mount/test-snap' for both the LV and the lvmsnapshot.

See the following example:
[root@dhcp43-53 ~]# df -ah
Filesystem                                                 Size  Used Avail Use% Mounted on
/dev/mapper/vg--brick1-brick1                              5.0G   33M  5.0G   1% /gluster-bricks/brick1
/dev/mapper/vg--brick1-a4af8915264c4130b3f7b7e7f63411d9_0  5.0G   33M  5.0G   1% /snap-mounts/brick1

[root@dhcp43-53 ~]# python blivet-mount-issue-test.py 
name :  rhel_dhcp43-53-root  type : lvmlv  mount point :  /
name :  vda1  type : partition  mount point :  /boot
name :  vg-brick1-a4af8915264c4130b3f7b7e7f63411d9_0  type : lvmthinsnapshot  mount point :  /snap-mounts/brick1
name :  vg-brick1-brick1  type : lvmthinlv  mount point :  /snap-mounts/brick1
[root@dhcp43-53 ~]# 

You can see from the above output that although the LV and the snapshot are mounted at different locations, blivet reports the same mount point for both devices.

Note: I have attached the program 'blivet-mount-issue-test.py' for testing purposes.
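
The attachment itself is not reproduced inline. A minimal sketch of such a test (assuming blivet's public Blivet()/reset() API; the real script may differ) would be:

    import blivet

    b = blivet.Blivet()
    b.reset()  # scan the system and populate the device tree

    for device in b.devices:
        # getattr() guards against non-filesystem formats, which may not
        # carry a mountpoint attribute
        mountpoint = getattr(device.format, "mountpoint", None)
        print("name :  %s  type : %s  mount point :  %s"
              % (device.name, device.type, mountpoint))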

Comment 5 Ramesh N 2015-06-13 06:15:45 UTC
Created attachment 1038234 [details]
Blivet log


Comment 6 Ramesh N 2015-06-13 06:16:55 UTC
Created attachment 1038235 [details]
blivet-mount-issue-test.py

Comment 7 Sahina Bose 2015-06-15 08:48:00 UTC
Impact: the used brick's mount point is changed and the brick is shown as available (the snapshot mount point is shown as available in the add bricks dialog). This may lead to data corruption if the user follows the UI recommendation.

Comment 9 Shalaka 2015-06-16 10:13:00 UTC
Please add doc text for this known issue.

Comment 10 mulhern 2015-06-17 16:45:25 UTC
(In reply to Ramesh N from comment #6)
> Created attachment 1038235 [details]
> blivet-mount-issue-test.py

Please extend the Python script so that it displays more complete info about the device formats.

print("name: %s", device.name)
print("format: %r, _mountpoint: %s" % (device.format, device.format._mountpoint))

should do it.

Filesystem formats acknowledge two different sorts of mountpoints: the mountpoint on which they are supposed to be mounted and the mountpoint on which they are actually mounted. _mountpoint is the one on which they are actually mounted. In this case, these should be the same for each LV, but if they are different, that information will be useful.

Also, we want to rule out the possibility that the snapshot and its origin are sharing the same format object. We can find out by looking at the ids of the format objects, which are unique and are displayed by '%r'.

Thanks!
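
For illustration, one way to check for such aliasing (a sketch only, not part of the attached script) is to group devices by the identity of their format objects:

    import blivet
    from collections import defaultdict

    b = blivet.Blivet()
    b.reset()  # scan the system and build the device tree

    by_format = defaultdict(list)
    for device in b.devices:
        by_format[id(device.format)].append(device.name)

    for fmt_id, names in by_format.items():
        if len(names) > 1:
            # two devices holding one DeviceFormat instance is exactly the
            # aliasing we want to rule out (or confirm)
            print("shared format object 0x%x: %s" % (fmt_id, ", ".join(names)))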

Comment 11 Ramesh N 2015-06-18 06:20:50 UTC
[root@dhcp43-53 ~]# ./blivet-mount-issue-test.py 
name: rhel_dhcp43-53-root,  type : lvmlv
format: XFS instance (0x2db8490) object id 30--
  type = xfs  name = xfs  status = True
  device = /dev/mapper/rhel_dhcp43--53-root  uuid = 8da8b8c9-7940-4fa3-b055-7c814201f094  exists = True
  options = rw,seclabel,relatime,attr2,inode64,noquota  supported = True  formattable = True  resizable = False
  mountpoint = /  mountopts = rw,seclabel,relatime,attr2,inode64,noquota
  label = None  size = 0 B  targetSize = 0 B
, _mountpoint: /
name: vda1,  type : partition
format: XFS instance (0x2db84d0) object id 13--
  type = xfs  name = xfs  status = True
  device = /dev/vda1  uuid = a2a677cb-efd9-442b-a9bd-335e59cf5b70  exists = True
  options = rw,seclabel,relatime,attr2,inode64,noquota  supported = True  formattable = True  resizable = False
  mountpoint = /boot  mountopts = rw,seclabel,relatime,attr2,inode64,noquota
  label = None  size = 0 B  targetSize = 0 B
, _mountpoint: /boot
name: vg-brick1-a4af8915264c4130b3f7b7e7f63411d9_0,  type : lvmthinsnapshot
format: DeviceFormat instance (0x2dd3610) object id 49--
  type = None  name = Unknown  status = False
  device = /dev/mapper/vg--brick1-brick1  uuid = None  exists = True
  options = None  supported = False  formattable = False  resizable = False
, _mountpoint: /snap-mounts/brick1
name: vg-brick1-brick1,  type : lvmthinlv
format: DeviceFormat instance (0x2dd3610) object id 49--
  type = None  name = Unknown  status = False
  device = /dev/mapper/vg--brick1-brick1  uuid = None  exists = True
  options = None  supported = False  formattable = False  resizable = False
, _mountpoint: /snap-mounts/brick1
[root@dhcp43-53 ~]#

Comment 12 Ramesh N 2015-06-18 06:23:37 UTC
Created attachment 1040265 [details]
blivet-mount-issue-test.py

Updated the test script with the required changes.

Comment 13 mulhern 2015-06-18 12:48:18 UTC
Confirmed that the snapshot and origin are sharing a DeviceFormat object.

Comment 14 mulhern 2015-06-18 13:07:01 UTC
dlehman, can you suggest a work-around?

Comment 15 mulhern 2015-06-19 17:57:18 UTC
Created attachment 1041038 [details]
blivet patch

Comment 16 mulhern 2015-06-19 18:00:58 UTC
Hi!

I've just attached a patch appropriate for RHEL6 version of blivet.

Please apply patch to installed blivet and rerun previous test.

Please upload logs and show results of script as formerly.

Thanks!
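
The patch is attached as an mbox and is not reproduced inline. For comparison only, a consumer-side workaround (a sketch, not what the patch does) would be to resolve a device's mount point directly from /proc/mounts by its device node, bypassing the possibly shared format object:

    import os

    def mountpoint_for(device_path):
        # resolve /dev/mapper symlinks so device-mapper names compare equal
        real = os.path.realpath(device_path)
        with open("/proc/mounts") as mounts:
            for line in mounts:
                dev, mnt = line.split()[:2]
                if os.path.realpath(dev) == real:
                    # /proc/mounts octal-escapes spaces as \040
                    return mnt.replace("\\040", " ")
        return None

    print(mountpoint_for("/dev/mapper/vg--brick1-brick1"))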

Comment 17 Ramesh N 2015-06-22 14:17:23 UTC
I was able to verify the attached patch. It works: mount points are now returned correctly for LVs that have mounted snapshots.

[root@dhcp43-173 /]# ./blivet-mount-issue-test.py 
name: vda1,  type : partition
format: Ext4FS instance (0x2583f90) object id 13--
  type = ext4  name = ext4  status = True
  device = /dev/vda1  uuid = 535763cb-183e-48a7-86b2-068418be774f  exists = True
  options = rw,seclabel,relatime,barrier=1,data=ordered  supported = True  formattable = True  resizable = False
  mountpoint = /boot  mountopts = rw,seclabel,relatime,barrier=1,data=ordered
  label = None  size = 0 B  targetSize = 0 B
, _mountpoint: /boot
name: vg-brick1-186ee6da072f4edaa2e0c29a554f6715_0,  type : lvmthinsnapshot
format: XFS instance (0x25f4a90) object id 53--
  type = xfs  name = xfs  status = True
  device = /dev/mapper/vg--brick1-186ee6da072f4edaa2e0c29a554f6715_0  uuid = d8dfd734-1b95-450a-9038-8d637fff5ca5  exists = True
  options = rw,seclabel,relatime,nouuid,attr2,delaylog,logbsize=64k,sunit=128,swidth=128,noquota  supported = True  formattable = True  resizable = False
  mountpoint = /var/run/gluster/snaps/186ee6da072f4edaa2e0c29a554f6715/brick1  mountopts = rw,seclabel,relatime,nouuid,attr2,delaylog,logbsize=64k,sunit=128,swidth=128,noquota
  label = None  size = 0 B  targetSize = 0 B
, _mountpoint: /var/run/gluster/snaps/186ee6da072f4edaa2e0c29a554f6715/brick1
name: vg-brick1-brick1,  type : lvmthinlv
format: XFS instance (0x25f4290) object id 49--
  type = xfs  name = xfs  status = True
  device = /dev/mapper/vg--brick1-brick1  uuid = d8dfd734-1b95-450a-9038-8d637fff5ca5  exists = True
  options = rw,seclabel,relatime,attr2,delaylog,logbsize=64k,sunit=128,swidth=128,noquota  supported = True  formattable = True  resizable = False
  mountpoint = /gluster-bricks/brick1  mountopts = rw,seclabel,relatime,attr2,delaylog,logbsize=64k,sunit=128,swidth=128,noquota
  label = None  size = 0 B  targetSize = 0 B
, _mountpoint: /gluster-bricks/brick1
name: vg_dhcp43173-lv_root,  type : lvmlv
format: Ext4FS instance (0x2583fd0) object id 25--
  type = ext4  name = ext4  status = True
  device = /dev/mapper/vg_dhcp43173-lv_root  uuid = 680602d4-c47a-43d7-9541-113685680d04  exists = True
  options = rw,seclabel,relatime,barrier=1,data=ordered  supported = True  formattable = True  resizable = False
  mountpoint = /  mountopts = rw,seclabel,relatime,barrier=1,data=ordered
  label = None  size = 0 B  targetSize = 0 B
, _mountpoint: /
[root@dhcp43-173 /]#

Comment 18 Ramesh N 2015-06-25 04:39:33 UTC
*** Bug 1235415 has been marked as a duplicate of this bug. ***

Comment 19 Ramesh N 2015-07-03 10:06:46 UTC
Moving to MODIFIED as the dependent bugs are in MODIFIED state.

Comment 21 Stanislav Graf 2015-07-07 16:23:12 UTC
(In reply to Ramesh N from comment #0)
~~~
[...]
Description of problem:
An already used brick is shown in the add bricks dialog with a different mount point after creating a gluster volume snapshot.
[...]
Expected results:
An already used brick should not be shown in the add bricks dialog.
[...]
~~~

rhsc-3.1.0-0.62.el6.noarch

On RHEL6
========
python-blivet-1.0.0.2-1.el6rhs.noarch
glusterfs-3.7.1-7.el6rhs.x86_64

After the fix, already used bricks (with snapshots) are shown with the same mount point.

On RHEL7
========
python-blivet-0.61.0.26-1.el7.noarch
glusterfs-3.7.1-7.el7rhgs.x86_64

Nothing was fixed; the already used brick is still shown in the add bricks dialog with a different mount point after creating a gluster volume snapshot.

Please add Fixed In Version.

--> ASSIGNED

Comment 22 Ramesh N 2015-07-08 04:37:44 UTC
It should be fixed by python-blivet-1.0.0.2-1.el6rhs.noarch on RHEL 6. Can you sync the storage devices again and test? For RHEL 7, we need the RHEL 7.1 z-stream bug bz#1236988 to be fixed. Let me keep this bug in POST until the RHEL 7 bug is also fixed.

Comment 23 Stanislav Graf 2015-07-09 10:37:27 UTC
(In reply to Ramesh N from comment #22)
> It should be fixed with python-blivet-1.0.0.2-1.el6rhs.noarch in RHEL-6. Can
> you sync the storage devices again and test.

Doesn't help, still the same.

> For RHEL-7, we need the RHEL7.1
> Z Stream bz#1236988 to be fixed. Let me keep it in post until RHEL7 bug also
> fixed.

Comment 24 Ramesh N 2015-07-10 09:06:55 UTC
Moving this bug to ON_QA as the respective RHEL 7 bug is fixed in RHEL 7.2. 

Note: after a snapshot restore, the old brick will be shown as a free brick and will be listed in the add bricks dialog. Technically this is correct, as the user can create a new volume with those bricks.

Comment 26 Stanislav Graf 2015-07-11 09:24:00 UTC
Retested today with the scenario from comment 0.

Issues found during retesting:
Comment 24
Bug 1242129
Bug 1242128

--> VERIFIED

Comment 27 errata-xmlrpc 2015-07-29 05:33:06 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2015-1494.html

