Description of problem:
When creating a volume snapshot with the 'gluster snapshot create <snapname> <volname(s)>' command, the underlying LV snapshot is automatically mounted under /var/run/gluster/snaps/<UUID>/brick<n>. However, this mount does not inherit the mount options of the original brick that acts as the parent of the snapshot.
If the snapshot is restored, this could lead to performance degradation, functional limitations, or, in extreme scenarios, even potential data loss.
Version-Release number of selected component (if applicable):
# rpm -qa | grep -i glusterfs
Steps to Reproduce:
1. Create a volume snapshot with 'gluster snapshot create <snapname> <volname(s)>'
2. Compare the mount options of the parent brick and the snap in the output of the 'mount' command
Actual results:
Snapshot is mounted with only the 'rw' mount option.
Expected results:
Snapshot is mounted with the same mount options as specified for the parent brick.
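The comparison in step 2 can be scripted by reading the live mount table instead of eyeballing 'mount' output. A minimal sketch; the brick paths in the comments are the ones from this report and are assumptions about your layout:

```shell
# Print the mount options recorded for a given mountpoint in /proc/self/mounts
# (fields: device, mountpoint, fstype, options, ...).
mount_opts() {
    awk -v mp="$1" '$2 == mp { print $4 }' /proc/self/mounts
}

# To reproduce the comparison, run it against the origin brick and the
# snapshot brick, e.g. (paths from this report, adjust to your setup):
#   mount_opts /rhs/storage1
#   mount_opts /var/run/gluster/snaps/e326a70445d34fd0b34e803837df38ce/brick1
mount_opts /
```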
# gluster volume info
Volume Name: rep01
Volume ID: 4b7bf419-2681-46a7-a04f-cf57f81345be
Snap Volume: no
Number of Bricks: 1 x 2 = 2
# grep datavg /etc/fstab
/dev/mapper/datavg-rhsdata /rhs/storage1 xfs inode64,noatime 1 2
# gluster snapshot create rep01-snap01 rep01
snapshot create: success: Snap rep01-snap01 created successfully
# lvs | grep datavg
  LV                                 VG     Attr       LSize Pool        Origin  Data%  Move Log Cpy%Sync Convert
  e326a70445d34fd0b34e803837df38ce_0 datavg Vwi-aotz-- 1.81g rhsthinpool rhsdata 1.68
  rhsdata                            datavg Vwi-aotz-- 1.81g rhsthinpool         1.68
  rhsthinpool                        datavg twi-a-tz-- 1.81g                     2.25
# mount | grep datavg
/dev/mapper/datavg-rhsdata on /rhs/storage1 type xfs (rw,noatime,inode64)
/dev/mapper/datavg-e326a70445d34fd0b34e803837df38ce_0 on /var/run/gluster/snaps/e326a70445d34fd0b34e803837df38ce/brick1 type xfs (rw)
Patch http://review.gluster.org/#/c/8394 submitted upstream
We should take it in but not delay build 26 for this one.
Fix at https://code.engineering.redhat.com/gerrit/30209
Note: snapshot bricks are mounted with the additional option 'nouuid'. Even if the original brick does not have the 'nouuid' option, the snapshot brick will have it, because the LV snapshot is a block-level copy whose XFS filesystem carries the same UUID as the origin and would otherwise be rejected as a duplicate.
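In effect, the fix has to reuse the origin brick's live mount options and append 'nouuid' when mounting the snapshot brick. A rough illustrative sketch in shell, not the actual glusterd C code; the device and mountpoint names are taken from the example above:

```shell
# Illustrative only: derive the snapshot brick's mount options from the
# origin brick's entry in /proc/self/mounts, then append nouuid so the
# XFS snapshot (which shares the origin's filesystem UUID) can be mounted.
brick_mnt="/rhs/storage1"   # origin brick mountpoint (from this report)
snap_dev="/dev/mapper/datavg-e326a70445d34fd0b34e803837df38ce_0"
snap_mnt="/var/run/gluster/snaps/e326a70445d34fd0b34e803837df38ce/brick1"

opts=$(awk -v mp="$brick_mnt" '$2 == mp { print $4 }' /proc/self/mounts)
opts="${opts:-rw},nouuid"   # fall back to rw if the brick is not mounted here

# Printed rather than executed, since the real mount needs root:
echo mount -o "$opts" "$snap_dev" "$snap_mnt"
```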
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.