Description of problem:
=======================
If the original bricks are mounted with custom xfs options and a snapshot is taken, the snapshot bricks inherit the parent brick's mount options. But when a node is rebooted, the snapshot bricks are not automatically remounted, leaving the snapshot unusable.

For example:
============
Original volume brick:

[root@rhs-arch-srv2 ~]# mount | grep brick2
/dev/mapper/RHS_vg2-RHS_lv2 on /rhs/brick2 type xfs (rw,noatime,allocsize=1MiB,noattr2,barrier,nogrpid,ihashsize=0,noikeep,inode64,largeio,logbufs=4,noalign,nouuid,osyncisosync,quota,gquota,swalloc)
[root@rhs-arch-srv2 ~]#

Snapshot brick:

[root@rhs-arch-srv2 ~]# mount | grep /var/run
/dev/mapper/RHS_vg2-8e87d5ac707343c4b42ea779befc53b2_0 on /var/run/gluster/snaps/8e87d5ac707343c4b42ea779befc53b2/brick2 type xfs (rw,noatime,allocsize=1MiB,noattr2,barrier,nogrpid,ihashsize=0,noikeep,inode64,largeio,logbufs=4,noalign,nouuid,osyncisosync,quota,gquota,swalloc,nouuid)

Reboot the node:
================
[root@rhs-arch-srv2 ~]# reboot

Broadcast message from root.eng.blr.redhat.com (/dev/pts/0) at 14:33 ...

The system is going down for reboot NOW!
[root@rhs-arch-srv2 ~]# Connection to rhs-arch-srv2.lab.eng.blr.redhat.com closed by remote host.
Connection to rhs-arch-srv2.lab.eng.blr.redhat.com closed.
[rhinduja@rhinduja ~]$

Once the node is up, check for the snapshot brick:
==================================================
[root@rhs-arch-srv2 ~]# mount | grep /var/run
[root@rhs-arch-srv2 ~]#

The LV still exists:
====================
[root@rhs-arch-srv2 ~]# lvs | grep 8e87d5ac707343c4b42e
  8e87d5ac707343c4b42ea779befc53b2_0 RHS_vg2 Vwi-a-tz-- 1.90t RHS_pool2 RHS_lv2 0.05
[root@rhs-arch-srv2 ~]#

Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.6.0.27-1.el6rhs.x86_64

How reproducible:
=================
Always

Steps to Reproduce:
===================
1. Create thin LVs and mount them with the following options:
   rw,noatime,allocsize=1MiB,noattr2,barrier,nogrpid,ihashsize=0,noikeep,inode64,largeio,logbufs=4,noalign,nouuid,osyncisosync,quota,gquota,swalloc
2. Create the volume with the above bricks
3. Create a snapshot of the volume
4. Check the snapshot brick's mount options; it should inherit the parent brick's options plus nouuid
5. Reboot the machine

Actual results:
===============
The snapshot bricks are not remounted after reboot.

Expected results:
=================
Once the node comes back up, the snapshot bricks should also be remounted. They are remounted correctly when the original bricks are mounted with default options, in which case their snapshots also inherit the default options.
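For reference, the reproduction steps above can be sketched as a shell session. The names (RHS_vg2, RHS_pool2, RHS_lv2, vol1, snap1) are illustrative, a thin pool is assumed to already exist, and the mount options are copied verbatim from the report (some of them are deprecated on newer kernels):

```shell
# Sketch of the reproduction; run as root. All LV/volume/snapshot names
# here are hypothetical.
MNT_OPTS="rw,noatime,allocsize=1MiB,noattr2,barrier,nogrpid,ihashsize=0,noikeep,inode64,largeio,logbufs=4,noalign,nouuid,osyncisosync,quota,gquota,swalloc"

# 1. Create a thin LV, format it, and mount it with the custom options
lvcreate --thin RHS_vg2/RHS_pool2 --virtualsize 100G --name RHS_lv2
mkfs.xfs /dev/RHS_vg2/RHS_lv2
mkdir -p /rhs/brick2
mount -o "$MNT_OPTS" /dev/RHS_vg2/RHS_lv2 /rhs/brick2

# 2. Create and start a volume on that brick
gluster volume create vol1 "$(hostname)":/rhs/brick2/dir force
gluster volume start vol1

# 3. Snapshot it; the snapshot brick should show the same options + nouuid
gluster snapshot create snap1 vol1
mount | grep /var/run/gluster/snaps

# 4. Reboot; after boot, the grep above should still list the snapshot brick
reboot
```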
We persist the mount options in the brickinfo file, as below:

root@rh1:~/workspace/git/glusterfs # cat /var/lib/glusterd/snaps/snap1/6d6c402060d24601901d1fdd8312a7e4/bricks/rh1\:-var-run-gluster-snaps-6d6c402060d24601901d1fdd8312a7e4-brick1-dir | grep 'mnt-opts'
mnt-opts=rw,noatime,allocsize=1MiB,noattr2,barrier,nogrpid,ihashsize=0,noikeep,inode64,largeio,logbufs=4,noalign,nouuid,osyncisosync,quota,gquota,swalloc

When glusterd parses this file, it reads the mount options as:

mnt-opts=rw,noatime,allocsize

which is not valid, since allocsize requires a value: everything from the next '=' in the line onward is ignored.
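The truncation can be demonstrated outside glusterd with a small shell sketch. The mnt-opts key is real, but the value list is trimmed for brevity, and the parsing commands are only an illustration of the behavior, not glusterd's actual C store code:

```shell
# A shortened version of the persisted brickinfo line:
line='mnt-opts=rw,noatime,allocsize=1MiB,noattr2,nouuid,swalloc'

# Buggy parse: treating every '=' as a delimiter cuts the value off at
# allocsize's own '=', silently dropping 1MiB and all later options:
buggy=$(printf '%s\n' "$line" | cut -d'=' -f2)
echo "$buggy"    # rw,noatime,allocsize

# Fixed parse: strip only up to the FIRST '=', so later '=' characters
# remain part of the value:
fixed=${line#*=}
echo "$fixed"    # rw,noatime,allocsize=1MiB,noattr2,nouuid,swalloc
```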
Patch posted upstream
Patch submitted: https://code.engineering.redhat.com/gerrit/#/c/31226/
Verified with the build: glusterfs-3.6.0.28-1.el6rhs.x86_64

Before reboot:
==============
[root@inception ~]# cat before_reboot*
/dev/mapper/RHS_vg10-RHS_lv10 on /rhs/brick10 type xfs (rw,noatime,nodiratime,inode64)
/dev/mapper/RHS_vg1-RHS_lv1 on /rhs/brick1 type xfs (rw,noatime,allocsize=1MiB,noattr2,barrier,nogrpid,ihashsize=0,noikeep,inode64,largeio,logbufs=4,noalign,nouuid,osyncisosync,quota,gquota,swalloc)
/dev/mapper/RHS_vg1-2aad9ef4dc7244b0a9f3ddd90c13318e_0 on /var/run/gluster/snaps/2aad9ef4dc7244b0a9f3ddd90c13318e/brick1 type xfs (rw,noatime,allocsize=1MiB,noattr2,barrier,nogrpid,ihashsize=0,noikeep,inode64,largeio,logbufs=4,noalign,nouuid,osyncisosync,quota,gquota,swalloc,nouuid)
[root@inception ~]#

After reboot:
=============
[root@inception ~]# cat after_reboot*
/dev/mapper/RHS_vg1-RHS_lv1 on /rhs/brick1 type xfs (rw,noatime,allocsize=1MiB,noattr2,barrier,nogrpid,ihashsize=0,noikeep,inode64,largeio,logbufs=4,noalign,nouuid,osyncisosync,quota,gquota,swalloc)
/dev/mapper/RHS_vg10-RHS_lv10 on /rhs/brick10 type xfs (rw,noatime,nodiratime,inode64)
/dev/mapper/RHS_vg1-2aad9ef4dc7244b0a9f3ddd90c13318e_0 on /var/run/gluster/snaps/2aad9ef4dc7244b0a9f3ddd90c13318e/brick1 type xfs (rw,noatime,allocsize=1MiB,noattr2,barrier,nogrpid,ihashsize=0,noikeep,inode64,largeio,logbufs=4,noalign,nouuid,osyncisosync,quota,gquota,swalloc,nouuid)
[root@inception ~]#

The snapshot brick is mounted after reboot:

[root@inception ~]# mount | grep /var/run
/dev/mapper/RHS_vg1-2aad9ef4dc7244b0a9f3ddd90c13318e_0 on /var/run/gluster/snaps/2aad9ef4dc7244b0a9f3ddd90c13318e/brick1 type xfs (rw,noatime,allocsize=1MiB,noattr2,barrier,nogrpid,ihashsize=0,noikeep,inode64,largeio,logbufs=4,noalign,nouuid,osyncisosync,quota,gquota,swalloc,nouuid)
[root@inception ~]#

Moving the bug to verified state.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHEA-2014-1278.html