Description of problem:
=======================
Currently there is no validation of the snapshot-directory name for USS, which allows the snapshot-directory of a volume to be set to "a/b". But accessing "a/b" from a client is going to fail, since on Linux "a/b" means b is a subdirectory of a.

[root@inception ~]# gluster volume set vol0 snapshot-directory a/b
volume set: success
[root@inception ~]# gluster v i vol0 | grep snapshot-directory
features.snapshot-directory: a/b
[root@inception ~]#

Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.6.0.32-1.el6rhs.x86_64

How reproducible:
=================
Always

Steps to Reproduce:
===================
1. Create a 4 node cluster
2. Create a volume (2*2)
3. Enable USS
4. Set the snapshot-directory to a/b

Actual results:
===============
Setting the snapshot-directory succeeds, though accessing it from a client is bound to fail.

Expected results:
=================
Setting the snapshot-directory should fail with a usage message.
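The expected behaviour above amounts to validating the option value before accepting it. A minimal sketch of that check (illustrative Python only; the actual GlusterFS volume-set validation is C code and the function name here is hypothetical):

```python
def is_valid_snapdir_name(name: str) -> bool:
    """Accept only names that can be a single directory entry.

    A name containing '/' (e.g. 'a/b') would be interpreted by the
    client as a nested path, so 'gluster volume set ... snapshot-directory'
    should refuse it instead of reporting success.
    """
    if not name:
        return False
    if "/" in name:  # 'a/b' means b is a subdirectory of a on Linux
        return False
    return True

# The value from the report should be rejected:
assert not is_valid_snapdir_name("a/b")
# A conventional snapshot directory name should pass:
assert is_valid_snapdir_name(".snaps")
```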
Version: glusterfs 3.6.0.34
========
To add a few more scenarios similar to the one in the 'Description':

Setting a negative value for snapshot-directory succeeds, but accessing it from a client fails:

[root@snapshot13 ~]# gluster v set vol2 snapshot-directory -4
volume set: success
[root@snapshot13 ~]# gluster v i vol2

Volume Name: vol2
Type: Distributed-Replicate
Volume ID: d4881929-339c-4493-b7b9-2ef6957ed444
Status: Started
Snap Volume: no
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: snapshot13.lab.eng.blr.redhat.com:/rhs/brick3/b3
Brick2: snapshot14.lab.eng.blr.redhat.com:/rhs/brick3/b3
Brick3: snapshot15.lab.eng.blr.redhat.com:/rhs/brick3/b3
Brick4: snapshot16.lab.eng.blr.redhat.com:/rhs/brick3/b3
Options Reconfigured:
features.snapshot-directory: -4
features.barrier: disable
features.uss: enable
performance.readdir-ahead: on
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256

[root@dhcp-0-97 vol2_fuse]# cd -4
bash: cd: -4: invalid option
cd: usage: cd [-L|-P] [dir]

Setting ".." also works, but accessing it from a client fails since on Linux it moves one directory up:

[root@snapshot13 ~]# gluster v set vol2 snapshot-directory ..
volume set: success
[root@snapshot13 ~]# gluster v i vol2

Volume Name: vol2
Type: Distributed-Replicate
Volume ID: d4881929-339c-4493-b7b9-2ef6957ed444
Status: Started
Snap Volume: no
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: snapshot13.lab.eng.blr.redhat.com:/rhs/brick3/b3
Brick2: snapshot14.lab.eng.blr.redhat.com:/rhs/brick3/b3
Brick3: snapshot15.lab.eng.blr.redhat.com:/rhs/brick3/b3
Brick4: snapshot16.lab.eng.blr.redhat.com:/rhs/brick3/b3
Options Reconfigured:
features.snapshot-directory: ..
features.barrier: disable
features.uss: enable
performance.readdir-ahead: on
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256

[root@dhcp-0-97 ,]# cd /mnt/vol2_fuse/
[root@dhcp-0-97 vol2_fuse]# cd ..
[root@dhcp-0-97 mnt]#
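The two extra scenarios above suggest the validation should also refuse names that collide with shell or filesystem semantics. A hedged sketch of those additional checks (hypothetical helper, illustrative Python, not the actual GlusterFS code):

```python
def rejects_special_name(name: str) -> bool:
    """Return True if 'name' should be refused as a snapshot-directory.

    '.' and '..' already have fixed meanings in every directory
    (current and parent directory), and a leading '-' makes the name
    look like a command-line option to tools such as 'cd'.
    """
    return name in (".", "..") or name.startswith("-")

# Both values from this comment should be refused:
assert rejects_special_name("-4")   # 'cd -4' is parsed as an option
assert rejects_special_name("..")   # 'cd ..' moves one directory up
# A conventional name is fine:
assert not rejects_special_name(".snaps")
```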
This issue will be fixed in 3.1.