Bug 1163736 - [USS]: Need defined rules for snapshot-directory; setting it to a/b works, but on Linux a/b means b is a subdirectory of a
Keywords:
Status: CLOSED NEXTRELEASE
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: snapshot
Version: rhgs-3.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Vijaikumar Mallikarjuna
QA Contact: Rahul Hinduja
URL:
Whiteboard: USS
Depends On:
Blocks: 1166590 1168819 1304282 1305868
 
Reported: 2014-11-13 11:35 UTC by Rahul Hinduja
Modified: 2016-09-17 12:56 UTC
CC List: 8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Cloned to: 1168819 1304282
Environment:
Last Closed: 2016-02-03 09:05:55 UTC
Embargoed:



Description Rahul Hinduja 2014-11-13 11:35:21 UTC
Description of problem:
=======================

Currently there are no validation rules for the USS snapshot directory name, which allows the snapshot-directory option of a volume to be set to a/b. But accessing a/b from a client is going to fail, since on Linux a/b means that b is a subdirectory of a.

[root@inception ~]# gluster volume set vol0 snapshot-directory a/b
volume set: success
[root@inception ~]# gluster v i vol0 | grep snapshot-directory
features.snapshot-directory: a/b
[root@inception ~]# 
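
For illustration only (this is not GlusterFS code), a minimal C sketch of why a name such as a/b can never work as a single directory entry: the kernel treats '/' as the path separator, so "a/b" is always resolved as the component "b" inside the component "a".

/* Minimal sketch, not GlusterFS code: run in an empty directory, mkdir("a/b")
 * fails with ENOENT because the component "a" does not exist. There is no way
 * to create a single directory entry literally named "a/b". */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/types.h>

int main(void)
{
    if (mkdir("a/b", 0755) == -1)
        printf("mkdir(\"a/b\") failed: %s\n", strerror(errno));
    return 0;
}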

Version-Release number of selected component (if applicable):
=============================================================

glusterfs-3.6.0.32-1.el6rhs.x86_64


How reproducible:
=================

always


Steps to Reproduce:
===================
1. Create a 4 node cluster
2. Create a volume (2*2)
3. Enable USS
4. Set the snapshot-directory to a/b


Actual results:
===============

Setting the snapshot-directory succeeds, though accessing it from the client is bound to fail.


Expected results:
=================

Setting the snapshot-directory to such a value should fail with a usage error.
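
As a rough illustration of the kind of validation being requested (a hypothetical sketch, not the actual GlusterFS fix; the function name and rule set below are assumptions), a check on the proposed name could reject anything that cannot be looked up as a single path component, turning the "volume set: success" in the transcript above into a usage error:

/* Hypothetical validation sketch -- not GlusterFS code. Rejects names that
 * can never be resolved as a single directory entry on the client. */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static bool
snapdir_name_is_valid(const char *name)
{
    if (name == NULL || *name == '\0')
        return false;                          /* empty name */
    if (strchr(name, '/') != NULL)
        return false;                          /* "a/b" is a nested path */
    if (strcmp(name, ".") == 0 || strcmp(name, "..") == 0)
        return false;                          /* reserved directory entries */
    return true;
}

int main(void)
{
    /* ".snaps" is the USS default; "a/b" is the value from this report. */
    const char *candidates[] = { ".snaps", "a/b" };

    for (size_t i = 0; i < sizeof(candidates) / sizeof(candidates[0]); i++)
        printf("%s -> %s\n", candidates[i],
               snapdir_name_is_valid(candidates[i]) ? "accepted" : "rejected");
    return 0;
}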

Comment 2 senaik 2014-11-27 12:39:23 UTC
Version: glusterfs 3.6.0.34
========
To add a few more scenarios similar to what is mentioned in the 'Description':

Setting negative values for snapshot-directory is successful, but accessing the directory from the client fails:

[root@snapshot13 ~]# gluster v set vol2 snapshot-directory -4
volume set: success
[root@snapshot13 ~]# gluster v i vol2
 
Volume Name: vol2
Type: Distributed-Replicate
Volume ID: d4881929-339c-4493-b7b9-2ef6957ed444
Status: Started
Snap Volume: no
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: snapshot13.lab.eng.blr.redhat.com:/rhs/brick3/b3
Brick2: snapshot14.lab.eng.blr.redhat.com:/rhs/brick3/b3
Brick3: snapshot15.lab.eng.blr.redhat.com:/rhs/brick3/b3
Brick4: snapshot16.lab.eng.blr.redhat.com:/rhs/brick3/b3
Options Reconfigured:
features.snapshot-directory: -4
features.barrier: disable
features.uss: enable
performance.readdir-ahead: on
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256


[root@dhcp-0-97 vol2_fuse]# cd -4
bash: cd: -4: invalid option
cd: usage: cd [-L|-P] [dir]


Setting '..' also works, but accessing it from the client fails, since on Linux '..' refers to the parent directory:

[root@snapshot13 ~]# gluster v set vol2 snapshot-directory ..
volume set: success
[root@snapshot13 ~]# gluster v i vol2
 
Volume Name: vol2
Type: Distributed-Replicate
Volume ID: d4881929-339c-4493-b7b9-2ef6957ed444
Status: Started
Snap Volume: no
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: snapshot13.lab.eng.blr.redhat.com:/rhs/brick3/b3
Brick2: snapshot14.lab.eng.blr.redhat.com:/rhs/brick3/b3
Brick3: snapshot15.lab.eng.blr.redhat.com:/rhs/brick3/b3
Brick4: snapshot16.lab.eng.blr.redhat.com:/rhs/brick3/b3
Options Reconfigured:
features.snapshot-directory: ..
features.barrier: disable
features.uss: enable
performance.readdir-ahead: on
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256

[root@dhcp-0-97 ,]# cd /mnt/vol2_fuse/
[root@dhcp-0-97 vol2_fuse]# cd ..
[root@dhcp-0-97 mnt]#
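
For the '..' case specifically, the failure is ordinary path resolution rather than anything volume-specific: '..' always names the parent directory (stepping out across the mount boundary at the volume root), so a snapshot directory literally called '..' can never be reached. A small stand-alone check, illustrative only; the mount point /mnt/vol2_fuse is taken from the session above:

/* Illustrative only -- not GlusterFS code. Shows that "/mnt/vol2_fuse/.."
 * resolves to /mnt itself, i.e. ".." never names an entry inside the volume. */
#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    struct stat parent, dotdot;

    if (stat("/mnt", &parent) != 0 || stat("/mnt/vol2_fuse/..", &dotdot) != 0) {
        perror("stat");
        return 1;
    }

    if (parent.st_dev == dotdot.st_dev && parent.st_ino == dotdot.st_ino)
        printf("'..' resolved to the parent directory /mnt\n");
    else
        printf("unexpected: '..' did not resolve to /mnt\n");

    return 0;
}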

Comment 3 Vijaikumar Mallikarjuna 2016-02-03 09:05:55 UTC
This issue will be fixed in 3.1.

