Bug 1304351

Summary: FUSE mount doesn't list any data after snap-dir change and uss enable.
Product: Red Hat Gluster Storage
Reporter: Shashank Raj <sraj>
Component: snapshot
Assignee: Bug Updates Notification Mailing List <rhs-bugs>
Status: CLOSED WONTFIX
QA Contact: storage-qa-internal <storage-qa-internal>
Severity: high
Docs Contact:
Priority: unspecified
Version: rhgs-3.1
CC: mzywusko, rhs-bugs
Target Milestone: ---
Keywords: Reopened, ZStream
Target Release: ---
Hardware: All
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-04-16 16:04:21 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:

Description Shashank Raj 2016-02-03 11:57:15 UTC
Description of problem:
FUSE mount doesn't list any data after snap-dir change and uss enable.

Version-Release number of selected component (if applicable):
glusterfs-3.7.5-18

How reproducible:
Always

Steps to Reproduce:
1. Create 4 node cluster
2. Create and start a tiered volume and enable quota on the volume.
3. FUSE mount the volume on a client

10.70.35.228:tiervolume   187660800  304384 187356416   1% /mnt/glusterfs

4. Create data from the FUSE mount (cp -rf /etc .)

[root@dhcp35-63 glusterfs]# ls
etc


5. On the mount point, create a .snaps directory and create some files and directories under .snaps

[root@dhcp35-63 glusterfs]# ls -a
.  ..  etc  .snaps  .trashcan

[root@dhcp35-63 .snaps]# ls
r1  r2  r3  r4  r5  raj1  raj2  raj3  raj4  raj5

6. Create multiple snapshots of the volume (snap1 and snap2) and activate them
7. Bring down node2 (poweroff) and bring glusterd down on node4
8. Do an ls on the mount point

[root@dhcp35-63 glusterfs]# ls -a
.  ..  etc  .snaps  .trashcan

9. Change the working directory for USS from .snaps to shashank using
gluster volume set vol_test snapshot-directory shashank

10. Do an ls on the mount point

[root@dhcp35-63 glusterfs]# ls -a
.  ..  etc  .snaps  .trashcan


11. Enable the USS for volume.
12. Do an ls and observe that it shows nothing. As soon as USS is enabled, the data is no longer visible under the mount point.

[root@dhcp35-63 glusterfs]# ls
[root@dhcp35-63 glusterfs]# ls
[root@dhcp35-63 glusterfs]# ls -a
[root@dhcp35-63 glusterfs]# ls -lrt
total 0
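The steps above correspond roughly to the following command sequence (a sketch only; the node names, brick paths, and no-timestamp option are assumptions, and attach-tier uses the glusterfs-3.7 syntax):

# gluster volume create vol_test node1:/bricks/cold1 node2:/bricks/cold2 node3:/bricks/cold3 node4:/bricks/cold4
# gluster volume start vol_test
# gluster volume attach-tier vol_test node1:/bricks/hot1 node2:/bricks/hot2
# gluster volume quota vol_test enable
# mount -t glusterfs 10.70.35.228:/vol_test /mnt/glusterfs
# gluster snapshot create snap1 vol_test no-timestamp
# gluster snapshot create snap2 vol_test no-timestamp
# gluster snapshot activate snap1
# gluster snapshot activate snap2
# gluster volume set vol_test snapshot-directory shashank
# gluster volume set vol_test features.uss enable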

Actual results:
Previously created data is no longer visible once the snapshot directory is changed and USS is enabled on a tiered volume.

Expected results:
The FUSE mount should continue to list the previously created data after the snapshot directory is changed and USS is enabled.

Additional info:
Once glusterd is started again on node4, the previously created data becomes visible again.

Comment 2 Avra Sengupta 2016-02-29 08:25:02 UTC
Removing devel_ack, as this is not priority right now.

Comment 3 RHEL Program Management 2016-02-29 08:36:03 UTC
Development Management has reviewed and declined this request.
You may appeal this decision by reopening this request.