Bug 1304351 - FUSE mount doesn't list any data after snap-dir change and uss enable.
Status: NEW
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: snapshot
Version: 3.1
Hardware: All
OS: Linux
Priority: unspecified
Severity: high
Assigned To: rjoseph
QA Contact: storage-qa-internal@redhat.com
Keywords: Reopened, ZStream
Depends On:
Blocks:
Reported: 2016-02-03 06:57 EST by Shashank Raj
Modified: 2017-03-25 12:26 EDT
CC: 3 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-02-29 03:36:03 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Shashank Raj 2016-02-03 06:57:15 EST
Description of problem:
FUSE mount doesn't list any data after snap-dir change and uss enable.

Version-Release number of selected component (if applicable):
glusterfs-3.7.5-18

How reproducible:
Always

Steps to Reproduce:
1. Create a 4-node cluster.
2. Create and start a tiered volume and enable quota on the volume.
3. FUSE mount the volume on a client:

10.70.35.228:tiervolume   187660800  304384 187356416   1% /mnt/glusterfs
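Steps 1-3 can be sketched roughly as below; only the volume name tiervolume and the mount source appear in the report, so the node names, brick paths, and replica count are assumptions for illustration, and the commands require a live gluster 3.7 cluster:

```shell
# Form the 4-node cluster (node names are hypothetical).
gluster peer probe node2 && gluster peer probe node3 && gluster peer probe node4

# Create and start the base (cold) volume, then attach a hot tier
# (attach-tier is the gluster 3.7-era command; brick paths assumed).
gluster volume create tiervolume replica 2 \
    node{1..4}:/bricks/cold/brick1 force
gluster volume start tiervolume
gluster volume attach-tier tiervolume replica 2 \
    node{1..4}:/bricks/hot/brick1

# Enable quota on the volume.
gluster volume quota tiervolume enable

# FUSE mount on the client (step 3).
mount -t glusterfs 10.70.35.228:/tiervolume /mnt/glusterfs
```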

4. Create data from the FUSE mount (cp -rf /etc .):

[root@dhcp35-63 glusterfs]# ls
etc


5. On the mount point, create a .snaps directory and create some files and directories under it:

[root@dhcp35-63 glusterfs]# ls -a
.  ..  etc  .snaps  .trashcan

[root@dhcp35-63 .snaps]# ls
r1  r2  r3  r4  r5  raj1  raj2  raj3  raj4  raj5

6. Create multiple snapshots of the volume (snap1 and snap2) and activate them.
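Step 6 would look like the following, using the snapshot names from the report; the no-timestamp option is an assumption to keep the names exactly snap1 and snap2:

```shell
# Create two snapshots of the volume and activate both.
gluster snapshot create snap1 tiervolume no-timestamp
gluster snapshot create snap2 tiervolume no-timestamp
gluster snapshot activate snap1
gluster snapshot activate snap2
```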
7. Bring down node2 (power off) and bring glusterd down on node4.
8. Do an ls on the mount point:

[root@dhcp35-63 glusterfs]# ls -a
.  ..  etc  .snaps  .trashcan

9. Change the snapshot directory for USS from .snaps to shashank using:
gluster volume set vol_test snapshot-directory shashank

10. Do an ls on the mount point:

[root@dhcp35-63 glusterfs]# ls -a
.  ..  etc  .snaps  .trashcan


11. Enable USS for the volume.
12. Do an ls and observe that it lists nothing. As soon as USS is enabled, the data under the mount point is no longer visible:

[root@dhcp35-63 glusterfs]# ls
[root@dhcp35-63 glusterfs]# ls
[root@dhcp35-63 glusterfs]# ls -a
[root@dhcp35-63 glusterfs]# ls -lrt
total 0
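Step 11 presumably uses the standard features.uss volume option; the volume name here is assumed from the df output in step 3 (the report itself shows vol_test in the step 9 command):

```shell
# Enable User Serviceable Snapshots on the volume.
gluster volume set tiervolume features.uss enable
```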

Actual results:
Previously created data is no longer visible once the snapshot directory is changed and USS is enabled on a tiered volume.

Expected results:
Data created on the mount point should remain visible after the snapshot directory is changed and USS is enabled.

Additional info:
Once glusterd is started again on node4, the previously created data becomes visible.
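The recovery noted above would amount to restarting the glusterd service on node4, assuming a systemd-based host:

```shell
# On node4; per the report, this restores visibility of the data.
systemctl start glusterd
```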
Comment 2 Avra Sengupta 2016-02-29 03:25:02 EST
Removing devel_ack, as this is not priority right now.
Comment 3 RHEL Product and Program Management 2016-02-29 03:36:03 EST
Development Management has reviewed and declined this request.
You may appeal this decision by reopening this request.
