Bug 1303298 - While browsing a snapshot using USS, files are displayed as link files
Status: MODIFIED
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: tier
Version: 3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assigned To: Mohammed Rafi KC
QA Contact: nchilaka
Whiteboard: tier-interops
Keywords: ZStream
Depends On:
Blocks: 1268895
Reported: 2016-01-30 07:24 EST by RajeshReddy
Modified: 2017-06-28 05:07 EDT
CC: 12 users

See Also:
Fixed In Version:
Doc Type: Known Issue
Doc Text:
When a readdirp call is performed through USS (User Serviceable Snapshots) to list the entries of a snapshot of a tiered volume, USS returns the wrong stat for files in the cold tier. This results in incorrect permissions being shown at the mount point, and files appear to have '-----T' permissions. Workaround: FUSE clients can work around this issue by remounting the volume with any of the following mount options: use-readdirp=no (recommended), attribute-timeout=0, or entry-timeout=0. NFS clients can work around the issue by remounting the volume with the noac option.
Story Points: ---
Clone Of:
Environment:
Last Closed:
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description RajeshReddy 2016-01-30 07:24:08 EST
Description of problem:
================
While browsing a snapshot using USS, files are displayed as link files.

Version-Release number of selected component (if applicable):
=====================
glusterfs-server-3.7.5-18.e

How reproducible:


Steps to Reproduce:
==============
1. Create a 16x2 volume, attach a 4x2 hot tier, and enable USS and quota.
2. Mount the volume on a client using FUSE.
3. While I/O is in progress, take a snapshot.
4. From the FUSE mount, go to .snaps/<snap> and list the contents; all files are displayed as link files (-----T).
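The steps above can be sketched with the gluster CLI. The volume name, brick paths, and hostnames (myvol, server1/server2, /bricks/...) are placeholders, and the snapshot name gluster assigns may carry a timestamp suffix; this is a sketch of the command shapes on a live cluster, not a verified script.

```shell
# 1. Create a 16x2 distributed-replicate volume (32 bricks; consecutive
#    bricks form replica pairs), attach a 4x2 hot tier, enable USS and quota.
gluster volume create myvol replica 2 \
    server1:/bricks/b{1..16} server2:/bricks/b{1..16}
gluster volume start myvol
gluster volume tier myvol attach replica 2 \
    server1:/bricks/hot{1..4} server2:/bricks/hot{1..4}
gluster volume set myvol features.uss enable
gluster volume quota myvol enable

# 2. Mount on a client over FUSE.
mount -t glusterfs server1:/myvol /mnt/myvol

# 3. While I/O is running, take a snapshot and activate it.
gluster snapshot create snap1 myvol
gluster snapshot activate snap1

# 4. List the snapshot contents through USS.
ls -l /mnt/myvol/.snaps/
```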

Actual results:
==========
Because of this, the file contents cannot be read.

Expected results:


Additional info:
==============
IP of the machine is : 10.70.35.153
Comment 1 Bhaskarakiran 2016-01-30 10:03:35 EST
On NFS mount, listing files in .snaps directory throws 'stale file handle' errors.
Below is a snippet of ec_tier_snap1-3f4f1ddb-8a04-4ef0-b6e1-322b016ce7b7.log:

[2016-01-30 15:06:41.875879] E [MSGID: 109034] [dht-common.c:1122:dht_lookup_unlink_of_false_linkto_cbk] 0-4469a385507e4344b9e4e4ab8992d27e-tier-dht: Could not unlink the linkto file as either fd is open and/or linkto xattr is set for /client2/renames/dir.1/newfile.349 [Device or resource busy]
[2016-01-30 15:06:41.878304] I [MSGID: 109045] [dht-common.c:2124:dht_lookup_cbk] 0-4469a385507e4344b9e4e4ab8992d27e-tier-dht: linkfile not having link subvol for /client2/renames/dir.1/newfile.349
[2016-01-30 15:06:42.110088] I [MSGID: 109069] [dht-common.c:1095:dht_lookup_unlink_of_false_linkto_cbk] 0-4469a385507e4344b9e4e4ab8992d27e-tier-dht: lookup_unlink returned with op_ret -> 0 and op-errno -> 117 for /client2/renames/dir.1/newfile.349
Comment 3 Mohammed Rafi KC 2016-02-01 05:03:44 EST
RCA:

For a tiered volume, we send readdirp only to the cold tier as a performance improvement: since the cold tier is the default hashed subvolume, every file, or its corresponding linkfile, should be present there. For files that reside on the hot tier, the cold tier holds only a linkfile, so a readdirp from the cold tier will not return proper attributes for them. In the normal client stack, the tier xlator sets inode=null for such entries to force a lookup.

In the snapshot world there is no tier layer in the client graph, so the inode is passed to the FUSE kernel as the nodeid; as a result, such entries are shown with only the T (sticky) bit as their permissions.
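The T-bit rendering described above can be demonstrated locally without Gluster: a mode with no rwx bits and only the sticky bit set (01000, the mode DHT assigns to linkto files) is exactly what ls and stat render with a trailing capital T. A minimal sketch:

```shell
# Create a file whose mode is only the sticky bit (01000), the same mode
# DHT uses for linkto files; stat then renders the permission string with
# no rwx bits and a capital T in the final position.
tmpdir=$(mktemp -d)
touch "$tmpdir/linkto_demo"
chmod 1000 "$tmpdir/linkto_demo"
stat -c '%A' "$tmpdir/linkto_demo"    # prints: ---------T
```

The '-----T' in the bug title is an abbreviated form of this same rendering: when the client trusts the cold tier's readdirp stat instead of doing a lookup, the linkto file's sticky-only mode is what reaches the mount point.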
Comment 5 Mohammed Rafi KC 2016-02-05 07:05:06 EST
During a readdirp call on a tiered volume, listing the entries of a snapshot volume through USS gives the wrong stat for files in the cold tier, which results in wrong permissions being printed at the mount point.

To work around this issue, use one of the following options when mounting.

For a FUSE mount (any one of these):
1) use-readdirp=no (recommended)
2) attribute-timeout=0
3) entry-timeout=0

For NFS:
1) noac
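The workaround options above map onto mount commands as follows; the server, volume, and mount-point names (server1, myvol, /mnt/myvol) are placeholders.

```shell
# FUSE: remount with any one of the workaround options.
mount -t glusterfs -o use-readdirp=no server1:/myvol /mnt/myvol    # recommended
mount -t glusterfs -o attribute-timeout=0 server1:/myvol /mnt/myvol
mount -t glusterfs -o entry-timeout=0 server1:/myvol /mnt/myvol

# NFS: remount with attribute caching disabled.
mount -t nfs -o vers=3,noac server1:/myvol /mnt/myvol
```

use-readdirp=no avoids the bad stat at the source by not using readdirp at all; the timeout/noac variants instead stop the client from caching the bad stat, forcing a fresh lookup.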
Comment 7 Mohammed Rafi KC 2016-02-08 00:33:37 EST
Rajesh,

Can you please verify the doc text provided?
Comment 8 Vivek Agarwal 2016-02-08 02:08:13 EST
Avra, Can you help verify the doc text?
Comment 13 Avra Sengupta 2016-02-29 08:35:24 EST
From the initial analysis, it seems that ideally the fix should be placed in libgfapi, because the same issue can happen on other components like nfs-ganesha and samba.
Comment 14 Niels de Vos 2016-03-07 01:52:56 EST
(In reply to Avra Sengupta from comment #13)
> From the initial analysis, it seems that ideally the fix should be placed in
> libgfapi, because the same issue can happen on other components like
> nfs-ganesha and samba.

Could you explain more about the envisioned fix and how it would be done in gfapi? Is it something that needs to be handled in FUSE and Gluster/NFS too?
Comment 16 Avra Sengupta 2016-03-10 04:21:12 EST
Rafi conducted the initial RCA.

Rafi, Could you please explain the fix.
Comment 17 Mohammed Rafi KC 2016-03-10 05:59:34 EST
The ideal fix would be to do a stat/lookup call from the gfapi layer for entries that have a NULL gfid in readdirp_cbk. The result of the stat call would then be packed into the readdirp result and sent to the application. This would be an intrusive fix and would have a performance impact, but only on tiered volumes.

In the case of FUSE, the FUSE client will do a lookup for entries that have a NULL nodeid in the readdirp result. For GNFS, we invalidate such entries before returning the readdirp output to the NFS client, i.e. the client is forced to do a lookup for them.

RCA is given in comment 3.
Comment 19 Dan Lambright 2016-06-07 09:15:43 EDT
In our bug triage it was mentioned that this has been fixed and can be closed. Rafi, can you confirm?
Comment 21 Mohammed Rafi KC 2016-07-25 05:28:10 EDT
Yes, it has been fixed along with bug 1322247.

patch details
master : http://review.gluster.org/#/c/14079/

This fix would be available in 3.2 as part of the rebase.
Comment 23 Milind Changire 2017-01-18 05:22:39 EST
Moving to MODIFIED.
Patch available downstream as commit 9423bde.
