Bug 1709301 - ctime changes: tar still complains file changed as we read it if uss is enabled
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: core
Version: rhgs-3.5
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: ---
Target Release: RHGS 3.5.0
Assignee: Kotresh HR
QA Contact: Nag Pavan Chilakam
URL:
Whiteboard:
Depends On:
Blocks: 1696809 1720290 1721783
 
Reported: 2019-05-13 11:16 UTC by Nag Pavan Chilakam
Modified: 2023-09-14 05:28 UTC
CC: 8 users

Fixed In Version: glusterfs-6.0-7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1720290
Environment:
Last Closed: 2019-10-30 12:21:27 UTC
Embargoed:
khiremat: needinfo-




Links:
Red Hat Product Errata RHEA-2019:3249 (last updated 2019-10-30 12:21:46 UTC)

Description Nag Pavan Chilakam 2019-05-13 11:16:57 UTC
Description of problem:
==================
As part of verifying bug 1298724 ("After an update glusterfs to version 3.1 (glusterfs version glusterfs-3.7.1-11.el7rhgs.x86_64), customer still encounter tar: <fileName>: file changed as we read it"),
I still see tar complaining "file changed as we read it" when compressing a directory of files into a tarball.


Also refer to https://bugzilla.redhat.com/show_bug.cgi?id=1298724#c95.

As per https://bugzilla.redhat.com/show_bug.cgi?id=1298724#c96, this seems to be due to the fact that I have uss enabled.


test version:
6.0.2


How reproducible:
==================
always on my testbed

Steps to Reproduce:
===============
1. Created a 4x3 volume.
2. Enabled uss and quota.
3. Mounted the volume on 4 clients, untarred a kernel image, and then created a tarball of it again.

When tarring the files back up, I see the above issue consistently.




Volume Name: nfnas
Type: Distributed-Replicate
Volume ID: 61b5239a-e275-4a1a-b02e-65625c4dc3fd
Status: Started
Snapshot Count: 0
Number of Bricks: 4 x 3 = 12
Transport-type: tcp
Bricks:
Brick1: rhs-gp-srv7.lab.eng.blr.redhat.com:/gluster/brick1/nfnas
Brick2: rhs-gp-srv8.lab.eng.blr.redhat.com:/gluster/brick1/nfnas
Brick3: rhs-gp-srv9.lab.eng.blr.redhat.com:/gluster/brick1/nfnas
Brick4: rhs-gp-srv8.lab.eng.blr.redhat.com:/gluster/brick2/nfnas
Brick5: rhs-gp-srv9.lab.eng.blr.redhat.com:/gluster/brick2/nfnas
Brick6: rhs-gp-srv10.lab.eng.blr.redhat.com:/gluster/brick1/nfnas
Brick7: rhs-gp-srv9.lab.eng.blr.redhat.com:/gluster/brick3/nfnas
Brick8: rhs-gp-srv10.lab.eng.blr.redhat.com:/gluster/brick3/nfnas
Brick9: rhs-gp-srv7.lab.eng.blr.redhat.com:/gluster/brick3/nfnas
Brick10: rhs-gp-srv10.lab.eng.blr.redhat.com:/gluster/brick4/nfnas
Brick11: rhs-gp-srv7.lab.eng.blr.redhat.com:/gluster/brick4/nfnas
Brick12: rhs-gp-srv8.lab.eng.blr.redhat.com:/gluster/brick4/nfnas
Options Reconfigured:
diagnostics.client-log-level: DEBUG
performance.stat-prefetch: on
features.uss: disable
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

Comment 3 Nag Pavan Chilakam 2019-05-13 11:19:08 UTC
(In reply to nchilaka from comment #0)
> [...]
> Options Reconfigured:
> diagnostics.client-log-level: DEBUG
> performance.stat-prefetch: on
> features.uss: disable
> features.quota-deem-statfs: on
> features.inode-quota: on
> features.quota: on
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: off


Note that features.uss was disabled (and performance.stat-prefetch toggled) while debugging the issue, which is why the options above show features.uss: disable.
The issue was hit consistently with features.uss enabled.

Comment 6 Raghavendra Bhat 2019-05-15 18:45:37 UTC
The reason for incrementing the nanosecond part of the ctime is to ensure smoother handling of NFS clients. As mentioned in the comment above, unless the NFS client updates its cache due to a fresh lookup, enabling uss will not allow NFS clients to see the .snaps entry of a directory (depending on whether that directory's entries have already been cached by the NFS client or not). This is because the appearance of .snaps in a directory is tied to enabling uss, so the NFS client would not know that uss has been enabled. Incrementing the ctime of the directory makes the NFS client forget its cached entries for that directory.

But incrementing it is what causes this issue with the ctime feature.

One of the things that I can think of is this (a rough sketch follows at the end of this comment):

1) In snapview-client, for stat, check whether the request is coming from NFS or not (I think frame->root->pid would be set to NFS_PID for requests coming from NFS. This used to be the case for gnfs; not sure whether it is the case with NFS Ganesha or not).

2) In every directory's inode context, make snapview-client store a flag indicating whether it has incremented the ctime or not. The ctime increment would then happen only the first time after uss is enabled; once the flag is set, further stat requests on the directory would not increment the ctime.

This still leaves a narrow window where tar operations on an NFS client fail when run right after enabling uss (or when the ctime check happens on a directory for the first time after enabling uss). But I think that can be treated like the situation where multiple NFS clients have cached the entries of a directory and a new entry is then created from one of the clients.

OR 

We need to find another way to tell the NFS client that the directory's contents have changed (without touching the ctime attribute).

Need more thoughts on this.
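
For illustration only, here is a minimal, self-contained C sketch of the idea in points 1) and 2): bump the ctime nanoseconds at most once per directory, and only for NFS-originated requests. The structs, the FAKE_NFS_PID value, and the helper name are hypothetical stand-ins, not the real snapview-client, libglusterfs, or NFS_PID definitions.

/* Standalone model of the proposed snapview-client behaviour (not actual xlator code). */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define FAKE_NFS_PID (-100)          /* placeholder; the real NFS_PID value is not assumed here */

struct fake_inode_ctx {
    bool ctime_bumped;               /* set after the one-time ctime nsec increment */
};

struct fake_iatt {
    uint32_t ia_ctime_nsec;          /* stand-in for the ctime nanoseconds in the stat reply */
};

/* Bump ctime nsec at most once per directory, and only for NFS-originated stats. */
static void maybe_bump_ctime(struct fake_inode_ctx *ctx, struct fake_iatt *buf, int frame_pid)
{
    if (frame_pid != FAKE_NFS_PID)
        return;                      /* non-NFS clients see the unmodified ctime */
    if (ctx->ctime_bumped)
        return;                      /* this directory's cache was already invalidated once */
    buf->ia_ctime_nsec += 1;         /* force the NFS client to drop its cached entries */
    ctx->ctime_bumped = true;
}

int main(void)
{
    struct fake_inode_ctx ctx = { .ctime_bumped = false };
    struct fake_iatt buf = { .ia_ctime_nsec = 42 };

    maybe_bump_ctime(&ctx, &buf, FAKE_NFS_PID);   /* first NFS stat: nsec becomes 43 */
    maybe_bump_ctime(&ctx, &buf, FAKE_NFS_PID);   /* later NFS stats: unchanged */
    printf("ctime nsec after two NFS stats: %u\n", (unsigned)buf.ia_ctime_nsec);
    return 0;
}

As noted above, this only narrows the problem to the first stat after enabling uss; it does not remove the window entirely.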

Comment 9 Jiffin 2019-06-04 09:24:05 UTC
(In reply to Raghavendra Bhat from comment #6)
> The reason for incrementing the nano second of the ctime is to ensure
> smoother handling of NFS clients. Like mentioned in the comment above,
> unless NFS client updates its cache due to fresh lookup, enabling uss will
> not allow NFS clients to see .snaps entry from a directory (depending upon
> whether that directory's entries have been already cached by NFS client or
> not). This is because, the appearance of .snaps in a directory is related to
> enabling of uss. So, NFS client would not know that uss has been enabled.
> Incrementing ctime of the directory makes NFS client forget its cache of
> entries inside the directory.

I am trying to understand the issue: for root (or any other directory), when a stat is performed, is buf->ia_ctime_nsec incremented to invalidate the client-side cache
so that the client performs a lookup again on that directory? Or is it done only for the .snaps directory?

> 
> But, incrementing it is causing this issue with ctime feature.
> 
> One of the things that I can think of is this.
> 
> 1) In snapview-client for stat, check whether the request is coming from NFS
> or not (I think frame->root->pid would be set to NFS_PID for requests coming
> from NFS. This used to be the case for gnfs. Not sure if it is the case with
> NFS Ganesha or not)
> 

NFS_PID is not set for nfs-ganesha.

> 2) Inside every's directory's inode context, make snapview-client store a
> flag as to whether it has incremented the ctime or not. So, this ctime
> increment would happen only the first time when uss is enabled. Then the
> flag would be set. And further stat requests on directory would not make
> ctime increment.
> 
> This still has a narrow window of tar operations failing on NFS client when
> run right after enabling uss (or ctime check happening on a directory for
> the first time after enabling uss). But I think this can be treated as a
> situation where there are multiple NFS clients which have cached the entries
> of a directory and then from one of the clients, a new entry is created. 
> 
> OR 
> 
> We need to find another way to tell NFS client that the directory's contents
> have changed (without touching ctime attribute)
> 
> Need more thoughts on this.

Instead of incrementing the ctime attribute, IMO we can set ent->name_attributes.attributes_follow = FALSE.
Please refer to https://review.gluster.org/#/c/glusterfs/+/12989/4/xlators/nfs/server/src/nfs3-helpers.c,
in which a similar change was made for 'T' files.
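
To illustrate the effect, here is a minimal, self-contained C sketch. The structs below are simplified stand-ins for the NFSv3 readdirplus entry and post_op_attr; only the attributes_follow field named above is taken from the comment, everything else is illustrative. With attributes_follow set to FALSE, no attributes are returned inline for the entry, so the client has to fetch fresh attributes itself rather than relying on a bumped ctime.

/* Standalone model of returning a readdirplus entry without inline attributes. */
#include <stdbool.h>
#include <stdio.h>

struct post_op_attr {
    bool attributes_follow;          /* FALSE => no attributes returned with the entry */
};

struct entryplus3 {
    const char *name;
    struct post_op_attr name_attributes;
};

/* Fill an entry so the client cannot rely on stale inline attributes. */
static void fill_entry_without_attrs(struct entryplus3 *ent, const char *name)
{
    ent->name = name;
    ent->name_attributes.attributes_follow = false;
}

int main(void)
{
    struct entryplus3 ent;
    fill_entry_without_attrs(&ent, "some-file");
    printf("%s: attributes_follow=%d\n", ent.name, (int)ent.name_attributes.attributes_follow);
    return 0;
}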

Comment 26 errata-xmlrpc 2019-10-30 12:21:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:3249

Comment 27 Red Hat Bugzilla 2023-09-14 05:28:26 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days

