Bug 1212842 - tar on a glusterfs mount displays "file changed as we read it" even though the file was not changed
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: replicate
Version: 3.7.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: qe_tracker_everglades 1223757 1235216
 
Reported: 2015-04-17 13:18 UTC by Shruti Sampat
Modified: 2016-01-11 05:09 UTC (History)
6 users

Fixed In Version: glusterfs-3.7.3
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1223757 1297280 (view as bug list)
Environment:
Last Closed: 2015-07-30 09:47:25 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Shruti Sampat 2015-04-17 13:18:08 UTC
Description of problem:
-----------------------

tar command on the glusterfs mount point is seen to report the following error on some files -

tar: ./linux-3.19.4/tools/perf/scripts/perl: file changed as we read it

There were no changes made to the files while the tar command was in progress. The issue is reproducible on a 1x2 replicate volume, but could not be reproduced on a pure distribute volume.
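
For reference, tar raises this warning when a file's status changes between the start and the end of reading it. A rough, illustrative approximation of that check with GNU stat (using the directory from the error message above as an example path) is:

# C1=$(stat -c %Z ./linux-3.19.4/tools/perf/scripts/perl)
# find ./linux-3.19.4/tools/perf/scripts/perl -type f -exec cat {} + > /dev/null
# C2=$(stat -c %Z ./linux-3.19.4/tools/perf/scripts/perl)
# [ "$C1" = "$C2" ] || echo "ctime changed from $C1 to $C2 while reading"

If the ctime reported by the mount moves even though nothing wrote to the tree, tar complains exactly as above.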

See below for volume information -

# gluster v i test
 
Volume Name: test
Type: Replicate
Volume ID: f844808b-0d2e-40ce-9331-eddfb2b72855
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: vm5-rhsqa13:/rhs/brick5/b1
Brick2: vm6-rhsqa13:/rhs/brick5/b1
Options Reconfigured:
cluster.consistent-metadata: on
features.quota: on
features.uss: enable
client.event-threads: 4
server.event-threads: 5

Version-Release number of selected component (if applicable):
--------------------------------------------------------------

glusterfs-3.7dev-0.965.git2788ddd.el6.x86_64

How reproducible:
------------------

Tried a couple of times, on 6x3 and 1x2 volumes. Reproduced successfully.


Steps to Reproduce:
--------------------

1. Create a replicate or distributed-replicate volume and set the option 'cluster.consistent-metadata' to on. Start the volume. (A consolidated command sketch follows these steps.)
2. Fuse mount the volume.
3. Download linux kernel and untar it on the mount point.
4. Compile the kernel and try to create a tar.
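
A minimal command sketch covering steps 1-3 and the final tar from step 4 (the mount point is a placeholder, host names and brick paths are taken from the volume info above, and kernel compilation is omitted for brevity):

# gluster volume create test replica 2 vm5-rhsqa13:/rhs/brick5/b1 vm6-rhsqa13:/rhs/brick5/b1
# gluster volume set test cluster.consistent-metadata on
# gluster volume start test
# mount -t glusterfs vm5-rhsqa13:/test /mnt/test
# cd /mnt/test
# wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.19.4.tar.xz
# tar xf linux-3.19.4.tar.xz
# tar cf linux.tar linux-3.19.4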

Actual results:
----------------

While the tar command is running, the above described messages are seen for many files.

Expected results:
------------------

tar is expected to run successfully.

Additional info:

Comment 2 Krutika Dhananjay 2015-04-20 09:44:11 UTC
Tried it about 9 times and was not able to hit the issue once.

Saw that your volume had quota enabled and tried the test again, this time with quota on. The bug is easily reproducible with quota enabled. The problem is unrelated to AFR. It could possibly be due to asynchronous updates to quota xattrs, which happen in the background, leading to ctime modification of the file/directory while tar is in progress. Could you please verify the same and change the component to quota?
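
One way to verify this (volume name, mount point and paths below are placeholders) would be to watch a ctime on an otherwise idle tree, and to repeat the run with quota disabled:

# stat -c '%Z %n' /mnt/test/linux-3.19.4/tools/perf/scripts/perl
(repeat the stat after a few seconds with no writers; a moving ctime would point at background xattr updates)
# gluster volume quota test disable
# tar cf /mnt/test/linux.tar -C /mnt/test linux-3.19.4 2> /tmp/tar.log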

Comment 3 Shruti Sampat 2015-04-21 16:48:12 UTC
(In reply to Krutika Dhananjay from comment #2)
> Tried it about 9 times and was not able to hit the issue once.
> 
> Saw that your volume had quota enabled and tried the test again, this time
> with quota on. The bug is easily reproducible with quota enabled. The problem
> is unrelated to AFR. It could possibly be due to asynchronous updates to
> quota xattrs which happen in the background, leading to ctime modification
> of the file/directory while tar is in progress. Could you please verify the
> same and change the component to quota?

I was able to reproduce the issue on a 1x2 replicate volume even with quota turned off. Tried 10 times and was able to reproduce the issue 5 times. My volume looks like this -

# gluster volume info rep
 
Volume Name: rep
Type: Replicate
Volume ID: 1e93f0f2-47d6-440b-86d6-3763bf8f97fc
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.70.37.126:/rhs/brick2/b1
Brick2: 10.70.37.123:/rhs/brick2/b1
Options Reconfigured:
features.uss: enable
client.event-threads: 4
server.event-threads: 5
cluster.consistent-metadata: on

Steps I performed -

1. Created the above volume with the volume options as seen above and started it.

2. Mounted the volume using fuse.

3. On the mount point, downloaded the Linux kernel, then untarred it and tarred it again -

# wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.19.4.tar.xz; tar xf linux-3.19.4.tar.xz; for i in {1..10}; do tar cf linux$i.tar linux-3.19.4 2> tar$i.log; done

Out of 10 times, I saw this issue 5 times.
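
For the record, a quick way to tally the failing runs, and the most frequently reported paths, from the logs generated by the loop above:

# grep -l 'file changed as we read it' tar*.log | wc -l
# grep -h 'file changed as we read it' tar*.log | sort | uniq -c | sort -rn | head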

The issue is seen on both files and directories.

Comment 4 Anand Avati 2015-06-25 02:34:38 UTC
REVIEW: http://review.gluster.org/11391 (cluster/afr: Pick gfid from poststat during fresh lookup for read child calculation) posted (#1) for review on release-3.7 by Krutika Dhananjay (kdhananj)

Comment 5 Anand Avati 2015-06-26 02:36:43 UTC
COMMIT: http://review.gluster.org/11391 committed in release-3.7 by Pranith Kumar Karampuri (pkarampu) 
------
commit 7a99dacc0eb5c31ba2d95615f4fe787c03a311df
Author: Krutika Dhananjay <kdhananj>
Date:   Wed Jun 24 08:02:51 2015 +0530

    cluster/afr: Pick gfid from poststat during fresh lookup for read child calculation
    
            Backport of: http://review.gluster.org/11373
    
    Change-Id: I3ddc70cb0e7dbd1ef8adb352393b5ec16464fc94
    BUG: 1212842
    Signed-off-by: Krutika Dhananjay <kdhananj>
    Reviewed-on: http://review.gluster.org/11391
    Tested-by: NetBSD Build System <jenkins.org>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>

Comment 6 G Kuri 2015-07-21 15:27:21 UTC
I'm running a snapshot of glusterfs which incorporates this patch, but the problem still appears to be there. Here's a snippet of tar output ...

<snip>

./dev-haskell/wai-app-static/
./dev-haskell/wai-app-static/wai-app-static-3.0.0.3.ebuild
./dev-haskell/wai-app-static/wai-app-static-3.0.1.ebuild
./dev-haskell/wai-app-static/wai-app-static-2.0.0.1.ebuild
./dev-haskell/wai-app-static/wai-app-static-3.0.0.5.ebuild
./dev-haskell/wai-app-static/metadata.xml
./dev-haskell/wai-app-static/wai-app-static-1.3.2.1.ebuild
./dev-haskell/wai-app-static/wai-app-static-2.0.0.3.ebuild
./dev-haskell/wai-app-static/Manifest
./dev-haskell/wai-app-static/ChangeLog
./dev-haskell/wai-app-static/wai-app-static-3.0.0.ebuild
tar: ./dev-haskell/wai-app-static: file changed as we read it
./dev-haskell/tensor/
./dev-haskell/tensor/tensor-1.0.0.1.ebuild
./dev-haskell/tensor/files/
./dev-haskell/tensor/files/tensor-1.0.0.1-ghc-7.8.patch
./dev-haskell/tensor/metadata.xml
./dev-haskell/tensor/Manifest
./dev-haskell/tensor/ChangeLog
./dev-haskell/yesod-core/
./dev-haskell/yesod-core/yesod-core-1.2.7.ebuild
./dev-haskell/yesod-core/yesod-core-1.4.9.1.ebuild
./dev-haskell/yesod-core/yesod-core-1.4.7.1.ebuild
./dev-haskell/yesod-core/metadata.xml
./dev-haskell/yesod-core/Manifest
./dev-haskell/yesod-core/ChangeLog
./dev-haskell/tagstream-cond

<snip>

Comment 7 Kaushal 2015-07-30 09:47:25 UTC
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.7.3, please open a new bug report.

glusterfs-3.7.3 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/12078
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

