Bug 1212842
Summary: tar on a glusterfs mount displays "file changed as we read it" even though the file was not changed

| Field | Value | Field | Value |
| --- | --- | --- | --- |
| Product | [Community] GlusterFS | Reporter | Shruti Sampat <ssampat> |
| Component | replicate | Assignee | bugs <bugs> |
| Status | CLOSED CURRENTRELEASE | QA Contact | |
| Severity | high | Docs Contact | |
| Priority | unspecified | | |
| Version | 3.7.0 | CC | bugs, gkuri, gluster-bugs, kdhananj, ssampat, wuyl |
| Target Milestone | --- | Keywords | Triaged |
| Target Release | --- | | |
| Hardware | Unspecified | | |
| OS | Unspecified | | |
| Whiteboard | | | |
| Fixed In Version | glusterfs-3.7.3 | Doc Type | Bug Fix |
| Doc Text | | Story Points | --- |
| Clone Of | | | |
| | 1223757 1297280 (view as bug list) | Environment | |
| Last Closed | 2015-07-30 09:47:25 UTC | Type | Bug |
| Regression | --- | Mount Type | --- |
| Documentation | --- | CRM | |
| Verified Versions | | Category | --- |
| oVirt Team | --- | RHEL 7.3 requirements from Atomic Host | |
| Cloudforms Team | --- | Target Upstream Version | |
| Embargoed | | | |
| Bug Depends On | | | |
| Bug Blocks | 1186580, 1223757, 1235216 | | |
Description
Shruti Sampat
2015-04-17 13:18:08 UTC
Krutika Dhananjay (comment #2):

Tried it about 9 times and was not able to hit the issue once.

Saw that your volume had quota enabled and tried the test again, this time with quota on. The bug is easily reproducible with quota enabled. The problem is unrelated to AFR. It could possibly be due to asynchronous updates to quota xattrs which happen in the background, leading to ctime modification of the file/directory while tar is in progress. Could you please verify the same and change the component to quota?

(In reply to Krutika Dhananjay from comment #2)

I was able to reproduce the issue on a 1x2 replicate volume even with quota turned off. Tried 10 times and was able to reproduce the issue 5 times. My volume looks like this:

# gluster volume info rep

Volume Name: rep
Type: Replicate
Volume ID: 1e93f0f2-47d6-440b-86d6-3763bf8f97fc
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.70.37.126:/rhs/brick2/b1
Brick2: 10.70.37.123:/rhs/brick2/b1
Options Reconfigured:
features.uss: enable
client.event-threads: 4
server.event-threads: 5
cluster.consistent-metadata: on

Steps I performed:

1. Created the above volume with the volume options as seen above and started it.
2. Mounted the volume using FUSE.
3. On the mount point, downloaded the Linux kernel, untarred it, and then tarred it again:

# wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.19.4.tar.xz; tar xf linux-3.19.4.tar.xz; for i in {1..10}; do tar cf linux$i.tar linux-3.19.4 2> tar$i.log; done

Out of 10 runs, I saw this issue 5 times. The issue is seen both on files and directories.

REVIEW: http://review.gluster.org/11391 (cluster/afr: Pick gfid from poststat during fresh lookup for read child calculation) posted (#1) for review on release-3.7 by Krutika Dhananjay (kdhananj)

COMMIT: http://review.gluster.org/11391 committed in release-3.7 by Pranith Kumar Karampuri (pkarampu)

------

commit 7a99dacc0eb5c31ba2d95615f4fe787c03a311df
Author: Krutika Dhananjay <kdhananj>
Date: Wed Jun 24 08:02:51 2015 +0530

    cluster/afr: Pick gfid from poststat during fresh lookup for read child calculation

    Backport of: http://review.gluster.org/11373
    Change-Id: I3ddc70cb0e7dbd1ef8adb352393b5ec16464fc94
    BUG: 1212842
    Signed-off-by: Krutika Dhananjay <kdhananj>
    Reviewed-on: http://review.gluster.org/11391
    Tested-by: NetBSD Build System <jenkins.org>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
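For background on why a metadata-only update on the brick is enough to trigger the warning discussed above: tar records a file's stat information when it starts reading it and compares it against a fresh fstat() once the file has been written to the archive; a ctime change alone is reported as "file changed as we read it". The snippet below is a simplified, hypothetical sketch of that kind of check, not GNU tar's actual code, and changed_while_reading is an invented helper name.

```c
/*
 * Hypothetical sketch, NOT GNU tar's actual code: illustrates the kind of
 * before/after metadata comparison that produces the
 * "file changed as we read it" warning. A ctime bump alone is enough, so a
 * background metadata-only update (e.g. an asynchronous quota xattr write
 * on the brick) can trigger it even though the data is unchanged.
 */
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/stat.h>

/* Invented helper: returns 1 if the file's metadata changed while it was
 * being read, 0 if not, -1 on error. */
static int changed_while_reading(const char *path)
{
    struct stat before, after;
    char buf[64 * 1024];
    int fd = open(path, O_RDONLY);

    if (fd < 0 || fstat(fd, &before) < 0) {
        perror(path);
        if (fd >= 0)
            close(fd);
        return -1;
    }

    /* Stand-in for "copy the file's contents into the archive". */
    while (read(fd, buf, sizeof(buf)) > 0)
        ;

    if (fstat(fd, &after) < 0) {
        perror(path);
        close(fd);
        return -1;
    }
    close(fd);

    /* A ctime or size mismatch is reported as a change, even if the
     * bytes that were read are identical. */
    return before.st_ctime != after.st_ctime ||
           before.st_size  != after.st_size;
}

int main(int argc, char **argv)
{
    for (int i = 1; i < argc; i++)
        if (changed_while_reading(argv[i]) == 1)
            fprintf(stderr, "%s: file changed as we read it\n", argv[i]);
    return 0;
}
```

Because ctime moves on any metadata change, background activity on the bricks, such as the asynchronous quota xattr updates suspected in comment #2 or a fresh lookup served with differing metadata from the other replica, can trip this check without the file contents ever changing.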
I'm running a snapshot of glusterfs which incorporates this patch, but the problem still appears to be there. Here's a snippet of tar output:

<snip>
./dev-haskell/wai-app-static/
./dev-haskell/wai-app-static/wai-app-static-3.0.0.3.ebuild
./dev-haskell/wai-app-static/wai-app-static-3.0.1.ebuild
./dev-haskell/wai-app-static/wai-app-static-2.0.0.1.ebuild
./dev-haskell/wai-app-static/wai-app-static-3.0.0.5.ebuild
./dev-haskell/wai-app-static/metadata.xml
./dev-haskell/wai-app-static/wai-app-static-1.3.2.1.ebuild
./dev-haskell/wai-app-static/wai-app-static-2.0.0.3.ebuild
./dev-haskell/wai-app-static/Manifest
./dev-haskell/wai-app-static/ChangeLog
./dev-haskell/wai-app-static/wai-app-static-3.0.0.ebuild
tar: ./dev-haskell/wai-app-static: file changed as we read it
./dev-haskell/tensor/
./dev-haskell/tensor/tensor-1.0.0.1.ebuild
./dev-haskell/tensor/files/
./dev-haskell/tensor/files/tensor-1.0.0.1-ghc-7.8.patch
./dev-haskell/tensor/metadata.xml
./dev-haskell/tensor/Manifest
./dev-haskell/tensor/ChangeLog
./dev-haskell/yesod-core/
./dev-haskell/yesod-core/yesod-core-1.2.7.ebuild
./dev-haskell/yesod-core/yesod-core-1.4.9.1.ebuild
./dev-haskell/yesod-core/yesod-core-1.4.7.1.ebuild
./dev-haskell/yesod-core/metadata.xml
./dev-haskell/yesod-core/Manifest
./dev-haskell/yesod-core/ChangeLog
./dev-haskell/tagstream-cond
<snip>

This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.3, please open a new bug report.

glusterfs-3.7.3 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/12078
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user