Bug 1333292
| Summary: | tar fails with "file changed as we read it" for directory | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | Pranith Kumar K <pkarampu> |
| Component: | distribute | Assignee: | bugs <bugs> |
| Status: | CLOSED UPSTREAM | QA Contact: | |
| Severity: | high | Docs Contact: | |
| Priority: | medium | | |
| Version: | mainline | CC: | bugs, khiremat, rkavunga, spalai |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-03-12 13:01:25 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
As gNFS is no longer supported, I tried to reproduce it on a fuse mount with the latest master:
```shell
#!/bin/bash

. $(dirname $0)/../../include.rc
. $(dirname $0)/../../volume.rc
. $(dirname $0)/../../nfs.rc

TESTS_EXPECTED_IN_LOOP=10

cleanup;

#Basic checks
TEST glusterd
TEST pidof glusterd

#Create a distributed-replicate volume
TEST $CLI volume create $V0 replica 2 $H0:$B0/${V0}{1..6};
TEST $CLI volume set $V0 cluster.consistent-metadata on
#TEST $CLI volume set $V0 cluster.post-op-delay-secs 0
#TEST $CLI volume set $V0 nfs.rdirplus off
TEST $CLI volume set $V0 performance.force-readdirp off
TEST $CLI volume set $V0 dht.force-readdirp off
TEST $CLI volume start $V0

TEST kill_brick $V0 $H0 $B0/${V0}1
TEST kill_brick $V0 $H0 $B0/${V0}3
TEST kill_brick $V0 $H0 $B0/${V0}5

#EXPECT_WITHIN $NFS_EXPORT_TIMEOUT "1" is_nfs_export_available;
# Mount NFS
#mount_nfs $H0:/$V0 $N0 vers=3
TEST $GFS --volfile-id=/$V0 --volfile-server=$H0 --use-readdirp=no $M0;

#Create files
TEST mkdir -p $M0/nfs/dir1/dir2
for i in {1..10}; do
    TEST_IN_LOOP dd if=/dev/urandom of=$M0/nfs/dir1/dir2/file$i bs=1024k count=1
done

TEST tar cf /tmp/dir1.tar.gz $M0/nfs/dir1
TEST rm -f /tmp/dir1.tar.gz

#EXPECT_WITHIN $UMOUNT_TIMEOUT "Y" force_umount $N0
cleanup;
```
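The "file changed as we read it" error comes from GNU tar statting each file before and after archiving it and comparing the results; in this bug, metadata served from different bricks can differ, which trips that check even though the file contents are stable. A minimal local sketch of the detection logic, assuming GNU coreutils `stat` and using a hypothetical temp file with a forced mtime change (GNU tar's actual check also considers ctime and size):

```shell
#!/bin/bash
# Sketch of the stat-before/read/stat-after comparison behind GNU tar's
# "file changed as we read it" warning. The temp file and the forced
# mtime change below are illustrative only.
f=$(mktemp)
echo data > "$f"
before=$(stat -c %Y "$f")      # mtime (epoch seconds) before the read
cat "$f" > /dev/null           # read the file, as tar would
touch -d '2001-01-01' "$f"     # simulate metadata changing mid-archive
after=$(stat -c %Y "$f")
if [ "$before" != "$after" ]; then
    echo "file changed as we read it"
fi
rm -f "$f"
```

On a healthy local filesystem the two stats agree and nothing is printed; the point of the bug is that a GlusterFS mount with inconsistent metadata can make them disagree without any real write.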
I have managed to hit the issue, but on a FUSE mount. Going forward, we will be debugging this on FUSE mounts.
Is it fixed with the ctime feature, which is available since the glusterfs-6.x releases?

Pranith, this looks like a replicate issue. If so, please change the component.

(In reply to Susant Kumar Palai from comment #4)
> Pranith, looks like a replicate issue. If so please change the component.

No, I raised it for distribute.

This bug is moved to https://github.com/gluster/glusterfs/issues/987, and will be tracked there from now on. Visit the GitHub issue URL for further details.

The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days.
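For context on the ctime question above: the feature is a per-volume option in glusterfs-6.x and later. A hedged sketch in the harness style of the scripts in this bug, assuming the standard `features.ctime` option name (not something the reporter ran here):

```shell
# Assumption: features.ctime is the consistent-ctime option introduced in
# glusterfs-6.x. It would be enabled before re-running the tar loop to see
# whether the spurious "file changed as we read it" warnings go away.
TEST $CLI volume set $V0 features.ctime on
```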
Description of problem:
Run the following test on a CentOS machine and you will see the failure:

```shell
#!/bin/bash

. $(dirname $0)/../../include.rc
. $(dirname $0)/../../volume.rc
. $(dirname $0)/../../nfs.rc

TESTS_EXPECTED_IN_LOOP=10

cleanup;

#Basic checks
TEST glusterd
TEST pidof glusterd

#Create a distributed-replicate volume
TEST $CLI volume create $V0 replica 2 $H0:$B0/${V0}{1..6};
TEST $CLI volume set $V0 cluster.consistent-metadata on
#TEST $CLI volume set $V0 cluster.post-op-delay-secs 0
TEST $CLI volume set $V0 nfs.rdirplus off
TEST $CLI volume start $V0

TEST kill_brick $V0 $H0 $B0/${V0}1
TEST kill_brick $V0 $H0 $B0/${V0}3
TEST kill_brick $V0 $H0 $B0/${V0}5

EXPECT_WITHIN $NFS_EXPORT_TIMEOUT "1" is_nfs_export_available;

# Mount NFS
mount_nfs $H0:/$V0 $N0 vers=3

#Create files
TEST mkdir -p $N0/nfs/dir1/dir2
for i in {1..10}; do
    TEST_IN_LOOP dd if=/dev/urandom of=$N0/nfs/dir1/dir2/file$i bs=1024k count=1
done

TEST tar cf /tmp/dir1.tar.gz $N0/nfs/dir1
TEST rm -f /tmp/dir1.tar.gz

EXPECT_WITHIN $UMOUNT_TIMEOUT "Y" force_umount $N0
cleanup;
```

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info: