Bug 1296825 - Inconsistency occurs by combination of copying, moving and removing a directory.
Product: GlusterFS
Classification: Community
Component: replicate
Hardware: x86_64 Linux
Priority: medium
Severity: high
Assigned To: Pranith Kumar K
Whiteboard: Triaged
Reported: 2016-01-08 03:45 EST by comboy999jdl
Modified: 2017-03-08 05:49 EST
CC: 4 users

Doc Type: Bug Fix
Last Closed: 2017-03-08 05:49:39 EST
Type: Bug

Attachments
Log files and output of gluster command (195.13 KB, application/x-gzip)
2016-01-13 00:48 EST, comboy999jdl

Description comboy999jdl 2016-01-08 03:45:04 EST
Description of problem:
Inconsistency occurs by combination of copying, moving and removing a directory.

Version-Release number of selected component (if applicable):

How reproducible:
About once in 100-3000 times.

Steps to Reproduce:
1. create a non-empty directory dir.
  1-1. mkdir dir
  1-2. touch dir/file
2. create a copy of dir, swap dir with copy and clean up.
  2-1. mkdir work
  2-2. cp -a dir work/copy
  2-3. mv dir work/orig
  2-4. mv work/copy dir
  2-5. rm -rf work
3. repeat 2.

## Reproducible shell script ##
# Suppose /data is on the glusterfs.
# (Variable values and loop structure reconstructed from the error
# messages below: the rm error names /data/tmp/work, and the brick-side
# warning names /tmp/dir.)
SRC_DIR=/data/tmp/dir
WORK_DIR=/data/tmp/work
mkdir $SRC_DIR
for I in 1 2 3; do
  touch $SRC_DIR/file$I
done
N=0
while true; do
  N=$((N + 1))
  echo $N
  mkdir $WORK_DIR || break
  cp -a $SRC_DIR $WORK_DIR/copy || break
  mv $SRC_DIR $WORK_DIR/dir || break
  mv $WORK_DIR/copy $SRC_DIR || break
  rm -rf $WORK_DIR || break
done
## end of script ##

Actual results:
The script stops when the rm -rf command fails with the error "rm: cannot remove
`/data/tmp/work/dir': Directory not empty".
Yet /data/tmp/work/dir is empty when listed from the mount point.

And a warning is registered in the brick log:
W [MSGID: 113026] [posix.c:1338:posix_mkdir] 0-gv0-posix: mkdir (/tmp/dir):
gfid (3cdbdb99-07c4-4add-a742-2b112c393305) is already associated with
Hence, both directories will share same gfid and this can lead to

Actually, the contents of Brick1 differ from those of Brick2.

Contents of Brick1 sv1:/gluster/brick1/gv0/tmp/work/dir/ are:

Contents of Brick2 sv2:/gluster/brick1/gv0/tmp/work/dir/ are:
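When the two bricks diverge like this, the gfid assignment can be checked directly on each brick by reading the trusted.gfid extended attribute. This is a generic diagnostic sketch, not part of the original report; the brick paths are taken from the volume info below and the commands must be run as root on the brick hosts (sv1 and sv2 respectively):

```shell
# Compare the gfid of the problem directory as stored on each brick.
# Identical gfids with differing directory contents match the
# posix_mkdir warning quoted above; differing gfids mean the
# replicas have split.
getfattr -n trusted.gfid -e hex /gluster/brick1/gv0/tmp/work/dir
```

The output is a single hex value per brick, which makes the comparison trivial to script across both servers.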

Expected results:
No error occurs.

Additional info:
Gluster volume settings are default.
## gluster volume info output ##
Volume Name: gv0
Type: Replicate
Volume ID: ea926ec7-95af-4b2d-8a1e-f37951b733c5
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Brick1: sv1:/gluster/brick1/gv0
Brick2: sv2:/gluster/brick1/gv0
Options Reconfigured:
features.quota-deem-statfs: on
performance.readdir-ahead: on
features.quota: on
features.inode-quota: on
Comment 1 Jiffin 2016-01-12 07:15:45 EST
Can you attach the brick log, client log, shd log, and the output of the volume status command?
Comment 2 comboy999jdl 2016-01-13 00:48 EST
Created attachment 1114278 [details]
Log files and output of gluster command

Hello, I'll send you the log files and the output of the gluster commands.

The steps for creating log are as follows:
1. logrotate -f /etc/logrotate.d/glusterfs
2. gluster volume create gv0 replica 2 vm1:/gluster/brick1/gv0 vm2:/gluster/brick1/gv0
3. gluster volume set gv0 nfs.disable yes
4. gluster volume start gv0
5. gluster volume quota gv0 enable
6. mount -t glusterfs localhost:gv0 /data
7. mkdir /data/tmp
8. gluster volume quota gv0 limit-usage /tmp 1GB
9. execute reproducible script.
10. archive log files and output of gluster commands. 
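The steps above can be collected into a single setup-and-capture script. This is a convenience sketch assuming it runs as root on vm1 with the hostnames and paths exactly as listed in the steps:

```shell
#!/bin/sh
# Recreate the volume and capture logs (steps 1-8 and 10 above).
set -e
logrotate -f /etc/logrotate.d/glusterfs
gluster volume create gv0 replica 2 vm1:/gluster/brick1/gv0 vm2:/gluster/brick1/gv0
gluster volume set gv0 nfs.disable yes
gluster volume start gv0
gluster volume quota gv0 enable
mount -t glusterfs localhost:gv0 /data
mkdir /data/tmp
gluster volume quota gv0 limit-usage /tmp 1GB
# ... run the reproducible script (step 9), then capture state:
gluster volume status gv0 > /var/log/glusterfs/volume-status-gv0.txt
gluster volume info gv0 > /var/log/glusterfs/volume-info-gv0.txt
```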

The archive contains:
./var/log/glusterfs/volume-status-gv0.txt  #output of "gluster volume status gv0"
./var/log/glusterfs/volume-info-gv0.txt    #output of "gluster volume info gv0"

Comment 3 Kaushal 2017-03-08 05:49:39 EST
This bug is getting closed because GlusterFS-3.7 has reached its end-of-life.

Note: This bug is being closed using a script. No verification has been performed to check if it still exists on newer releases of GlusterFS.
If this bug still exists in newer GlusterFS releases, please reopen this bug against the newer release.
