Description of problem:
=======================
An inconsistency between replica bricks occurs through a combination of copying, moving, and removing a directory.

Version-Release number of selected component (if applicable):
=============================================================

How reproducible:
=================
About once every 100-3000 iterations.

Steps to Reproduce:
===================
1. Create a non-empty directory dir.
   1-1. mkdir dir
   1-2. touch dir/file
2. Create a copy of dir, swap dir with the copy, and clean up.
   2-1. mkdir work
   2-2. cp -a dir work/copy
   2-3. mv dir work/orig
   2-4. mv work/copy dir
   2-5. rm -rf work
3. Repeat step 2.

## Reproducible shell script ##
#!/bin/bash
# Suppose /data is on the glusterfs volume.
# (bash rather than plain sh: the for ((...)) loops are bashisms)
SRC_DIR=/data/tmp/dir
WORK_DIR=/data/tmp/work
mkdir $SRC_DIR
for ((I=1; I<=1; I++)); do
    touch $SRC_DIR/file$I
done
for ((N=0; N<=100000; N++)); do
    echo $N
    mkdir $WORK_DIR || break
    cp -a $SRC_DIR $WORK_DIR/copy || break
    mv $SRC_DIR $WORK_DIR/dir || break
    mv $WORK_DIR/copy $SRC_DIR || break
    rm -rf $WORK_DIR || break
done
## end of script ##

Actual results:
===============
The script stops when the rm -rf command fails with "rm: cannot remove `/data/tmp/work/dir': Directory not empty", even though /data/tmp/work/dir appears empty from the client. A warning is also registered in /var/log/glusterfs/bricks/gluster-brick1-gv0.log:

W [MSGID: 113026] [posix.c:1338:posix_mkdir] 0-gv0-posix: mkdir (/tmp/dir): gfid (3cdbdb99-07c4-4add-a742-2b112c393305) is already associated with directory (/gluster/brick1/gv0/.glusterfs/f1/d5/f1d5d265-d857-4a56-8c4d-7fa46cbd86a8/dir). Hence, both directories will share same gfid and this can lead to inconsistencies.

The bricks actually differ. Contents of Brick1 (sv1:/gluster/brick1/gv0/tmp/work/dir/):
> file

Contents of Brick2 (sv2:/gluster/brick1/gv0/tmp/work/dir/):
> copy
> copy/file

(A sketch for checking the gfids on each brick directly follows the volume info below.)

Expected results:
=================
No error occurs.

Additional info:
================
Gluster volume settings are the defaults, apart from the options shown below.

## gluster volume info output ##
Volume Name: gv0
Type: Replicate
Volume ID: ea926ec7-95af-4b2d-8a1e-f37951b733c5
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: sv1:/gluster/brick1/gv0
Brick2: sv2:/gluster/brick1/gv0
Options Reconfigured:
features.quota-deem-statfs: on
performance.readdir-ahead: on
features.quota: on
features.inode-quota: on
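To see the inconsistency the warning in Actual results describes, one way (a sketch only; it assumes root SSH access to both brick hosts and that getfattr from the attr package is installed) is to compare the trusted.gfid extended attribute of the affected directory on each brick:

## sketch: compare gfids across bricks ##
#!/bin/bash
# Brick hosts and path are the ones from this report; adjust as needed.
BRICK_DIR=/gluster/brick1/gv0/tmp/work/dir
for HOST in sv1 sv2; do
    echo "== $HOST =="
    # trusted.gfid is the GlusterFS file id stored as an extended attribute
    ssh root@$HOST getfattr -n trusted.gfid -e hex "$BRICK_DIR"
done
## end of script ##

If one gfid turns up on two different paths (as the mkdir warning reports), or the two replicas return different gfids for the same path, the replicas have diverged.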
Can you attach the brick log, client log, shd log, and the output of the volume status command?
Created attachment 1114278 [details]
Log files and output of gluster commands

Hello,

I've attached the log files and the output of the gluster commands. The steps for creating the logs are as follows (steps 1-8 are also collected into a script sketch at the end of this comment):

1. logrotate -f /etc/logrotate.d/glusterfs
2. gluster volume create gv0 replica 2 vm1:/gluster/brick1/gv0 vm2:/gluster/brick1/gv0
3. gluster volume set gv0 nfs.disable yes
4. gluster volume start gv0
5. gluster volume quota gv0 enable
6. mount -t glusterfs localhost:gv0 /data
7. mkdir /data/tmp
8. gluster volume quota gv0 limit-usage /tmp 1GB
9. Execute the reproduction script.
10. Archive the log files and the output of the gluster commands.

The archive contains:
./var/log/glusterfs/volume-status-gv0.txt  # output of "gluster volume status gv0"
./var/log/glusterfs/volume-info-gv0.txt    # output of "gluster volume info gv0"
./var/log/glusterfs/cli.log
./var/log/glusterfs/cmd_history.log
./var/log/glusterfs/geo-replication/
./var/log/glusterfs/geo-replication-slaves/
./var/log/glusterfs/geo-replication-slaves/mbr/
./var/log/glusterfs/glustershd.log
./var/log/glusterfs/gv0-quota-crawl.log
./var/log/glusterfs/quotad.log
./var/log/glusterfs/quota-mount-gv0.log
./var/log/glusterfs/data.log
./var/log/glusterfs/snaps/
./var/log/glusterfs/snaps/gv0/
./var/log/glusterfs/bricks/gluster-brick1-gv0.log

Thanks.
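For reference, the setup steps above collected into one script (a sketch only; hostnames vm1/vm2, the /data mount point, and the quota path are the ones from the steps above, and it assumes the two nodes are already in the same trusted pool):

## sketch: setup script, run on vm1 ##
#!/bin/bash
set -e
logrotate -f /etc/logrotate.d/glusterfs
gluster volume create gv0 replica 2 vm1:/gluster/brick1/gv0 vm2:/gluster/brick1/gv0
gluster volume set gv0 nfs.disable yes
gluster volume start gv0
gluster volume quota gv0 enable
mount -t glusterfs localhost:gv0 /data
mkdir /data/tmp
# quota paths are relative to the volume root, hence /tmp rather than /data/tmp
gluster volume quota gv0 limit-usage /tmp 1GB
## end of script ##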
This bug is being closed because GlusterFS-3.7 has reached its end-of-life.

Note: This bug is being closed using a script. No verification has been performed to check whether it still exists on newer releases of GlusterFS. If this bug still exists in newer GlusterFS releases, please reopen it against the newer release.