Bug 1232230 - [geo-rep]: Directory renames are not captured in changelog hence it doesn't sync to the slave and glusterfind output
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterfind
Version: 3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: RHGS 3.1.0
Assigned To: Milind Changire
QA Contact: Sweta Anandpara
Keywords: Regression
Depends On:
Blocks: 1202842 1223636
 
Reported: 2015-06-16 06:30 EDT by Sweta Anandpara
Modified: 2016-09-17 11:20 EDT
CC List: 8 users

See Also:
Fixed In Version: glusterfs-3.7.1-6
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-07-29 01:04:49 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---




External Trackers
Tracker ID: Red Hat Product Errata RHSA-2015:1495
Priority: normal
Status: SHIPPED_LIVE
Summary: Important: Red Hat Gluster Storage 3.1 update
Last Updated: 2015-07-29 04:26:26 EDT

Description Sweta Anandpara 2015-06-16 06:30:14 EDT
Description of problem:
When a directory is created, it is recorded as a NEW entry. However, when the directory is moved from its present location to a different location (keeping the same name), the move is not recorded in the output file. Nor is it recorded when the directory is renamed in place.


Version-Release number of selected component (if applicable):
glusterfs-3.7.1-3.el6rhs.x86_64

How reproducible: Always


Steps to Reproduce:
1. Have a 2-node cluster with a 2x2 distributed-replicate volume 'pluto'
2. Create glusterfind sessions 'sessp1' and 'sessp2' 
3. Create the following files and directories at the mountpoint:
test1
test2
dir1
dir1/dir2
dir1/dir2/a
4. Execute glusterfind pre and post and verify that all 5 entries are recorded in the outfile as NEW
5. Move the directory dir1/dir2 to the mountpoint and execute glusterfind pre and post:
mv dir1/dir2  <mountpoint>
6. Nothing gets recorded in the output file
7. Rename the directory 'dir2' to 'newdir2' and again execute glusterfind pre and post
8. There is no mention of 'dir2' or 'newdir2' in the output file (a consolidated reproduction sketch follows these steps)
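
For convenience, the steps above consolidated into a rough shell sketch. It assumes volume 'pluto' is mounted at /mnt/pp on a client (the mountpoint path is illustrative) and that session 'sessp1' already exists; filesystem operations run on the client mount, glusterfind commands on a storage node.

# client: create the initial entries
cd /mnt/pp
touch test1 test2
mkdir -p dir1/dir2
touch dir1/dir2/a

# server: all 5 entries are expected as NEW
glusterfind pre sessp1 pluto /tmp/outp.txt
glusterfind post sessp1 pluto

# client: move the directory to the mountpoint
mv dir1/dir2 /mnt/pp

# server: BUG - the outfile comes back empty
glusterfind pre sessp1 pluto /tmp/outp.txt
glusterfind post sessp1 pluto

# client: rename the directory in place
mv /mnt/pp/dir2 /mnt/pp/newdir2

# server: BUG - no mention of 'dir2' or 'newdir2'
glusterfind pre sessp1 pluto /tmp/outp.txt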


Actual results:
'mv' on a directory does not produce the expected entry in the outfile

Expected results:

A directory move, to the same or a different name, in the same or a different location, should be recorded in the output file.
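
For reference, entries of the following form would be expected for steps 5 and 7 (format inferred from the NEW/RENAME entries elsewhere in this report; the exact encoding of these hypothetical entries is an assumption):

RENAME dir1%2F%2Fdir2 dir2
RENAME dir2 newdir2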


Additional info:

[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# gluster v create pluto replica 2 10.70.43.93:/rhs/thinbrick1/pluto 10.70.43.155:/rhs/thinbrick1/pluto 10.70.43.93:/rhs/thinbrick2/pluto 10.70.43.155:/rhs/thinbrick2/pluto
volume create: pluto: success: please start the volume to access data
[root@dhcp43-93 ~]# gluster v info pluto
 
Volume Name: pluto
Type: Distributed-Replicate
Volume ID: d17afc82-f5e4-44ac-816b-5c5705879bda
Status: Created
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.43.93:/rhs/thinbrick1/pluto
Brick2: 10.70.43.155:/rhs/thinbrick1/pluto
Brick3: 10.70.43.93:/rhs/thinbrick2/pluto
Brick4: 10.70.43.155:/rhs/thinbrick2/pluto
Options Reconfigured:
performance.readdir-ahead: on
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# gluster v start pluto
gluster volume start: pluto: success
[root@dhcp43-93 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sessn2                    nash                      2015-06-16 20:17:24      
sessn3                    nash                      2015-06-16 17:47:02      
sesso1                    ozone                     2015-06-15 23:48:42      
sessn1                    nash                      2015-06-16 18:02:11      
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind create sessp1 pluto
Session sessp1 created with volume pluto
[root@dhcp43-93 ~]# glusterfind create sessp2 pluto
Session sessp2 created with volume pluto
[root@dhcp43-93 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sessn2                    nash                      2015-06-16 20:17:24      
sessn3                    nash                      2015-06-16 17:47:02      
sessp1                    pluto                     2015-06-16 21:12:45      
sesso1                    ozone                     2015-06-15 23:48:42      
sessn1                    nash                      2015-06-16 18:02:11      
sessp2                    pluto                     2015-06-16 21:12:53      
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# cd /var/lib/glusterd/glusterfind/
.keys/  sessn1/ sessn2/ sessn3/ sesso1/ sesso2/ sesso3/ sessp1/ sessp2/ sessv1/ 
[root@dhcp43-93 ~]# cd /var/lib/glusterd/glusterfind/sessp1/pluto/
%2Frhs%2Fthinbrick1%2Fpluto.status  sessp1_pluto_secret.pem             status
%2Frhs%2Fthinbrick2%2Fpluto.status  sessp1_pluto_secret.pem.pub         
[root@dhcp43-93 ~]# cd /var/lib/glusterd/glusterfind/sessp1/pluto/
%2Frhs%2Fthinbrick1%2Fpluto.status  sessp1_pluto_secret.pem             status
%2Frhs%2Fthinbrick2%2Fpluto.status  sessp1_pluto_secret.pem.pub         
[root@dhcp43-93 ~]# cd /var/lib/glusterd/glusterfind/sessp1/pluto/^C
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# date
Tue Jun 16 21:14:12 IST 2015
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind pre
usage: glusterfind pre [-h] [--debug] [--full] [--disable-partial]
                       [--output-prefix OUTPUT_PREFIX] [--regenerate-outfile]
                       [-N]
                       session volume outfile
glusterfind pre: error: too few arguments
[root@dhcp43-93 ~]# glusterfind pre sessp1 pluto /tmp/outp.txt
Generated output file /tmp/outp.txt
[root@dhcp43-93 ~]# cat /tmp/outp.txt 
NEW test1 
NEW test2 
NEW dir1 
NEW dir1%2F%2Fdir2 
NEW dir1%2Fdir2%2F%2Fa 
[root@dhcp43-93 ~]# glusterfind post sessp1 pluto
Session sessp1 with volume pluto updated
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind pre sessp1 pluto /tmp/outp.txt
Generated output file /tmp/outp.txt
[root@dhcp43-93 ~]# cat /tmp/outp.txt 
RENAME test1 dir1%2Fdir2%2F%2Ftest1
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind post sessp1 pluto
Session sessp1 with volume pluto updated
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind pre sessp1 pluto /tmp/outp.txt
Generated output file /tmp/outp.txt
[root@dhcp43-93 ~]# cat /tmp/outp.txt 
[root@dhcp43-93 ~]# date
Tue Jun 16 21:16:50 IST 2015
[root@dhcp43-93 ~]# date
Tue Jun 16 21:17:55 IST 2015
[root@dhcp43-93 ~]# glusterfind pre sessp1 pluto /tmp/outp.txt --regenerate-outfile
Generated output file /tmp/outp.txt
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# cat /tmp/outp.txt 
[root@dhcp43-93 ~]# date
Tue Jun 16 21:19:11 IST 2015
[root@dhcp43-93 ~]# date
Tue Jun 16 21:23:49 IST 2015
[root@dhcp43-93 ~]# glusterfind pre sessp1 pluto /tmp/outp.txt --regenerate-outfile
Generated output file /tmp/outp.txt
[root@dhcp43-93 ~]# cat /tmp/outp.txt 
[root@dhcp43-93 ~]# # after renaming the moved directory
[root@dhcp43-93 ~]# date
Tue Jun 16 21:24:42 IST 2015
[root@dhcp43-93 ~]# glusterfind pre sessp1 pluto /tmp/outp.txt --regenerate-outfile
Generated output file /tmp/outp.txt
[root@dhcp43-93 ~]# cat /tmp/outp.txt 
[root@dhcp43-93 ~]# rpm -qa | grep glusterfs
glusterfs-api-3.7.1-3.el6rhs.x86_64
glusterfs-libs-3.7.1-3.el6rhs.x86_64
glusterfs-3.7.1-3.el6rhs.x86_64
glusterfs-fuse-3.7.1-3.el6rhs.x86_64
glusterfs-server-3.7.1-3.el6rhs.x86_64
glusterfs-client-xlators-3.7.1-3.el6rhs.x86_64
glusterfs-cli-3.7.1-3.el6rhs.x86_64
[root@dhcp43-93 ~]#
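
A note on the outfile format above: glusterfind percent-encodes the path separator '/' as %2F, with a doubled %2F%2F before the final path component (as seen in the NEW entries). An illustrative way to read the entries with plain paths:

sed 's|%2F|/|g' /tmp/outp.txt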
Comment 2 Sweta Anandpara 2015-06-16 06:41:57 EDT
Client logs:

[root@dhcp43-71 ~]# mkdir /mnt/pp
[root@dhcp43-71 ~]# mount -t nfs 10.70.43.155:/pluto /mnt/pp
[root@dhcp43-71 ~]# cd /mnt/pp
[root@dhcp43-71 pp]# ls
[root@dhcp43-71 pp]# ls -a
.  ..  .trashcan
[root@dhcp43-71 pp]# 
[root@dhcp43-71 pp]# 
[root@dhcp43-71 pp]# echo "whatever" > test1
[root@dhcp43-71 pp]# echo "hello world" > test2
[root@dhcp43-71 pp]# mkdir dir1
[root@dhcp43-71 pp]# mkdir dir1/dir2
[root@dhcp43-71 pp]# touch dir1/dir2/a
[root@dhcp43-71 pp]# 
[root@dhcp43-71 pp]# ls -a
.  ..  dir1  test1  test2  .trashcan
[root@dhcp43-71 pp]# ls -lrt
total 2
-rw-r--r--. 1 root root  9 Jun 16 21:13 test1
-rw-r--r--. 1 root root 12 Jun 16 21:13 test2
drwxr-xr-x. 3 root root 34 Jun 16 21:13 dir1
[root@dhcp43-71 pp]# 
[root@dhcp43-71 pp]# 
[root@dhcp43-71 pp]# mv test1 dir1/dir2/
[root@dhcp43-71 pp]# ls -a
.  ..  dir1  test2  .trashcan
[root@dhcp43-71 pp]# mv dir1/dir2/ .
[root@dhcp43-71 pp]# ls -a
.  ..  dir1  dir2  test2  .trashcan
[root@dhcp43-71 pp]# 
[root@dhcp43-71 pp]# 
[root@dhcp43-71 pp]# ls dir1
[root@dhcp43-71 pp]# ls dir2
a  test1
[root@dhcp43-71 pp]# ls -a
.  ..  dir1  dir2  test2  .trashcan
[root@dhcp43-71 pp]# 
[root@dhcp43-71 pp]# mv dir2 newdir2
[root@dhcp43-71 pp]# 
[root@dhcp43-71 pp]# ls -a
.  ..  dir1  newdir2  test2  .trashcan
[root@dhcp43-71 pp]#
Comment 3 Sweta Anandpara 2015-06-19 08:22:35 EDT
Hit this issue again on the 3.7.1-4 build, where a directory rename is not captured in the output file. This was done after adding a brick pair to my 3x2 dist-rep volume (to make it 4x2). Pasted below is the output before and after running the rebalance command.

[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# gluster v list
gluster_shared_storage
ozone
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# gluster v info ozone
 
Volume Name: ozone
Type: Distributed-Replicate
Volume ID: 9ef1ace8-505d-4d97-aa23-4296aa685f76
Status: Started
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.43.191:/rhs/thinbrick1/ozone
Brick2: 10.70.42.202:/rhs/thinbrick1/ozone
Brick3: 10.70.43.191:/rhs/thinbrick2/ozone
Brick4: 10.70.42.202:/rhs/thinbrick2/ozone
Brick5: 10.70.42.30:/rhs/thinbrick1/ozone
Brick6: 10.70.42.147:/rhs/thinbrick1/ozone
Options Reconfigured:
performance.readdir-ahead: on
storage.build-pgfid: on
changelog.changelog: on
changelog.capture-del-path: on
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# gluster v replace-brick
Usage: volume replace-brick <VOLNAME> <SOURCE-BRICK> <NEW-BRICK> {commit force}
[root@dhcp43-191 ~]# gluster v replace-brick ozone 10.70.42.147:/rhs/thinbrick1/ozone 10.70.42.147:/rhs/thinbrick2/ozone
Usage: volume replace-brick <VOLNAME> <SOURCE-BRICK> <NEW-BRICK> {commit force}
[root@dhcp43-191 ~]# gluster v replace-brick ozone 10.70.42.147:/rhs/thinbrick1/ozone 10.70.42.147:/rhs/thinbrick2/ozone ^C
[root@dhcp43-191 ~]# gluster v add-brick
Usage: volume add-brick <VOLNAME> [<stripe|replica> <COUNT>] <NEW-BRICK> ... [force]
[root@dhcp43-191 ~]# gluster v add-brick ozone replica 2 10.70.42.30:/rhs/thinbrick2/ozone 10.70.42.147:/rhs/thinbrick2/ozone
volume add-brick: success
[root@dhcp43-191 ~]# gluster v status ozone
Status of volume: ozone
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.43.191:/rhs/thinbrick1/ozone    49153     0          Y       6807 
Brick 10.70.42.202:/rhs/thinbrick1/ozone    49153     0          Y       31482
Brick 10.70.43.191:/rhs/thinbrick2/ozone    49154     0          Y       6815 
Brick 10.70.42.202:/rhs/thinbrick2/ozone    49154     0          Y       31489
Brick 10.70.42.30:/rhs/thinbrick1/ozone     49153     0          Y       1999 
Brick 10.70.42.147:/rhs/thinbrick1/ozone    49152     0          Y       31451
Brick 10.70.42.30:/rhs/thinbrick2/ozone     49155     0          Y       16818
Brick 10.70.42.147:/rhs/thinbrick2/ozone    49154     0          Y       23586
NFS Server on localhost                     2049      0          Y       2821 
Self-heal Daemon on localhost               N/A       N/A        Y       2829 
NFS Server on 10.70.42.30                   2049      0          Y       16840
Self-heal Daemon on 10.70.42.30             N/A       N/A        Y       16848
NFS Server on 10.70.42.147                  2049      0          Y       23608
Self-heal Daemon on 10.70.42.147            N/A       N/A        Y       23616
NFS Server on 10.70.42.202                  2049      0          Y       24979
Self-heal Daemon on 10.70.42.202            N/A       N/A        Y       24987
 
Task Status of Volume ozone
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@dhcp43-191 ~]# gluster v info ozone
 
Volume Name: ozone
Type: Distributed-Replicate
Volume ID: 9ef1ace8-505d-4d97-aa23-4296aa685f76
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: 10.70.43.191:/rhs/thinbrick1/ozone
Brick2: 10.70.42.202:/rhs/thinbrick1/ozone
Brick3: 10.70.43.191:/rhs/thinbrick2/ozone
Brick4: 10.70.42.202:/rhs/thinbrick2/ozone
Brick5: 10.70.42.30:/rhs/thinbrick1/ozone
Brick6: 10.70.42.147:/rhs/thinbrick1/ozone
Brick7: 10.70.42.30:/rhs/thinbrick2/ozone
Brick8: 10.70.42.147:/rhs/thinbrick2/ozone
Options Reconfigured:
performance.readdir-ahead: on
storage.build-pgfid: on
changelog.changelog: on
changelog.capture-del-path: on
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sesso3                    ozone                     2015-06-18 16:27:30      
sesso1                    ozone                     2015-06-19 22:37:28      
sesso2                    ozone                     2015-06-19 22:44:40      
sesso4                    ozone                     2015-06-18 16:27:38      
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# glusterfind pre sesso1 ozone /tmp/outo1.txt 
10.70.42.30 - pre failed: [2015-06-19 17:43:06.063740] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2015-06-19 17:43:06.063989] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 4
[2015-06-19 17:43:06.064059] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 3
[2015-06-19 17:43:06.064264] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
/rhs/thinbrick2/ozone Historical Changelogs not available: [Errno 2] No such file or directory

10.70.42.147 - pre failed: /rhs/thinbrick2/ozone Historical Changelogs not available: [Errno 2] No such file or directory

Generated output file /tmp/outo1.txt
[root@dhcp43-191 ~]# cat /tmp/outo1.txt 
MODIFY newdir1%2Fdir2%2Fa               <<< expected but missing: RENAME dir1 newdir1
MODIFY level01%2F 
[root@dhcp43-191 ~]# glusterfind pre sesso1 ozone /tmp/outo1.txt --regenerate-outfile
10.70.42.30 - pre failed: [2015-06-19 17:43:47.387379] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2015-06-19 17:43:47.387432] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
[2015-06-19 17:43:47.387921] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 3
[2015-06-19 17:43:47.388056] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 4
/rhs/thinbrick2/ozone Historical Changelogs not available: [Errno 2] No such file or directory

10.70.42.147 - pre failed: [2015-06-19 17:43:47.543465] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
[2015-06-19 17:43:47.543406] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2015-06-19 17:43:47.543953] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 4
[2015-06-19 17:43:47.544401] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 3
/rhs/thinbrick2/ozone Historical Changelogs not available: [Errno 2] No such file or directory

Generated output file /tmp/outo1.txt
[root@dhcp43-191 ~]# cat /tmp/outo1.txt 
MODIFY newdir1%2Fdir2%2Fa 
MODIFY level01%2F 
[root@dhcp43-191 ~]# gluster v rebalance
Usage: volume rebalance <VOLNAME> {{fix-layout start} | {start [force]|stop|status}}
[root@dhcp43-191 ~]# gluster v rebalance ozone start
volume rebalance: ozone: success: Rebalance on ozone has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: b8b6bd71-29db-495f-b1bd-b31d21f8673d

[root@dhcp43-191 ~]# gluster v rebalance ozone status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhost              102       226.9KB           357             0             2          in progress               7.00
                            10.70.42.202                0        0Bytes             0             0             0          in progress               7.00
                             10.70.42.30               39        48.6KB           238             0             0          in progress               7.00
                            10.70.42.147                0        0Bytes             0             0             0          in progress               7.00
volume rebalance: ozone: success: 
[root@dhcp43-191 ~]# gluster v rebalance ozone status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhost              280         2.6MB          1046             1            15            completed              29.00
                            10.70.42.202                0        0Bytes             0             0             0            completed              27.00
                             10.70.42.30              114       742.4KB           561             0             0            completed              27.00
                            10.70.42.147                0        0Bytes             0             0             0            completed              27.00
volume rebalance: ozone: success: 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# glusterfind pre sesso1 ozone /tmp/outo1.txt --regenerate-outfile
10.70.42.30 - pre failed: /rhs/thinbrick2/ozone Historical Changelogs not available: [Errno 2] No such file or directory

10.70.42.147 - pre failed: /rhs/thinbrick2/ozone Historical Changelogs not available: [Errno 2] No such file or directory

Generated output file /tmp/outo1.txt
[root@dhcp43-191 ~]# cat /tmp/outo1.txt 
MODIFY newdir1%2Fdir2%2Fa 
MODIFY level01%2F 
[root@dhcp43-191 ~]# date
Fri Jun 19 23:19:48 IST 2015
[root@dhcp43-191 ~]# date
Fri Jun 19 23:20:29 IST 2015
[root@dhcp43-191 ~]# glusterfind pre sesso1 ozone /tmp/outo1.txt --regenerate-outfile
10.70.42.30 - pre failed: /rhs/thinbrick2/ozone Historical Changelogs not available: [Errno 2] No such file or directory

10.70.42.147 - pre failed: [2015-06-19 17:50:35.959012] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
/rhs/thinbrick2/ozone Historical Changelogs not available: [Errno 2] No such file or directory

Generated output file /tmp/outo1.txt
[root@dhcp43-191 ~]# cat /tmp/outo1.txt 
MODIFY newdir1%2Fdir2%2Fa 
MODIFY level01%2F 
[root@dhcp43-191 ~]# 

Is it expected to get errors like 'Historical Changelogs not available' even after a rebalance? Also, can we have better-worded errors rather than just 'pre failed'? That would give the user a better picture.
Comment 4 Sweta Anandpara 2015-06-19 08:24:59 EDT
Missed pasting the client-side logs:

[root@dhcp43-71 ozone]# 
[root@dhcp43-71 ozone]# ls -lrt
total 325
-rw-r--r--. 1 root root 102728 Jun 18 18:07 5582ba18%%FTZ39AW4RV.tar.gz
-rwxr-xr-x. 1 u1   g1   102400 Jun 18 18:36 V6MO_newhdln
-rw-r--r--. 1 root root 102400 Jun 18 20:47 5582baeb%%SNRP8ENTE9
lrwxrwxrwx. 1 root root      7 Jun 18 21:00 level20_sln2 -> level20
-rw-r--r--. 1 root root      0 Jun 19 15:46 a
-rw-r--r--. 1 root root      0 Jun 19 15:47 b
drwxr-xr-x. 2 root root  12294 Jun 19  2015 etc
drwxr-xr-x. 2 root root    134 Jun 19  2015 level10
drwxr-xr-x. 2 root root     92 Jun 19  2015 level00
drwxr-xr-x. 2 root root     57 Jun 19  2015 dir1
drwxr-xr-x. 2 u2   g1     8233 Jun 19  2015 level01
drwxr-xr-x. 2 root root    134 Jun 19  2015 level02
[root@dhcp43-71 ozone]# 
[root@dhcp43-71 ozone]# 
[root@dhcp43-71 ozone]# ls -lrt dir1
total 1
drwxr-xr-x. 2 root root 32 Jun 19  2015 dir2
[root@dhcp43-71 ozone]# ls -lrt dir1/dir2
total 0
-rw-r--r--. 1 root root 0 Jun 19 22:34 a
[root@dhcp43-71 ozone]# 
[root@dhcp43-71 ozone]# echo "fjdslfjdksljfds" >> dir1/dir2/a
[root@dhcp43-71 ozone]# 
[root@dhcp43-71 ozone]# 
[root@dhcp43-71 ozone]# mv dir1 newdir1
[root@dhcp43-71 ozone]# ls -lrt
total 329
-rw-r--r--. 1 root root 102728 Jun 18 18:07 5582ba18%%FTZ39AW4RV.tar.gz
-rwxr-xr-x. 1 u1   g1   102400 Jun 18 18:36 V6MO_newhdln
-rw-r--r--. 1 root root 102400 Jun 18 20:47 5582baeb%%SNRP8ENTE9
lrwxrwxrwx. 1 root root      7 Jun 18 21:00 level20_sln2 -> level20
-rw-r--r--. 1 root root      0 Jun 19 15:46 a
-rw-r--r--. 1 root root      0 Jun 19 15:47 b
drwxr-xr-x. 2 root root  12294 Jun 19 23:11 etc
drwxr-xr-x. 3 root root    134 Jun 19 23:11 level10
drwxr-xr-x. 2 root root     92 Jun 19 23:11 level00
drwxr-xr-x. 2 u2   g1     8233 Jun 19 23:11 level01
drwxr-xr-x. 2 root root    134 Jun 19 23:11 level02
drwxr-xr-x. 3 root root     68 Jun 19 23:11 newdir1
[root@dhcp43-71 ozone]# 
[root@dhcp43-71 ozone]# chmod 744 level01
[root@dhcp43-71 ozone]# ls -lrt
total 329
-rw-r--r--. 1 root root 102728 Jun 18 18:07 5582ba18%%FTZ39AW4RV.tar.gz
-rwxr-xr-x. 1 u1   g1   102400 Jun 18 18:36 V6MO_newhdln
-rw-r--r--. 1 root root 102400 Jun 18 20:47 5582baeb%%SNRP8ENTE9
lrwxrwxrwx. 1 root root      7 Jun 18 21:00 level20_sln2 -> level20
-rw-r--r--. 1 root root      0 Jun 19 15:46 a
-rw-r--r--. 1 root root      0 Jun 19 15:47 b
drwxr-xr-x. 2 root root  12294 Jun 19 23:11 etc
drwxr-xr-x. 2 root root    134 Jun 19 23:11 level10
drwxr-xr-x. 2 root root     92 Jun 19 23:11 level00
drwxr--r--. 3 u2   g1     8233 Jun 19 23:11 level01
drwxr-xr-x. 2 root root    134 Jun 19 23:11 level02
drwxr-xr-x. 3 root root     68 Jun 19 23:11 newdir1
[root@dhcp43-71 ozone]# 
[root@dhcp43-71 ozone]# 
[root@dhcp43-71 ozone]# mount | grep ozone
10.70.42.202:/ozone on /mnt/ozone type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
[root@dhcp43-71 ozone]# 
[root@dhcp43-71 ozone]# 
[root@dhcp43-71 ozone]# pwd
/mnt/ozone
[root@dhcp43-71 ozone]# df -k .
Filesystem          1K-blocks   Used Available Use% Mounted on
10.70.42.202:/ozone  33511424 180608  33330816   1% /mnt/ozone
[root@dhcp43-71 ozone]#
Comment 9 Sweta Anandpara 2015-07-04 03:38:36 EDT
Tested and verified this on build 3.7.1-7.

Every directory creation is recorded as a NEW entry. A rename of a directory, to the same or a different name, in the same or a different location, is recorded as a RENAME entry with the correct <old path> and <new path>. A directory creation followed by a rename within the same pre/post cycle is recorded as a single NEW entry with the new path.
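
In short (an illustrative summary of the verified behavior; 'pre/post' denotes one glusterfind pre + post cycle, and the names are hypothetical):

mkdir d          ; pre/post   ->  NEW d
mv d e           ; pre/post   ->  RENAME d e
mkdir d ; mv d e ; pre/post   ->  NEW e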

Moving this to fixed in 3.1 Everglades. Pasted below are the logs:

SERVER
=========


[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# gluster v create ozone replica 2 10.70.43.93:/rhs/thinbrick1/ozone 10.70.43.155:/rhs/thinbrick1/ozonen 10.70.43.93:/rhs/thinbrick2/ozone 10.70.43.155:/rhs/thinbrick2/ozone
volume create: ozone: success: please start the volume to access data
[root@dhcp43-93 ~]# gluster v start ozone
volume start: ozone: success
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
ss2                       slave                     2015-06-27 00:08:39      
ss1                       slave                     2015-06-27 00:25:26      
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind create so1 ozone
Session so1 created with volume ozone
[root@dhcp43-93 ~]# glusterfind create so2 ozone
Session so2 created with volume ozone
[root@dhcp43-93 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
ss2                       slave                     2015-06-27 00:08:39      
so2                       ozone                     2015-07-04 18:18:32      
ss1                       slave                     2015-06-27 00:25:26      
so1                       ozone                     2015-07-04 18:18:23      
[root@dhcp43-93 ~]# glusterfind create so3 ozone
Session so3 created with volume ozone
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# # after creating test1, test2, dir1/, dir2/, dir2/dir22/, dir2/dir22/dir23/, dir2/dir22/dir23/dir245/ and renaming dir2/dir22/dir23/dir245/ -> dir2/dir22/dir23/dir24/
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind pre so1 ozone /tmp/out.txt
Generated output file /tmp/out.txt
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# vi /tmp/out.txt 
[root@dhcp43-93 ~]# cat /tmp/out.txt 
MODIFY .trashcan%2F 
NEW test1 
NEW test2 
NEW dir1 
NEW dir2 
NEW dir2%2F%2Fdir22 
NEW dir2%2Fdir22%2F%2Fdir23 
NEW dir2%2Fdir22%2Fdir23%2F%2Fdir24 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# # after executing mv dir2/dir22/dir23/ dir1/
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind post so1 ozone 
Session so1 with volume ozone updated
[root@dhcp43-93 ~]# glusterfind pre so1 ozone /tmp/out.txt
Generated output file /tmp/out.txt
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# vi /tmp/out.txt 
[root@dhcp43-93 ~]# cat /tmp/out.txt 
RENAME dir2%2Fdir22%2F%2Fdir23 dir1%2F%2Fdir23
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind post so1 ozone 
Session so1 with volume ozone updated
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind pre so2 ozone /tmp/out2.txt
Generated output file /tmp/out2.txt
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# cat /tmp/out2.txt 
MODIFY .trashcan%2F 
NEW test1 
NEW test2 
NEW dir1 
NEW dir2 
NEW dir2%2F%2Fdir22 
NEW dir1%2F%2Fdir23 
NEW dir1%2Fdir23%2F%2Fdir24 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# # after creating : dir1/dir23/a, dir2/a, and mv dir1/dir23/ ./dir23_new
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind post so1 ozone 
Pre script is not run
[root@dhcp43-93 ~]# glusterfind pre so1 ozone /tmp/out.txt
Generated output file /tmp/out.txt
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# cat /tmp/out.txt 
NEW dir23_new%2F%2Fa 
NEW dir2%2F%2Fa 
RENAME dir1%2F%2Fdir23 dir23_new
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# # after doing: mv dir23_new/dir24/ dir2/dir22/, touch dir2/dir22/dir24/file1, mv dir2/dir22/dir24/file1 dir23_new/file1_new
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind post so1 ozoen
Session so1 not created with volume ozoen
[root@dhcp43-93 ~]# glusterfind post so1 ozone
Session so1 with volume ozone updated
[root@dhcp43-93 ~]# glusterfind pre so1 ozone /tmp/out.txt
Generated output file /tmp/out.txt
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# cat /tmp/out.txt 
RENAME dir23_new%2F%2Fdir24 dir2%2Fdir22%2F%2Fdir24
NEW dir23_new%2F%2Ffile1_new 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind post so1 ozone
Session so1 with volume ozone updated
[root@dhcp43-93 ~]# glusterfind pre so2 ozone /tmp/out2.txt
Post command is not run after last pre, use --regenerate-outfile
[root@dhcp43-93 ~]# glusterfind post so2 ozone
Session so2 with volume ozone updated
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind pre so2 ozone /tmp/out2.txt
Generated output file /tmp/out2.txt
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# cat /tmp/out2.txt 
NEW dir23_new%2F%2Fa 
NEW dir2%2F%2Fa 
RENAME dir1%2F%2Fdir23 dir23_new
RENAME dir23_new%2F%2Fdir24 dir2%2Fdir22%2F%2Fdir24
NEW dir23_new%2F%2Ffile1_new 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# rpm -qa | grep glusterfs
glusterfs-client-xlators-3.7.1-7.el6rhs.x86_64
glusterfs-server-3.7.1-7.el6rhs.x86_64
glusterfs-3.7.1-7.el6rhs.x86_64
glusterfs-api-3.7.1-7.el6rhs.x86_64
glusterfs-cli-3.7.1-7.el6rhs.x86_64
glusterfs-geo-replication-3.7.1-7.el6rhs.x86_64
glusterfs-libs-3.7.1-7.el6rhs.x86_64
glusterfs-fuse-3.7.1-7.el6rhs.x86_64
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# rpm -qa | grep glusterfs
glusterfs-client-xlators-3.7.1-7.el6rhs.x86_64
glusterfs-server-3.7.1-7.el6rhs.x86_64
glusterfs-3.7.1-7.el6rhs.x86_64
glusterfs-api-3.7.1-7.el6rhs.x86_64
glusterfs-cli-3.7.1-7.el6rhs.x86_64
glusterfs-geo-replication-3.7.1-7.el6rhs.x86_64
glusterfs-libs-3.7.1-7.el6rhs.x86_64
glusterfs-fuse-3.7.1-7.el6rhs.x86_64
[root@dhcp43-93 ~]# gluster peer status
Number of Peers: 1

Hostname: 10.70.43.155
Uuid: 97f53dc5-1ba1-45dc-acdd-ddf38229035b
State: Peer in Cluster (Connected)
[root@dhcp43-93 ~]# 

CLIENT
=========

[root@dhcp43-59 ~]# mkdir /mnt/oz
[root@dhcp43-59 ~]# mount -t glusterfs 10.70.43.93:/ozone /mnt/oz
[root@dhcp43-59 ~]# cd /mnt/oz
[root@dhcp43-59 oz]# 
[root@dhcp43-59 oz]# 
[root@dhcp43-59 oz]# ls -a
.  ..  .trashcan
[root@dhcp43-59 oz]# 
[root@dhcp43-59 oz]# 
[root@dhcp43-59 oz]# 
[root@dhcp43-59 oz]# echo "what a beautiful day" > test1
[root@dhcp43-59 oz]# echo "hello world" > test2
[root@dhcp43-59 oz]# l s-a
-bash: l: command not found
[root@dhcp43-59 oz]# ls -a
.  ..  test1  test2  .trashcan
[root@dhcp43-59 oz]# 
[root@dhcp43-59 oz]# 
[root@dhcp43-59 oz]# mkdir dir1
[root@dhcp43-59 oz]# mkdir -p dir2/dir22/dir23/dir245
[root@dhcp43-59 oz]# mv dir2/dir22/dir23/dir245 dir2/dir22/dir23/dir24
[root@dhcp43-59 oz]# 
[root@dhcp43-59 oz]# 
[root@dhcp43-59 oz]# 
[root@dhcp43-59 oz]# 
[root@dhcp43-59 oz]# mv dir2/dir22/dir23/ dir1/
[root@dhcp43-59 oz]# 
[root@dhcp43-59 oz]# cd dir1/dir23/dir24/^C
[root@dhcp43-59 oz]# 
[root@dhcp43-59 oz]# 
[root@dhcp43-59 oz]# pwd
/mnt/oz
[root@dhcp43-59 oz]# 
[root@dhcp43-59 oz]# 
[root@dhcp43-59 oz]# ls
dir1  dir2  test1  test2
[root@dhcp43-59 oz]# ls -lrt
total 3
-rw-r--r--. 1 root root 21 Jul  4 18:23 test1
-rw-r--r--. 1 root root 12 Jul  4 18:24 test2
drwxr-xr-x. 3 root root 36 Jul  4 18:24 dir2
drwxr-xr-x. 3 root root 36 Jul  4 18:26 dir1
[root@dhcp43-59 oz]# 
[root@dhcp43-59 oz]# 
[root@dhcp43-59 oz]# 
[root@dhcp43-59 oz]# ls -lrt dir1/dir23/dir24/
total 0
[root@dhcp43-59 oz]# ls -lrt dir2/dir22/
total 0
[root@dhcp43-59 oz]# 
[root@dhcp43-59 oz]# 
[root@dhcp43-59 oz]# touch dir1/dir23/a
[root@dhcp43-59 oz]# touch dir2/a
[root@dhcp43-59 oz]# mv dir1/dir23/ ./dir23_new
[root@dhcp43-59 oz]# ls -a
.  ..  dir1  dir2  dir23_new  test1  test2  .trashcan
[root@dhcp43-59 oz]# 
[root@dhcp43-59 oz]# ls -lrt dir23_new/
a      dir24/ 
[root@dhcp43-59 oz]# ls -lrt dir23_new/dir24/
total 0
[root@dhcp43-59 oz]# 
[root@dhcp43-59 oz]# 
[root@dhcp43-59 oz]# 
[root@dhcp43-59 oz]# ls -la
total 10
drwxr-xr-x.  7 root root  178 Jul  4 18:29 .
drwxr-xr-x. 12 root root 4096 Jul  4 18:18 ..
drwxr-xr-x.  2 root root   12 Jul  4 18:29 dir1
drwxr-xr-x.  3 root root   44 Jul  4 18:29 dir2
drwxr-xr-x.  3 root root   44 Jul  4 18:29 dir23_new
-rw-r--r--.  1 root root   21 Jul  4 18:23 test1
-rw-r--r--.  1 root root   12 Jul  4 18:24 test2
drwxr-xr-x.  3 root root   48 Jul  4 18:17 .trashcan
[root@dhcp43-59 oz]# 
[root@dhcp43-59 oz]# 
[root@dhcp43-59 oz]# 
[root@dhcp43-59 oz]# touch dir2
dir2/      dir23_new/ 
[root@dhcp43-59 oz]# mv dir23_new/
a      dir24/ 
[root@dhcp43-59 oz]# mv dir23_new/dir24/ dir2
dir2/      dir23_new/ 
[root@dhcp43-59 oz]# mv dir23_new/dir24/ dir2/
a      dir22/ 
[root@dhcp43-59 oz]# mv dir23_new/dir24/ dir2/dir22/
[root@dhcp43-59 oz]# touch dir2/dir22/dir24/file1
[root@dhcp43-59 oz]# mv dir2/dir22/dir24/file1 dir23_new/file1_new
[root@dhcp43-59 oz]# 
[root@dhcp43-59 oz]# 
[root@dhcp43-59 oz]# 
[root@dhcp43-59 oz]# ls -lr dir23_new/
total 0
-rw-r--r--. 1 root root 0 Jul  4 18:33 file1_new
-rw-r--r--. 1 root root 0 Jul  4 18:29 a
[root@dhcp43-59 oz]# 
[root@dhcp43-59 oz]# 
[root@dhcp43-59 oz]# rpm -qa | grep gluster
glusterfs-libs-3.7.1-3.el6.x86_64
glusterfs-client-xlators-3.7.1-3.el6.x86_64
glusterfs-fuse-3.7.1-3.el6.x86_64
glusterfs-3.7.1-3.el6.x86_64
[root@dhcp43-59 oz]#
Comment 10 errata-xmlrpc 2015-07-29 01:04:49 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html
