Description of problem:
Fresh geo-replication fails to copy directory group ownership to the slave.

Version-Release number of selected component (if applicable):
3.4.0.61geo-1.el6rhs.1.hotfix.sfdc01121015.sfdc0117419

Steps to Reproduce (customer's words):
1. We deleted the geo-replication sessions, destroyed the slave volumes, reformatted the brick file systems, rebuilt the slave volumes, and recreated the geo-replication sessions to sync to the completely empty slave volumes.
2. After the hybrid crawl finished, we used a read-only rsync command to compare the master and slave.

Actual results:
A large proportion of directories copied to the slave volume did not have the correct group. The GID of the directory on the slave volume was instead set to the UID of the directory owner. *Files* did not exhibit this behavior, just directories.

Expected results:
UID and GID on files and directories should remain unchanged.

Additional info:
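For reference, a read-only comparison of the kind described in step 2 can be done with an rsync dry run; the mount points below are only examples and assume both volumes are mounted locally:

# rsync -ani /mnt/master/ /mnt/slave/

In the itemized output, a 'g' in the change flags marks a group mismatch and an 'o' an owner mismatch, so affected directories show up with the 'g' flag set while regular files do not.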
Patches sent downstream:
https://code.engineering.redhat.com/gerrit/#/c/37445/
https://code.engineering.redhat.com/gerrit/#/c/37540/
Verified on glusterfs-3.4.0.71:

# cd /mnt/master
# mkdir deep1
# useradd shilpa
# chown -R shilpa deep1
# su shilpa
# mkdir deep2

On master:
# ls -l
drwxr-xr-x 2 shilpa root 12 Jan 9 07:05 deep1
$ ls -l /mnt/master/level0/
total 0
drwxrwxr-x 3 shilpa shilpa 38 Jan 9 07:14 level1

On slave:
# ls -l /mnt/slave
drwxr-xr-x 3 shilpa root 36 Jan 9 07:08 deep1
$ ls -l /mnt/slave/level0/
total 0
drwxrwxr-x 3 shilpa shilpa 38 Jan 9 07:14 level1
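Since the failure sets the directory's GID to the owner's UID, the numeric IDs make it easier to spot than the names alone. A quick spot check on both mounts (paths as used above) might look like:

# stat -c '%u:%g %U:%G %n' /mnt/master/deep1 /mnt/slave/deep1

On an affected build, the slave's group column carries the owner's UID rather than the master's GID; with the fix, both paths should report the same owner and group.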
Hi Aravinda, I removed some internal details of how geo-rep works. I've edited the doc text. Can you please review it and sign off?
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2015-0095.html