Bug 1006840 - Dist-geo-rep: After data got synced, on the slave volume a few directories (owner is a non-privileged user) have different permissions than on the master volume
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: geo-replication
Version: 2.1
Hardware: x86_64 Linux
Priority: high
Severity: medium
Target Milestone: ---
Target Release: RHGS 3.1.0
Assigned To: Aravinda VK
QA Contact: Rahul Hinduja
Depends On:
Blocks: 1202842 1223636
 
Reported: 2013-09-11 07:16 EDT by Rachana Patel
Modified: 2015-07-29 00:28 EDT
CC: 8 users

See Also:
Fixed In Version: glusterfs-3.7.0-2.el6rhs
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-07-29 00:28:54 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Rachana Patel 2013-09-11 07:16:47 EDT
Description of problem:
Dist-geo-rep: After data got synced, on the slave volume a few directories (owner is a non-privileged user) have different permissions than on the master volume.

Version-Release number of selected component (if applicable):
3.4.0.32rhs-1.el6rhs.x86_64

How reproducible:
haven't tried

Steps to Reproduce:
1. Create a geo-rep session between the master and slave volumes (the master volume is empty, with no data on it).
2. Start creating data on the master volume.
3. While data creation is in progress, add a node to the master cluster and add bricks from that node (Brick7: 10.70.37.106:/rhs/brick2/1, Brick8: 10.70.37.106:/rhs/brick2/2).
4. Start the rebalance process for the master volume.
5. After some time, stop creating data on the master volume.
6. Once all data is synced to the slave (no more changelogs present in the .processing dir on any RHSS node in the master cluster), verify the data on the master and slave clusters (see the command sketch below).
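
For reference, a minimal command sketch of the steps above. Volume, host, and brick names are taken from this report; the exact geo-rep create options (push-pem) and the prior SSH/pem setup are assumptions, not details confirmed in the report:

# Prerequisite (assumed): passwordless SSH to the slave node and pem
# distribution (e.g. via "gluster system:: execute gsec_create") already done.
gluster volume geo-replication add_change rhsauto031.lab.eng.blr.redhat.com::change create push-pem
gluster volume geo-replication add_change rhsauto031.lab.eng.blr.redhat.com::change start

# While client I/O is running, probe the new node and add its bricks
# (replica 2, so the two bricks are added as a pair), then rebalance.
gluster peer probe 10.70.37.106
gluster volume add-brick add_change 10.70.37.106:/rhs/brick2/1 10.70.37.106:/rhs/brick2/2
gluster volume rebalance add_change start
gluster volume rebalance add_change status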

[root@4VM4 ~]# gluster volume info add_change
 
Volume Name: add_change
Type: Distributed-Replicate
Volume ID: 783e19fe-7228-40e1-a74f-f018f33a55f2
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: 10.70.37.148:/rhs/brick2/1
Brick2: 10.70.37.210:/rhs/brick2/1
Brick3: 10.70.37.202:/rhs/brick2/1
Brick4: 10.70.37.148:/rhs/brick2/2
Brick5: 10.70.37.210:/rhs/brick2/2
Brick6: 10.70.37.202:/rhs/brick2/2
Brick7: 10.70.37.106:/rhs/brick2/1
Brick8: 10.70.37.106:/rhs/brick2/2
Options Reconfigured:
geo-replication.indexing: on
geo-replication.ignore-pid-check: on
changelog.changelog: on

[root@4VM4 ~]# gluster volume geo add_change status
NODE                           MASTER        SLAVE                                              HEALTH    UPTIME               
---------------------------------------------------------------------------------------------------------------------------
4VM4.lab.eng.blr.redhat.com    add_change    ssh://rhsauto031.lab.eng.blr.redhat.com::change    Stable    1 day 17:06:48       
4VM1.lab.eng.blr.redhat.com    add_change    ssh://rhsauto031.lab.eng.blr.redhat.com::change    Stable    03:21:58             
4VM2.lab.eng.blr.redhat.com    add_change    ssh://rhsauto031.lab.eng.blr.redhat.com::change    Stable    1 day 16:49:12       
4VM3.lab.eng.blr.redhat.com    add_change    ssh://rhsauto031.lab.eng.blr.redhat.com::change    Stable    03:19:23

Slave volume info:
[root@rhsauto027 ~]# gluster v info change
 
Volume Name: change
Type: Distribute
Volume ID: f5518c10-6f76-436d-a827-481da4b895b9
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: rhsauto027.lab.eng.blr.redhat.com:/rhs/brick5/3
Brick2: rhsauto026.lab.eng.blr.redhat.com:/rhs/brick5/3
Brick3: rhsauto031.lab.eng.blr.redhat.com:/rhs/brick5/3

Actual results:
master volume                                   | slave volume
/deep/1:                                        | /deep/1:
total 352                                       | total 264
drwxrwxr-x  13 502 502 32768 Sep  9 17:07 2     | drwxrwxr-x  13 502 502 24576 Sep  9 12:52 2
drwxrwxr-x 107 502 502 32768 Sep  9 15:49 etc1  | drwxr-xr-x 107 502 502 24576 Sep  9 11:48 etc1
drwxrwxr-x 107 502 502 32768 Sep  9 16:23 etc10 | drwxr-xr-x 107 502 502 24576 Sep  9 12:08 etc10
drwxrwxr-x 107 502 502 32768 Sep  9 15:58 etc2  | drwxrwxr-x 107 502 502 24576 Sep  9 11:55 etc2
drwxrwxr-x 107 502 502 32768 Sep  9 15:50 etc3  | drwxr-xr-x 107 502 502 24576 Sep  9 11:48 etc3
drwxrwxr-x 107 502 502 32768 Sep  9 15:57 etc4  | drwxr-xr-x 107 502 502 24576 Sep  9 11:54 etc4
drwxrwxr-x 107 502 502 32768 Sep  9 16:03 etc5  | drwxr-xr-x 107 502 502 24576 Sep  9 11:55 etc5
drwxrwxr-x 107 502 502 32768 Sep  9 16:07 etc6  | drwxr-xr-x 107 502 502 24576 Sep  9 11:55 etc6
drwxrwxr-x 107 502 502 32768 Sep  9 16:11 etc7  | drwxr-xr-x 107 502 502 24576 Sep  9 11:57 etc7
drwxrwxr-x 107 502 502 32768 Sep  9 16:15 etc8  | drwxr-xr-x 107 502 502 24576 Sep  9 12:01 etc8
drwxrwxr-x 107 502 502 32768 Sep  9 16:19 etc9
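
A hedged sketch of how a side-by-side permission check like the one above can be produced; the /mnt/master and /mnt/slave mount points and the /tmp output files are hypothetical, not paths taken from this report:

# Hypothetical client mounts of the master and slave volumes.
mount -t glusterfs 10.70.37.148:/add_change /mnt/master
mount -t glusterfs rhsauto031.lab.eng.blr.redhat.com:/change /mnt/slave

# Record mode, owner, group and relative path of every directory, then diff.
find /mnt/master -type d -printf '%m %u %g %P\n' | sort -k4 > /tmp/master_perms.txt
find /mnt/slave  -type d -printf '%m %u %g %P\n' | sort -k4 > /tmp/slave_perms.txt
diff /tmp/master_perms.txt /tmp/slave_perms.txt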


Expected results:
Directory permissions and ownership on the slave volume should match the master volume.

Additional info:
Comment 5 Scott Haines 2013-09-27 13:08:12 EDT
Targeting for 3.0.0 (Denali) release.
Comment 6 Nagaprasad Sathyanarayana 2014-05-06 07:43:38 EDT
Dev ack to 3.0 RHS BZs
Comment 14 Rahul Hinduja 2015-07-16 14:17:49 EDT
Verified with build: glusterfs-3.7.1-10.el6.x86_64

Carried out the steps mentioned in the description. Files were synced to the slave and the arequal checksums match after adding bricks from an existing node and a new node, followed by a rebalance (a rough stand-in for that comparison is sketched below).

Moving the bug to the verified state.
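
For completeness, a rough stand-in for the arequal comparison mentioned above, using only standard utilities (arequal-checksum itself is a separate QE tool and also covers metadata; this only compares file contents). /mnt/master and /mnt/slave are the hypothetical client mounts from the earlier sketch:

# Checksum every regular file on both mounts and diff the sorted lists.
(cd /mnt/master && find . -type f -exec md5sum {} + | sort -k2) > /tmp/master_sums.txt
(cd /mnt/slave  && find . -type f -exec md5sum {} + | sort -k2) > /tmp/slave_sums.txt
diff /tmp/master_sums.txt /tmp/slave_sums.txt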
Comment 17 errata-xmlrpc 2015-07-29 00:28:54 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html
