Bug 1006729 - Dist-geo-rep : first xsync crawl failed to get the meta data changes made to directories.
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: geo-replication
Version: 2.1
Hardware: x86_64 Linux
Priority: medium    Severity: medium
Target Milestone: ---
Target Release: RHGS 3.0.0
Assigned To: Venky Shankar
Bhaskar Bandari
Depends On:
Blocks: 1106604
Reported: 2013-09-11 04:03 EDT by Vijaykumar Koppad
Modified: 2015-05-15 13:43 EDT (History)
9 users

See Also:
Fixed In Version: glusterfs-3.6.0.18-1
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
: 1106604 (view as bug list)
Environment:
Last Closed: 2014-09-22 15:28:50 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Vijaykumar Koppad 2013-09-11 04:03:52 EDT
Description of problem: Metadata changes made to directories through a CIFS mount do not get synced by the xsync crawl.

The corresponding XSYNC-CHANGELOG did not have any "M " (metadata) entries for directories.

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
[root@redwood xsync]# grep "M " XSYNC-CHANGELOG.1378903754 
[root@redwood xsync]# 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
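For reference, a quick way to check whether the crawl recorded any metadata entries at all is sketched below. The working-directory path and the exact record layout are assumptions and depend on the session; adjust them before use.

# Hypothetical xsync working directory for this brick/session; the real
# path lives under the geo-rep working directory and varies per session.
XSYNC_DIR=/var/lib/misc/glusterfsd/<mastervol>/<slave-session>/<brick-hash>/xsync

# Count metadata ("M ") records in each generated changelog.
grep -c '^M ' "$XSYNC_DIR"/XSYNC-CHANGELOG.*

# List the GFIDs that did get a metadata record, to cross-check whether
# any of them resolve to directories on the brick.
awk '$1 == "M" { print $2 }' "$XSYNC_DIR"/XSYNC-CHANGELOG.* | sort -u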


>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
on master 

[root@redwood ~]# ./arequal-checksum /mnt/master/

Entry counts
Regular files   : 6000
Directories     : 304
Symbolic links  : 6505
Other           : 0
Total           : 12809

Metadata checksums
Regular files   : 53ae490
Directories     : 8c3b72
Symbolic links  : 5a815a
Other           : 3e9

Checksums
Regular files   : 88916fc03f4d579a1eb1eb8d2069fffc
Directories     : 6679163d3d6b0d4c
Symbolic links  : 21b555c7d2d176c
Other           : 0
Total           : f242c72c5f62b246


>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

on slave

[root@redmoon ~]# ./arequal-checksum  /mnt/slave/

Entry counts
Regular files   : 6000
Directories     : 304
Symbolic links  : 6505
Other           : 0
Total           : 12809

Metadata checksums
Regular files   : 53ae490
Directories     : 3e9
Symbolic links  : 5a815a
Other           : 3e9

Checksums
Regular files   : 88916fc03f4d579a1eb1eb8d2069fffc
Directories     : 6679163d3d6b0d4c
Symbolic links  : 21b555c7d2d176c
Other           : 0
Total           : f242c72c5f62b246

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
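A quick way to see the mismatch is to diff the two reports directly; only the "Directories" line under "Metadata checksums" should differ in this case. This is a sketch that assumes arequal-checksum sits in /root on both nodes and the volumes are mounted at the paths shown above.

# Compare master and slave arequal output in one step.
diff <(ssh root@redwood ./arequal-checksum /mnt/master/) \
     <(ssh root@redmoon ./arequal-checksum /mnt/slave/)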

Version-Release number of selected component (if applicable): glusterfs-3.4.0.33rhs-1.el6rhs.x86_64


How reproducible: Did not attempt to reproduce the issue.


Steps to Reproduce:
1. Create and start a geo-replication session between master and slave.
2. Create 1000 files through a CIFS mount on the master and let them sync to the slave.
3. Stop the geo-rep session.
4. Randomly change permissions on all the created files and directories.
5. Start the geo-rep session and let it sync to the slave.
6. Compare the arequal-checksum of master and slave (a CLI sketch of these steps follows below).
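A minimal CLI sketch of the steps above. Volume names, hosts and the CIFS share name are placeholders, and session prerequisites (push-pem / passwordless SSH between master and slave) are assumed to be in place.

# 1. Create and start the geo-rep session.
gluster volume geo-replication master slavehost::slave create push-pem
gluster volume geo-replication master slavehost::slave start

# 2. Create directories and files through a CIFS mount of the master volume.
mount -t cifs //masterhost/gluster-master /mnt/cifs -o username=admin
mkdir -p /mnt/cifs/dir{0..9}
for i in $(seq 1 1000); do echo data > /mnt/cifs/dir$((i % 10))/file$i; done

# 3. Stop the session before making metadata-only changes.
gluster volume geo-replication master slavehost::slave stop

# 4. Randomly change permissions on all files and directories
#    (owner bits kept at rwx so the traversal keeps working).
find /mnt/cifs | while read -r f; do chmod 7$((RANDOM % 8))$((RANDOM % 8)) "$f"; done

# 5. Restart the session and let the xsync crawl pick up the changes.
gluster volume geo-replication master slavehost::slave start

# 6. Compare checksums of master and slave mounts.
./arequal-checksum /mnt/master/    # on a master client
./arequal-checksum /mnt/slave/     # on a slave client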


Actual results: Metadata changes made to directories did not get synced to the slave through the xsync crawl.


Expected results: Metadata changes made to directories should be synced as well.


Additional info:
Comment 2 Scott Haines 2013-09-27 13:08:11 EDT
Targeting for 3.0.0 (Denali) release.
Comment 3 Amar Tumballi 2013-11-02 12:36:34 EDT
Can this be tested with 3.4.0.37rhs or later? We have made some fixes to metadata syncing in this part of the code.
Comment 4 Vijaykumar Koppad 2013-11-08 03:38:04 EST
This also happens with build 3.4.0.39rhs, and even with a FUSE mount.
Comment 5 Nagaprasad Sathyanarayana 2014-05-06 07:43:42 EDT
Dev ack to 3.0 RHS BZs
Comment 9 Aravinda VK 2014-06-13 03:28:11 EDT
Downstream patch sent for review: https://code.engineering.redhat.com/gerrit/#/c/26832/
Comment 10 ssamanta 2014-06-13 07:47:08 EDT
As discussed with Engineering and Product Management, marking this BZ as a blocker.
Comment 11 Vijaykumar Koppad 2014-06-19 07:59:09 EDT
Tried on build glusterfs-3.6.0.18-1.el6rhs.

It syncs all the data and metadata, but it takes a lot of time.

# gluster v geo master 10.70.43.111::slave status

MASTER NODE                 MASTER VOL    MASTER BRICK                 SLAVE                  STATUS     CHECKPOINT STATUS                                                           CRAWL STATUS
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
targarean.blr.redhat.com    master        /bricks/brick1/master_b1     10.70.43.131::slave    Active     checkpoint as of 2014-06-19 12:26:45 is completed at 2014-06-19 14:54:43    Hybrid Crawl
targarean.blr.redhat.com    master        /bricks/brick2/master_b5     10.70.43.131::slave    Active     checkpoint as of 2014-06-19 12:26:45 is completed at 2014-06-19 14:54:43    Hybrid Crawl
targarean.blr.redhat.com    master        /bricks/brick3/master_b9     10.70.43.131::slave    Active     checkpoint as of 2014-06-19 12:26:45 is completed at 2014-06-19 14:54:43    Hybrid Crawl
stark.blr.redhat.com        master        /bricks/brick1/master_b3     10.70.42.236::slave    Active     checkpoint as of 2014-06-19 12:26:45 is completed at 2014-06-19 14:53:59    Hybrid Crawl
stark.blr.redhat.com        master        /bricks/brick2/master_b7     10.70.42.236::slave    Active     checkpoint as of 2014-06-19 12:26:45 is completed at 2014-06-19 14:53:59    Hybrid Crawl
stark.blr.redhat.com        master        /bricks/brick3/master_b11    10.70.42.236::slave    Active     checkpoint as of 2014-06-19 12:26:45 is completed at 2014-06-19 14:53:59    Hybrid Crawl
spiderman.blr.redhat.com    master        /bricks/brick1/master_b4     10.70.43.165::slave    Passive    N/A                                                                         N/A
spiderman.blr.redhat.com    master        /bricks/brick2/master_b8     10.70.43.165::slave    Passive    N/A                                                                         N/A
spiderman.blr.redhat.com    master        /bricks/brick3/master_b12    10.70.43.165::slave    Passive    N/A                                                                         N/A
shaktiman.blr.redhat.com    master        /bricks/brick1/master_b2     10.70.43.111::slave    Passive    N/A                                                                         N/A
shaktiman.blr.redhat.com    master        /bricks/brick2/master_b6     10.70.43.111::slave    Passive    N/A                                                                         N/A
shaktiman.blr.redhat.com    master        /bricks/brick3/master_b10    10.70.43.111::slave    Passive    N/A                                                                         N/A



From the checkpoint output, it looks like it took around two and a half hours to completely sync the metadata changes for 5K files.

Since it eventually synced, verifying this bug. The performance issue is tracked in Bug 1111171.
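For reference, the checkpoint used for this verification can be set and polled as sketched here, using the same volume/slave names as the status output above.

# Set a checkpoint at the current time on the session.
gluster volume geo-replication master 10.70.43.111::slave config checkpoint now

# Poll until "checkpoint ... is completed at ..." appears for every
# active brick in the status output.
watch -n 60 'gluster volume geo-replication master 10.70.43.111::slave status'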
Comment 16 errata-xmlrpc 2014-09-22 15:28:50 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-1278.html
