Bug 1031664 - dist-geo-rep: Slave has more files than master after complete sync without any deletes/renames happening from master side
Status: CLOSED EOL
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: geo-replication
2.1
x86_64 Linux
Priority: medium, Severity: high
: ---
: ---
Assigned To: Bug Updates Notification Mailing List
storage-qa-internal@redhat.com
consistency
: ZStream
Depends On:
Blocks:
Reported: 2013-11-18 08:36 EST by M S Vishwanath Bhat
Modified: 2016-05-31 21:57 EDT
4 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed:
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description M S Vishwanath Bhat 2013-11-18 08:36:22 EST
Description of problem:
After a complete geo-rep data sync, the slave has more files than the master. There were some node reboots during the sync, but no deletes or renames were performed on the master volume's data set.


Version-Release number of selected component (if applicable):
glusterfs-3.4.0.44rhs-1.el6rhs.x86_64

How reproducible:
Not sure; reproduced only twice so far: once with glusterfs-3.4.0.43rhs and now with glusterfs-3.4.0.44rhs-1.el6rhs.x86_64.

Steps to Reproduce:
1. Create and start a geo-rep session using the tar+ssh method between a 2*2 dist-rep master and a 2*2 dist-rep slave.
2. While the session is in progress and data is being created, keep rebooting a single node many times. Use smallfiles_cli.py and crefi.py for data creation.
3. After the data creation is over, set the checkpoint.
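The steps above can be sketched with the gluster CLI (volume and slave names are taken from the status output below; the exact config option names may differ across RHS releases, and this assumes pem keys and passwordless SSH are already set up):

```
# Create and start a tar+ssh geo-rep session.
gluster volume geo-replication master falcon::slave create push-pem
gluster volume geo-replication master falcon::slave config use_tarssh true
gluster volume geo-replication master falcon::slave start

# ... generate data on the master mount with smallfiles_cli.py / crefi.py,
#     rebooting one node repeatedly while the sync is in progress ...

# After data creation is over, set a checkpoint and wait for it to complete.
gluster volume geo-replication master falcon::slave config checkpoint now
gluster volume geo-replication master falcon::slave status detail
```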

Actual results:
After the checkpoint completed, these were the arequal checksums on the master and the slave.

[root@lightning master]# cat /tmp/areequal-checksum-master 
Entry counts
Regular files   : 457616
Directories     : 2062
Symbolic links  : 2940
Other           : 0
Total           : 462618

Metadata checksums
Regular files   : 92e9
Directories     : 3e9
Symbolic links  : 3e9
Other           : 3e9

Checksums
Regular files   : 7d4ed1e031bee8bf2610acbb012a3a32
Directories     : 306120322c56586f
Symbolic links  : a047250611b353f
Other           : 0



[root@lightning master]# cat /tmp/areequal-checksum-slave 
Entry counts
Regular files   : 457618
Directories     : 2062
Symbolic links  : 2940
Other           : 0
Total           : 462620

Metadata checksums
Regular files   : 92e9
Directories     : 3e9
Symbolic links  : 3e9
Other           : 3e9

Checksums
Regular files   : 7d4ed1e031bee8bf2610acbb012a3a32
Directories     : 5866601c533a3d1d
Symbolic links  : a047250611b353f
Other           : 0
Total           : 93c6f1702b5daaf



There are two more regular files on the slave.

[root@lightning master]# diff /mnt/master/find-master /tmp/find-slave 
60990a60991
> ./level00/5283d7b8~~IVNU5550EX
211516a211518
> ./level03/52842dd6~~2D6F4SGUR2
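The diff above compares sorted `find` listings from the two mounts; the extra entries amount to a set difference. A minimal sketch in Python (the listings here are illustrative stand-ins, using one of the file names from the diff):

```python
def slave_only_entries(master_paths, slave_paths):
    """Return paths present on the slave but missing on the master,
    mirroring what `diff find-master find-slave` reports with '>'."""
    return sorted(set(slave_paths) - set(master_paths))

# Illustrative listings; real ones come from `find . | sort` on each mount.
master = ["./level00/a", "./level03/b"]
slave = ["./level00/5283d7b8~~IVNU5550EX", "./level00/a", "./level03/b"]
print(slave_only_entries(master, slave))
# → ['./level00/5283d7b8~~IVNU5550EX']
```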


These are the gfids of the extra files on the slave.

[root@lightning slave]# getfattr -d -n glusterfs.gfid -e hex /mnt/slave/level00/5283d7b8~~IVNU5550EX
getfattr: Removing leading '/' from absolute path names
# file: mnt/slave/level00/5283d7b8~~IVNU5550EX
glusterfs.gfid=0x6f7b88f83bf64db7a01ae54f8a45cec5

[root@lightning slave]# getfattr -d -n glusterfs.gfid -e hex /mnt/slave/level03/52842dd6~~2D6F4SGUR2
getfattr: Removing leading '/' from absolute path names
# file: mnt/slave/level03/52842dd6~~2D6F4SGUR2
glusterfs.gfid=0x3dd0abbdedab4ef58299c4ecd35e96fc
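For reference, each gfid maps to a hard link under a brick's `.glusterfs` directory (the standard `.glusterfs/<xx>/<yy>/<uuid>` layout), which is one way to check on a brick whether a given gfid was ever created there. A small sketch of that mapping, using the first gfid above and a brick path from the status output below:

```python
import uuid

def gfid_backend_path(brick_root, gfid_hex):
    """Map a glusterfs.gfid xattr value (hex, as printed by `getfattr -e hex`)
    to the brick-internal hard-link path .glusterfs/<xx>/<yy>/<uuid>."""
    gfid = str(uuid.UUID(gfid_hex.removeprefix("0x")))
    return f"{brick_root}/.glusterfs/{gfid[:2]}/{gfid[2:4]}/{gfid}"

print(gfid_backend_path("/rhs/bricks/brick0",
                        "0x6f7b88f83bf64db7a01ae54f8a45cec5"))
# → /rhs/bricks/brick0/.glusterfs/6f/7b/6f7b88f8-3bf6-4db7-a01a-e54f8a45cec5
```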


[root@spitfire ]# gluster v geo master falcon::slave status detail
 
MASTER NODE                MASTER VOL    MASTER BRICK          SLAVE                 STATUS     CHECKPOINT STATUS                                                           CRAWL STATUS       FILES SYNCD    FILES PENDING    BYTES PENDING    DELETES PENDING    FILES SKIPPED   
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
spitfire.blr.redhat.com    master        /rhs/bricks/brick0    falcon::slave         Active     checkpoint as of 2013-11-14 16:25:54 is completed at 2013-11-14 16:26:05    Changelog Crawl    428809         0                0                0                  0               
harrier.blr.redhat.com     master        /rhs/bricks/brick2    hornet::slave         Active     checkpoint as of 2013-11-14 16:25:54 is completed at 2013-11-14 21:44:26    Changelog Crawl    274809         0                0                0                  191             
mustang.blr.redhat.com     master        /rhs/bricks/brick1    interceptor::slave    Passive    N/A                                                                         N/A                0              0                0                0                  0               
typhoon.blr.redhat.com     master        /rhs/bricks/brick3    lightning::slave      Passive    N/A                                                                         N/A                385350         0                0                0                  0               



On the master, these files were not present at all.



Expected results:
The slave should not have more files than the master; they should be exactly in sync.

Additional info:


I will archive all the files from master and slave, and possibly the gfids of all the files on master and slave as well.
Comment 2 Aravinda VK 2015-11-25 03:50:16 EST
Closing this bug since RHGS 2.1 release reached EOL. Required bugs are cloned to RHGS 3.1. Please re-open this issue if found again.
Comment 3 Aravinda VK 2015-11-25 03:51:43 EST
Closing this bug since RHGS 2.1 release reached EOL. Required bugs are cloned to RHGS 3.1. Please re-open this issue if found again.
