Bug 1581047 - [geo-rep+tiering]: Hot and Cold tier brick changelogs report rsync failure
Summary: [geo-rep+tiering]: Hot and Cold tier brick changelogs report rsync failure
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: geo-replication
Version: rhgs-3.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.4.0
Assignee: Sunny Kumar
QA Contact: Rochelle
URL:
Whiteboard:
Depends On:
Blocks: 1293332 1503137 1597563
 
Reported: 2018-05-22 05:27 UTC by Rochelle
Modified: 2018-09-14 04:11 UTC
CC: 11 users

Fixed In Version: glusterfs-3.12.2-14
Doc Type: Bug Fix
Doc Text:
Clone Of:
Cloned to: 1597563
Environment:
Last Closed: 2018-09-04 06:48:11 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2018:2607 0 None None None 2018-09-04 06:49:57 UTC

Description Rochelle 2018-05-22 05:27:36 UTC
Description of problem:
=======================
The following error message -- "changelogs could not be processed completely" -- was logged for bricks in both the hot tier and the cold tier.

Master volume:
--------------
Volume Name: master
Type: Tier
Volume ID: c6233039-bcdc-4b3c-b2b9-e4d7a7bccb4e
Status: Started
Snapshot Count: 0
Number of Bricks: 9
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distribute
Number of Bricks: 3
Brick1: 10.70.43.190:/rhs/brick2/b9
Brick2: 10.70.42.58:/rhs/brick2/b8
Brick3: 10.70.42.29:/rhs/brick2/b7
Cold Tier:
Cold Tier Type : Disperse
Number of Bricks: 1 x (4 + 2) = 6
Brick4: 10.70.42.29:/rhs/brick1/b1
Brick5: 10.70.42.58:/rhs/brick1/b2
Brick6: 10.70.43.190:/rhs/brick1/b3
Brick7: 10.70.41.160:/rhs/brick1/b4
Brick8: 10.70.42.79:/rhs/brick1/b5
Brick9: 10.70.42.200:/rhs/brick1/b6
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
cluster.tier-mode: cache
features.ctr-enabled: on
transport.address-family: inet
nfs.disable: on
cluster.enable-shared-storage: enable
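
For reference, a volume with this layout can be assembled roughly as follows (a minimal sketch, not the exact commands used here; brick paths are taken from the volume info above):

# Cold tier: 1 x (4 + 2) disperse volume
gluster volume create master disperse 6 redundancy 2 \
    10.70.42.29:/rhs/brick1/b1 10.70.42.58:/rhs/brick1/b2 \
    10.70.43.190:/rhs/brick1/b3 10.70.41.160:/rhs/brick1/b4 \
    10.70.42.79:/rhs/brick1/b5 10.70.42.200:/rhs/brick1/b6
gluster volume start master

# Hot tier: attach the 3-brick distribute tier, then match
# the cache mode shown under Options Reconfigured
gluster volume tier master attach \
    10.70.43.190:/rhs/brick2/b9 10.70.42.58:/rhs/brick2/b8 \
    10.70.42.29:/rhs/brick2/b7
gluster volume set master cluster.tier-mode cache

The remaining options listed under Options Reconfigured (features.ctr-enabled, changelog.changelog, geo-replication.indexing) are typically set automatically when the tier is attached and the geo-rep session is created.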


Changelogs not processed completely:
1. From hot tier

ssh%3A%2F%2Froot%4010.70.42.53%3Agluster%3A%2F%2F127.0.0.1%3Aslave.log:[2018-05-18 05:58:58.644992] E [master(/rhs/brick2/b7):1249:process] _GMaster: changelogs could not be processed completely - moving on...	files=['CHANGELOG.1526621118', 'CHANGELOG.1526621133', 'CHANGELOG.1526621148', 'CHANGELOG.1526621163', 'CHANGELOG.1526621178', 'CHANGELOG.1526621194', 'CHANGELOG.1526621209', 'CHANGELOG.1526621224', 'CHANGELOG.1526621239', 'CHANGELOG.1526621254', 'CHANGELOG.1526621269', 'CHANGELOG.1526621284', 'CHANGELOG.1526621299', 'CHANGELOG.1526621314', 'CHANGELOG.1526621329']

2. From cold tier

ssh%3A%2F%2Froot%4010.70.42.53%3Agluster%3A%2F%2F127.0.0.1%3Aslave.log:[2018-05-18 06:09:00.369133] E [master(/rhs/brick1/b1):1249:process] _GMaster: changelogs could not be processed completely - moving on...	files=['CHANGELOG.1526621118', 'CHANGELOG.1526621133', 'CHANGELOG.1526621148', 'CHANGELOG.1526621163', 'CHANGELOG.1526621178', 'CHANGELOG.1526621194', 'CHANGELOG.1526621209']
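
All such occurrences can be pulled from the geo-rep master logs with a grep along these lines (the directory shown is the usual location for session logs of the "master" volume; adjust for your layout):

grep "changelogs could not be processed" \
    /var/log/glusterfs/geo-replication/master/*.log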

--------------------------------------------------------------------------------

Version-Release number of selected component (if applicable):
=============================================================
[root@dhcp42-29 tmp]# rpm -qa | grep gluster
vdsm-gluster-4.19.43-2.3.el7rhgs.noarch
glusterfs-cli-3.12.2-10.el7rhgs.x86_64
glusterfs-server-3.12.2-10.el7rhgs.x86_64
python2-gluster-3.12.2-10.el7rhgs.x86_64
glusterfs-rdma-3.12.2-10.el7rhgs.x86_64
glusterfs-libs-3.12.2-10.el7rhgs.x86_64
glusterfs-client-xlators-3.12.2-10.el7rhgs.x86_64
libvirt-daemon-driver-storage-gluster-3.9.0-14.el7_5.4.x86_64
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
glusterfs-3.12.2-10.el7rhgs.x86_64
gluster-nagios-addons-0.2.10-2.el7rhgs.x86_64
glusterfs-api-3.12.2-10.el7rhgs.x86_64
glusterfs-geo-replication-3.12.2-10.el7rhgs.x86_64
glusterfs-fuse-3.12.2-10.el7rhgs.x86_64

How reproducible:
=================
1/1


Steps to Reproduce:
===================
1. Create a master and a slave cluster from 6 nodes (each)
2. Create and start the master volume (Tiered: cold-tier 1 x (4+2) and hot-tier 1 x 3)
3. Create and start the slave volume (Tiered: cold-tier 1 x (4+2) and hot-tier 1 x 3)
4. Enable quota on the master volume
5. Enable shared storage on the master volume
6. Set up a geo-rep session between the master and slave volumes
7. Mount the master volume on a client
8. Create data from the master client
9. Verify that the arequal checksum matches, i.e. the data was synced (see the command sketch below)
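
A hedged sketch of steps 4-9 as commands (the volume names "master" and "slave" and the slave node 10.70.42.53 come from this report; the actual data set is not recorded, so the dd loop below is illustrative only):

# On a master node: quota, shared storage, geo-rep session (steps 4-6)
gluster volume quota master enable
gluster volume set all cluster.enable-shared-storage enable
gluster system:: execute gsec_create
gluster volume geo-replication master 10.70.42.53::slave create push-pem
gluster volume geo-replication master 10.70.42.53::slave start

# On a client: mount the master volume and create data (steps 7-8)
mount -t glusterfs 10.70.42.29:/master /mnt/master
for i in $(seq 1 100); do
    dd if=/dev/urandom of=/mnt/master/file.$i bs=1M count=1
done

# Step 9: compare arequal checksums of the master and slave mounts
# (assumes the slave volume is mounted at /mnt/slave in the same way)
arequal-checksum -p /mnt/master
arequal-checksum -p /mnt/slave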

Actual results:
===============
Changelogs for bricks in both the hot and cold tiers report rsync failures ("changelogs could not be processed completely - moving on...")

Expected results:
================
There should be no rsync failures; changelogs from every brick should be processed completely

Comment 20 errata-xmlrpc 2018-09-04 06:48:11 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607

