Bug 1029104 - dist-geo-rep: For a node with passive status, "crawl status" is listed as "Hybrid Crawl"
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: geo-replication
Version: 2.1
Hardware: x86_64 Linux
Priority: high    Severity: low
Target Milestone: ---
Target Release: RHGS 3.1.0
Assigned To: Aravinda VK
QA Contact: Rahul Hinduja
Whiteboard: status, node-failover
Depends On: 1064309
Blocks: 1202842 1223636
Reported: 2013-11-11 11:46 EST by M S Vishwanath Bhat
Modified: 2016-05-31 21:56 EDT
CC List: 6 users

See Also:
Fixed In Version: glusterfs-3.7.0-2.el6rhs
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-07-29 00:29:51 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description M S Vishwanath Bhat 2013-11-11 11:46:06 EST
Description of problem:
For a node with passive status, the crawl status is listed as "Hybrid Crawl", whereas it should be "N/A", as it is on the other passive node.

Version-Release number of selected component (if applicable):
glusterfs-3.4.0.42rhs-1.el6rhs.x86_64


How reproducible:
Noticed it once after a node reboot; not sure how often it reproduces.

Steps to Reproduce:

Not sure of the exact reproduction steps; I noticed it after doing the following (see the command sketch after this list).

1. Create and start a geo-rep session between a 2x2 dist-rep master volume and a 2x2 dist-rep slave volume. Make sure to turn on the use-tarssh option before starting the geo-rep session.
2. Now copy /etc over to the master many times, create some tar files, and start creating small files.
3. Now reboot a *passive* node. When it comes back up, it will still be Passive, with crawl status N/A.
4. After the passive node is up, wait some time and then reboot the *active* node. Make sure there is no split-brain between those bricks during the reboot; to ensure this, stop all I/O on the mount point before the reboot.
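A rough command sketch of the above, for reference only. Assumptions: master volume "master", slave volume "slave", slave host falcon (as in the status output below), master mounted at /mnt/master, and the tar-over-ssh option spelled use_tarssh in this release's geo-rep config (the spelling varies between glusterfs versions).

# 1. Create and start the geo-rep session, enabling tar over ssh first
gluster volume geo-replication master falcon::slave create push-pem
gluster volume geo-replication master falcon::slave config use_tarssh true
gluster volume geo-replication master falcon::slave start

# 2. Generate data on the master mount point
for i in $(seq 1 20); do cp -a /etc /mnt/master/etc.$i; done
tar -czf /mnt/master/etc.tar.gz /etc
for i in $(seq 1 10000); do echo x > /mnt/master/small.$i; done

# 3. Reboot a Passive node (e.g. mustang); once it is back, confirm it is still Passive with crawl status N/A
gluster volume geo-replication master falcon::slave status detail

# 4. Stop all I/O on the mount point, reboot the Active node of the same replica pair
#    (e.g. spitfire), and re-check the status once it is back online
gluster volume geo-replication master falcon::slave status detail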

Actual results:
After the active node came back online and some time had passed, this was the result I saw.


[root@mustang ~]# gluster v geo master falcon::slave status detail
 
MASTER NODE                MASTER VOL    MASTER BRICK          SLAVE                 STATUS     CRAWL STATUS       FILES SYNCD    FILES PENDING    BYTES PENDING    DELETES PENDING    FILES SKIPPED   
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
mustang.blr.redhat.com     master        /rhs/bricks/brick1    interceptor::slave    Passive    Hybrid Crawl       9025           0                0                0                  0               
harrier.blr.redhat.com     master        /rhs/bricks/brick2    hornet::slave         Active     Changelog Crawl    15889          0                0                0                  0               
spitfire.blr.redhat.com    master        /rhs/bricks/brick0    falcon::slave         Active     Changelog Crawl    15609          0                0                0                  340             
typhoon.blr.redhat.com     master        /rhs/bricks/brick3    lightning::slave      Passive    N/A                0              0                0                0                  0 

Notice that the crawl status for one of the Passive nodes is "Hybrid Crawl", while for the other Passive node it is "N/A".

Expected results:
Passive nodes should have "N/A" as their crawl status.

Additional info:
Comment 4 Rahul Hinduja 2015-07-04 04:36:22 EDT
Verified with the build: glusterfs-3.7.1-7.el6rhs.x86_64

Tried the steps mentioned in the description and also rebooted the Passive nodes multiple times. The crawl status of the Passive bricks remains N/A (see the check below).
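For reference, a quick way to eyeball this during the reboot loop (illustrative only; assumes the same volume and host names as in the sketch above, and simply filters the Passive rows):

gluster volume geo-replication master falcon::slave status detail | grep -i passive

Each Passive row should keep showing N/A in the CRAWL STATUS column, both before and after the node reboots.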

Moving the bug to verified state
Comment 7 errata-xmlrpc 2015-07-29 00:29:51 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html
