Description of problem:
For a node with Passive status, the crawl status is listed as "Hybrid Crawl", whereas it should be "N/A", as it is for the other Passive node.

Version-Release number of selected component (if applicable):
glusterfs-3.4.0.42rhs-1.el6rhs.x86_64

How reproducible:
Noticed once after a node reboot; not sure how often it occurs.

Steps to Reproduce:
Not certain these are the minimal steps; this is what was done when the issue was noticed.
1. Create and start a geo-rep session between a 2x2 dist-rep master and a 2x2 dist-rep slave. Make sure to turn on the use-tarssh option before starting the geo-rep session (see the example commands under Additional info).
2. Copy /etc to the master many times over, create some files from the tar files, and create small files.
3. Reboot a *Passive* node. When it comes back up, it is still Passive with a crawl status of N/A.
4. Some time after the Passive node is back up, reboot the Active node. Make sure there is no split brain between those bricks during the reboot; to ensure this, stop all I/O on the mount point before the reboot.

Actual results:
When the Active node came back online, after some time, this was the result:

[root@mustang ~]# gluster v geo master falcon::slave status detail

MASTER NODE              MASTER VOL    MASTER BRICK          SLAVE                 STATUS     CRAWL STATUS       FILES SYNCD    FILES PENDING    BYTES PENDING    DELETES PENDING    FILES SKIPPED
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
mustang.blr.redhat.com   master        /rhs/bricks/brick1    interceptor::slave    Passive    Hybrid Crawl       9025           0                0                0                  0
harrier.blr.redhat.com   master        /rhs/bricks/brick2    hornet::slave         Active     Changelog Crawl    15889          0                0                0                  0
spitfire.blr.redhat.com  master        /rhs/bricks/brick0    falcon::slave         Active     Changelog Crawl    15609          0                0                0                  340
typhoon.blr.redhat.com   master        /rhs/bricks/brick3    lightning::slave      Passive    N/A                0              0                0                0                  0

Notice that the crawl status of one Passive node is "Hybrid Crawl", while for the other Passive node it is "N/A".

Expected results:
Passive nodes should have "N/A" as their crawl status.

Additional info:
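For reference, a geo-rep session with tar-over-ssh enabled can be set up roughly as follows. This is only a sketch based on the volume and slave host names seen in this report (master, falcon::slave); the option key spelling (use_tarssh) is an assumption and may differ between builds.

[root@mustang ~]# gluster volume geo-replication master falcon::slave create push-pem
# enable tar-over-ssh as the sync method before starting the session (assumed option key)
[root@mustang ~]# gluster volume geo-replication master falcon::slave config use_tarssh true
[root@mustang ~]# gluster volume geo-replication master falcon::slave start
# check per-brick status and crawl status
[root@mustang ~]# gluster volume geo-replication master falcon::slave status detail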
Verified with the build: glusterfs-3.7.1-7.el6rhs.x86_64. Tried the steps mentioned in the description and also rebooted the Passive nodes multiple times. The Passive bricks' crawl status remains N/A. Moving the bug to verified state.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHSA-2015-1495.html