Description of problem:
In distributed geo-replication, the geo-rep status command returns a 0 exit code on failure.

Version-Release number of selected component (if applicable):
glusterfs-3.4.0.57rhs-1.el6rhs.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Run geo-rep status with an invalid session.

Actual results:
[root@spitfire ]# gluster v geo master falcon::slave status
MASTER NODE              MASTER VOL    MASTER BRICK          SLAVE                 STATUS     CHECKPOINT STATUS    CRAWL STATUS
-------------------------------------------------------------------------------------------------------------------------------------
spitfire.blr.redhat.com  master        /rhs/bricks/brick0    falcon::slave         Active     N/A                  Changelog Crawl
harrier.blr.redhat.com   master        /rhs/bricks/brick2    hornet::slave         Active     N/A                  Changelog Crawl
typhoon.blr.redhat.com   master        /rhs/bricks/brick3    lightning::slave      Passive    N/A                  N/A
mustang.blr.redhat.com   master        /rhs/bricks/brick1    interceptor::slave    Passive    N/A                  N/A
[root@spitfire ]# echo $?
0
[root@spitfire ]# gluster v geo master falcon::slave1 status
No active geo-replication sessions between master and falcon::slave1
[root@spitfire ]# echo $?
0
[root@spitfire ]# gluster v geo master falcon::slave1 status detail
No active geo-replication sessions between master and falcon::slave1
[root@spitfire ]# echo $?
0

Expected results:
The command should return a non-zero exit code when no active session exists between the given master and slave.

Additional info:
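To illustrate why this matters for scripting and monitoring: a shell script can only branch on a command's exit code, not on its error message. The sketch below uses hypothetical stub functions standing in for the gluster CLI (gluster itself is not invoked); `buggy_status` mirrors the current behavior (error message but exit 0), while `fixed_status` shows the expected behavior (exit 1 on failure).

```shell
#!/bin/sh
# Stub standing in for the current (buggy) CLI: prints the error but exits 0.
buggy_status() {
    echo "No active geo-replication sessions between master and falcon::slave1"
    return 0
}

# Stub standing in for the expected (fixed) CLI: same message, non-zero exit.
fixed_status() {
    echo "No active geo-replication sessions between master and falcon::slave1"
    return 1
}

# A monitoring script that branches on the exit code cannot detect the
# failure with the buggy behavior:
if buggy_status >/dev/null 2>&1; then
    echo "buggy: session reported healthy (wrong)"
fi

if ! fixed_status >/dev/null 2>&1; then
    echo "fixed: failure detected"
fi
```

With the current behavior the first branch is taken even though no session exists, so alerting based on `$?` silently misses the failure.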
Closing this bug since the RHGS 2.1 release has reached EOL. The required bugs have been cloned to RHGS 3.1. Please re-open this issue if it is found again.