Description of problem:
If there are multiple slaves for the same master volume and the status of all the geo-rep sessions from that volume is requested, the output is mixed up, like:

NODE                       MASTER      SLAVE                          HEALTH        UPTIME
----------------------------------------------------------------------------------------------------
shaktiman.blr.redhat.com   mastervol   ssh://10.70.43.23::slavevol    Not Started   N/A
shaktiman.blr.redhat.com   mastervol   ssh://10.70.43.23::imaster     Stable        21:30:34
stark.blr.redhat.com       mastervol   ssh://10.70.43.23::slavevol    Not Started   N/A
stark.blr.redhat.com       mastervol   ssh://10.70.43.23::imaster     Stable        21:30:33
spartacus.blr.redhat.com   mastervol   ssh://10.70.43.23::slavevol    Not Started   N/A
spartacus.blr.redhat.com   mastervol   ssh://10.70.43.23::imaster     Stable        21:43:14
snow.blr.redhat.com        mastervol   ssh://10.70.43.23::slavevol    Not Started   N/A
snow.blr.redhat.com        mastervol   ssh://10.70.43.23::imaster     Stable        21:43:14

It shouldn't be like the above; the status of each geo-rep relationship should be logically separated for more clarity.

Version-Release number of selected component (if applicable): 3.4.0.12rhs.beta6-1.el6rhs.x86_64

How reproducible:
Happens every time

Steps to Reproduce:
1. Create and start a one-to-many geo-rep relationship (one master volume, multiple slaves).
2. Check the status of geo-rep.

Actual results:
It gives a cluttered status, with rows from different sessions interleaved.

Expected results:
Each geo-rep session should be logically separated.

Additional info:
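The expected grouping could be sketched as a small post-processing step over the status rows. This is only an illustration of the desired "logically separated" layout, not the actual gluster CLI implementation; the row data is copied from the output above.

```python
# Hedged sketch: group geo-rep status rows by (master, slave) session so
# each relationship is printed as its own block. Illustrative only; not
# the gluster CLI's real code path.
from collections import OrderedDict

rows = [
    ("shaktiman.blr.redhat.com", "mastervol", "ssh://10.70.43.23::slavevol", "Not Started", "N/A"),
    ("shaktiman.blr.redhat.com", "mastervol", "ssh://10.70.43.23::imaster", "Stable", "21:30:34"),
    ("stark.blr.redhat.com", "mastervol", "ssh://10.70.43.23::slavevol", "Not Started", "N/A"),
    ("stark.blr.redhat.com", "mastervol", "ssh://10.70.43.23::imaster", "Stable", "21:30:33"),
    ("spartacus.blr.redhat.com", "mastervol", "ssh://10.70.43.23::slavevol", "Not Started", "N/A"),
    ("spartacus.blr.redhat.com", "mastervol", "ssh://10.70.43.23::imaster", "Stable", "21:43:14"),
    ("snow.blr.redhat.com", "mastervol", "ssh://10.70.43.23::slavevol", "Not Started", "N/A"),
    ("snow.blr.redhat.com", "mastervol", "ssh://10.70.43.23::imaster", "Stable", "21:43:14"),
]

def group_by_session(rows):
    """Return an ordered mapping of (master, slave) -> list of node rows."""
    sessions = OrderedDict()
    for node, master, slave, health, uptime in rows:
        sessions.setdefault((master, slave), []).append((node, health, uptime))
    return sessions

def print_grouped(rows):
    """Print one separated block per geo-rep session."""
    for (master, slave), node_rows in group_by_session(rows).items():
        print("MASTER: %s    SLAVE: %s" % (master, slave))
        print("-" * 70)
        for node, health, uptime in node_rows:
            print("%-28s %-13s %s" % (node, health, uptime))
        print()

if __name__ == "__main__":
    print_grouped(rows)
```

With this grouping, each session (e.g. mastervol -> slavevol and mastervol -> imaster) gets its own header and block instead of having its rows interleaved per node.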
Per discussion with Amar/Venky, wontfix in cli.