Description of problem:
After adding bricks to the master volume, the status of the existing geo-rep session goes to faulty for about a minute, and the geo-rep log reports:

[2013-07-16 16:31:46.466895] E [syncdutils(/bricks/brick2):200:log_raise_exception] <top>: glusterfs session went down [ECONNABORTED]
[2013-07-16 16:31:46.467538] I [syncdutils(/bricks/brick2):158:finalize] <top>: exiting.
[2013-07-16 16:31:46.481292] I [monitor(monitor):81:set_state] Monitor: new state: faulty

Version-Release number of selected component (if applicable):
3.4.0.12rhs.beta4-1.el6rhs.x86_64

How reproducible:
Happens every time.

Steps to Reproduce:
1. Create and start a geo-rep session between the master (dist-rep) and the slave.
2. Wait for the geo-rep status to become stable.
3. Add bricks to the master volume.
4. Check the geo-rep status for the master volume.
(A sketch of the corresponding commands is given under Additional info below.)

Actual results:
After add-brick, the geo-rep status becomes faulty for some time.

Expected results:
Geo-rep status should remain stable after add-brick.

Additional info:
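A minimal sketch of the CLI sequence described in the steps above (volume names, host names, and the new brick paths are placeholders, not taken from the original setup; exact geo-rep create options may differ between builds):

# on a master node: create and start the geo-rep session
# (create push-pem assumes the passwordless-SSH setup from the admin guide)
gluster volume geo-replication mastervol slavenode::slavevol create push-pem
gluster volume geo-replication mastervol slavenode::slavevol start

# wait until the session status settles, then verify
gluster volume geo-replication mastervol slavenode::slavevol status

# expand the dist-rep master volume with a new replica pair (placeholder bricks)
gluster volume add-brick mastervol node3:/bricks/brick3 node4:/bricks/brick3

# re-check geo-rep status; this is where it transiently shows faulty
gluster volume geo-replication mastervol slavenode::slavevol status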
Tried on glusterfs-3.4.0.15rhs-1
Closing this bug since the RHGS 2.1 release has reached EOL. Required bugs have been cloned to RHGS 3.1. Please re-open this issue if it is found again.