Red Hat Bugzilla – Bug 984928
Dist-geo-rep: After add-brick, the geo-rep session goes to faulty for some time.
Last modified: 2015-11-25 03:50:25 EST
Description of problem: After adding bricks to the volume, the status of the existing geo-rep session goes to faulty for about a minute, and the geo-rep log reports:
[2013-07-16 16:31:46.466895] E [syncdutils(/bricks/brick2):200:log_raise_exception] <top>: glusterfs session went down
[2013-07-16 16:31:46.467538] I [syncdutils(/bricks/brick2):158:finalize] <top>: exiting.
[2013-07-16 16:31:46.481292] I [monitor(monitor):81:set_state] Monitor: new state: faulty
Version-Release number of selected component (if applicable): 184.108.40.206rhs.beta4-1.el6rhs.x86_64
How reproducible: Happens every time
Steps to Reproduce:
1. Create and start a geo-rep session between the master (dist-rep) and the slave.
2. Let the status become stable.
3. Add bricks to the master volume.
4. Check the geo-rep status of the master volume (see the CLI sketch after this list).
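For reference, a minimal CLI sketch of these steps; the volume names, hostnames, and brick paths below are illustrative placeholders, not values taken from this report:

# Create and start a dist-rep master volume (example names and paths)
gluster volume create mastervol replica 2 node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/b1 node4:/bricks/b1
gluster volume start mastervol

# Create and start the geo-rep session to an already running slave volume
gluster volume geo-replication mastervol slavehost::slavevol create push-pem
gluster volume geo-replication mastervol slavehost::slavevol start

# Once the session status is stable, add bricks to the master volume
gluster volume add-brick mastervol node1:/bricks/b2 node2:/bricks/b2

# Check the geo-rep session status again
gluster volume geo-replication mastervol slavehost::slavevol status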
Actual results: After add-brick, the geo-rep status becomes faulty for some time.
Expected results: The geo-rep status should remain stable after add-brick.
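A simple way to observe the transition is to poll the session status around the add-brick (same placeholder names as in the sketch above):

watch -n 10 gluster volume geo-replication mastervol slavehost::slavevol status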
Tried on glusterfs-220.127.116.11rhs-1
Closing this bug since the RHGS 2.1 release has reached EOL. The required bugs have been cloned to RHGS 3.1. Please re-open this issue if it is seen again.