Bug 1285211

Summary: Dist-geo-rep: after a brick is added to a volume on an existing node, all gsyncd processes on that node are restarted.
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Aravinda VK <avishwan>
Component: geo-replication
Assignee: Bug Updates Notification Mailing List <rhs-bugs>
Status: CLOSED NOTABUG
QA Contact: storage-qa-internal <storage-qa-internal>
Severity: medium
Docs Contact:
Priority: medium
Version: rhgs-3.1
CC: avishwan, chrisw, csaba, david.macdonald, nlevinki, rhs-bugs, storage-qa-internal, vkoppad
Target Milestone: ---
Keywords: ZStream
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard: config
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1031970
Environment:
Last Closed: 2015-12-02 05:23:17 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1031970
Bug Blocks:

Comment 2 Aravinda VK 2015-12-02 05:23:17 UTC
This is expected behavior. On the node where the brick was added, the geo-replication monitor process is restarted, so all the gsyncd workers on that node are restarted as well.

From the documentation (Chapter 12.5.2):

"When adding a brick to the volume on an existing node in the trusted storage pool with a geo-replication session running, the geo-replication daemon on that particular node will automatically be restarted. The new brick will then be recognized by the geo-replication daemon. This is an automated process and no configuration changes are required. "

https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/chap-Managing_Geo-replication-Starting_Geo-replication_on_a_Newly_Added_Brick.html
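
For illustration, a minimal command sequence that triggers this behavior might look like the following; the volume name, brick path, and slave host are hypothetical examples, not taken from this bug report:

  # Add a brick on an existing node of the master volume; the geo-rep
  # monitor on that node (and hence all its gsyncd workers) restarts
  # automatically so that the new brick is picked up.
  gluster volume add-brick mastervol server1:/rhgs/brick2

  # Confirm the workers came back up and the new brick appears
  # in the geo-replication session status.
  gluster volume geo-replication mastervol slavehost::slavevol status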

Closing this bug. Please reopen if the issue is seen again.