Bug 1285211 - Dist-geo-rep : after brick was added to a volume from existing node, all the gsyncd processes in that node will be restarted.
Summary: Dist-geo-rep : after brick was added to a volume from existing node, all the gsyncd processes in that node will be restarted.
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: geo-replication
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Bug Updates Notification Mailing List
QA Contact: storage-qa-internal@redhat.com
URL:
Whiteboard: config
Depends On: 1031970
Blocks:
 
Reported: 2015-11-25 08:44 UTC by Aravinda VK
Modified: 2015-12-02 05:23 UTC
CC List: 8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of: 1031970
Environment:
Last Closed: 2015-12-02 05:23:17 UTC
Embargoed:



Comment 2 Aravinda VK 2015-12-02 05:23:17 UTC
This is expected behavior. On the node where the brick was added, the geo-replication monitor process is restarted, so all the workers on that node are also restarted.

From the documentation (Chapter 12.5.2):

"When adding a brick to the volume on an existing node in the trusted storage pool with a geo-replication session running, the geo-replication daemon on that particular node will automatically be restarted. The new brick will then be recognized by the geo-replication daemon. This is an automated process and no configuration changes are required. "

https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/chap-Managing_Geo-replication-Starting_Geo-replication_on_a_Newly_Added_Brick.html
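The documented flow above corresponds to an admin command sequence like the following. This is a sketch only; the volume name, slave session, and brick path (mastervol, slavehost::slavevol, node1:/bricks/brick2) are hypothetical examples, not taken from this bug.

```shell
# Add a brick on an existing node of a volume that has an active
# geo-replication session (names are hypothetical).
gluster volume add-brick mastervol node1:/bricks/brick2

# No geo-replication configuration change is needed: the geo-rep
# monitor process on node1 restarts automatically, which restarts
# all gsyncd workers on that node, including one for the new brick.
# Verify the workers come back up:
gluster volume geo-replication mastervol slavehost::slavevol status
```

The worker restart is therefore a side effect of the monitor restart on that one node, not a cluster-wide restart of geo-replication.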

Closing this bug. Please reopen if the issue is seen again.

