Bug 1285211 - Dist-geo-rep: after a brick is added to a volume from an existing node, all the gsyncd processes on that node are restarted.
Status: CLOSED NOTABUG
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: geo-replication
Version: 3.1
Hardware: x86_64 Linux
Priority: medium
Severity: medium
Assigned To: Bug Updates Notification Mailing List
QA Contact: storage-qa-internal@redhat.com
Keywords: ZStream
Depends On: 1031970
Blocks:
Reported: 2015-11-25 03:44 EST by Aravinda VK
Modified: 2015-12-02 00:23 EST
CC List: 8 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1031970
Environment:
Last Closed: 2015-12-02 00:23:17 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Comment 2 Aravinda VK 2015-12-02 00:23:17 EST
This is expected behavior. On the node where the new brick was added, the Geo-rep Monitor process is restarted, so all the workers on that node are restarted as well.

From the documentation (Chapter 12.5.2):

"When adding a brick to the volume on an existing node in the trusted storage pool with a geo-replication session running, the geo-replication daemon on that particular node will automatically be restarted. The new brick will then be recognized by the geo-replication daemon. This is an automated process and no configuration changes are required. "

https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/chap-Managing_Geo-replication-Starting_Geo-replication_on_a_Newly_Added_Brick.html
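
For reference, this flow can be exercised with the standard gluster CLI; the volume name "mastervol", node "node1", brick path, and slave session "slavehost::slavevol" below are hypothetical, and a plain distribute volume is assumed (a replicated volume would also need a new replica count on add-brick):

  # Add a brick on a node that already hosts bricks of the volume
  gluster volume add-brick mastervol node1:/rhgs/brick2/mastervol

  # The geo-rep monitor on node1 restarts automatically (no config
  # change needed); the restarted workers can be confirmed with:
  gluster volume geo-replication mastervol slavehost::slavevol status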

Closing this bug. Please reopen if the issue is seen again.
