Bug 1031970

Summary: Dist-geo-rep: after a brick is added to a volume on an existing node, all the gsyncd processes on that node are restarted.
Product: Red Hat Gluster Storage
Reporter: Vijaykumar Koppad <vkoppad>
Component: geo-replication
Assignee: Bug Updates Notification Mailing List <rhs-bugs>
Status: CLOSED EOL
QA Contact: storage-qa-internal <storage-qa-internal>
Severity: medium
Docs Contact:
Priority: medium
Version: 2.1
CC: avishwan, chrisw, csaba, david.macdonald
Target Milestone: ---
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard: config
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
: 1285211 (view as bug list)
Environment:
Last Closed: 2015-11-25 08:49:17 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Bug Depends On:
Bug Blocks: 1285211

Description Vijaykumar Koppad 2013-11-19 09:25:19 UTC
Description of problem: After a brick is added to a volume on an existing node, all the gsyncd processes on that node are restarted.


Version-Release number of selected component (if applicable): glusterfs-3.4.0.44rhs-1


How reproducible: happens every time


Steps to Reproduce:
1. Create and start a geo-rep session between the master and slave volumes.
2. Add bricks to the master volume on the existing nodes (commands sketched below).
3. Check the gsyncd processes on those nodes before and after the add-brick.
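
A minimal sketch of the reproduction commands; the volume names (master-vol, slave-vol), hostnames (node1, slavehost) and brick path are hypothetical placeholders, not taken from this report:

# 1. Create and start the geo-rep session between the master and slave volumes.
gluster volume geo-replication master-vol slavehost::slave-vol create push-pem
gluster volume geo-replication master-vol slavehost::slave-vol start

# 2. Add a brick to the master volume on a node that already hosts its bricks.
gluster volume add-brick master-vol node1:/bricks/master-vol/brick2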

Actual results: All the existing gsyncd processes on those nodes are restarted.


Expected results: Adding a brick should not restart the existing gsyncd processes.
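
One way to verify this, as a sketch assuming the default gsyncd process naming: record the gsyncd worker PIDs on the affected node before and after the add-brick and compare them (in distributed geo-rep each brick on a node has its own worker, so only one new worker should appear for the new brick):

# On the node where the brick is added, before the add-brick:
ps -ef | grep '[g]syncd' > /tmp/gsyncd.before
# ... perform the add-brick from step 2 ...
ps -ef | grep '[g]syncd' > /tmp/gsyncd.after
diff /tmp/gsyncd.before /tmp/gsyncd.after
# With this bug, every existing worker shows up with a new PID (all were
# restarted); the expected result is that the old PIDs are unchanged and only
# a worker for the newly added brick appears.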


Additional info:

Comment 3 Aravinda VK 2015-11-25 08:49:17 UTC
Closing this bug since the RHGS 2.1 release has reached EOL. The required bugs have been cloned to RHGS 3.1. Please re-open this issue if it is found again.
