Bug 1031970 - Dist-geo-rep: after a brick is added to a volume on an existing node, all the gsyncd processes on that node are restarted.
Summary: Dist-geo-rep: after a brick is added to a volume on an existing node, all the gsyncd processes on that node are restarted.
Keywords:
Status: CLOSED EOL
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: geo-replication
Version: 2.1
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Bug Updates Notification Mailing List
QA Contact: storage-qa-internal@redhat.com
URL:
Whiteboard: config
Depends On:
Blocks: 1285211
 
Reported: 2013-11-19 09:25 UTC by Vijaykumar Koppad
Modified: 2015-11-25 08:51 UTC
CC List: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Cloned to: 1285211
Environment:
Last Closed: 2015-11-25 08:49:17 UTC
Embargoed:



Description Vijaykumar Koppad 2013-11-19 09:25:19 UTC
Description of problem: After a brick is added to a volume on an existing node, all the gsyncd processes on that node are restarted.


Version-Release number of selected component (if applicable): glusterfs-3.4.0.44rhs-1


How reproducible: happens every time


Steps to Reproduce:
1. Create and start a geo-replication session between the master and slave volumes.
2. Add a brick to the master volume on one of the existing nodes.
3. Observe the gsyncd processes on that node (a reproduction sketch follows).
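
For reference, a minimal reproduction sketch run on one of the master nodes. The volume names (master, slave), the slave host (slavehost), and the brick path are hypothetical examples; passwordless SSH to the slave is assumed to be set up already.

# Create and start the geo-rep session (names are examples)
gluster volume geo-replication master slavehost::slave create push-pem
gluster volume geo-replication master slavehost::slave start

# Snapshot the gsyncd PIDs on this node before the add-brick
pgrep -f gsyncd | sort > /tmp/gsyncd.before

# Add a brick to the master volume on this existing node
gluster volume add-brick master masterhost:/rhs/brick2/b2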

Actual results: All the existing gsyncd processes on that node are restarted.


Expected results: Adding a brick should not restart the existing gsyncd processes.
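
One way to check for the expected behaviour, assuming the PID snapshot taken in the sketch above (file paths are hypothetical):

# Snapshot the gsyncd PIDs again after the add-brick settles
pgrep -f gsyncd | sort > /tmp/gsyncd.after

# Workers for pre-existing bricks should survive, so their PIDs must
# appear in both snapshots. With the buggy behaviour the old PIDs
# vanish and only new ones remain.
comm -12 /tmp/gsyncd.before /tmp/gsyncd.after   # surviving PIDs
comm -13 /tmp/gsyncd.before /tmp/gsyncd.after   # newly started PIDs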


Additional info:

Comment 3 Aravinda VK 2015-11-25 08:49:17 UTC
Closing this bug since the RHGS 2.1 release has reached EOL. The required bugs have been cloned to RHGS 3.1. Please re-open this issue if it is found again.


