Bug 985236 - Dist-geo-rep: After add-brick, no working_dir created for the added bricks
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: geo-replication
2.1
x86_64 Linux
high Severity high
: ---
: ---
Assigned To: Avra Sengupta
Vijaykumar Koppad
:
Depends On: 984942
Blocks: 989532
Reported: 2013-07-17 02:59 EDT by Vijaykumar Koppad
Modified: 2014-08-24 20:50 EDT (History)
10 users

See Also:
Fixed In Version: glusterfs-3.4.0.15rhs
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
: 989532 (view as bug list)
Environment:
Last Closed: 2013-09-23 18:38:45 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Vijaykumar Koppad 2013-07-17 02:59:53 EDT
Description of problem: After add-brick, geo-rep should create a working_dir for the added bricks in /var/run/gluster/<volname>/ssh%3A%2F%2Froot%40<slave-host>%3Agluster%3A%2F%2F127.0.0.1%3A<slave-vol>/ .

Version-Release number of selected component (if applicable): 3.4.0.12rhs.beta4-1.el6rhs.x86_64


How reproducible: Happens every time


Steps to Reproduce:
1. Create a geo-rep relationship between the master (DIST_REP) and the slave.
2. Add bricks to the master volume.
3. Check for a new working directory for each added brick under /var/run/gluster/<volname>/ssh%3A%2F%2Froot%40<slavehost>%3Agluster%3A%2F%2F127.0.0.1%3A<slave-vol>/
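The reproduction steps above can be sketched as the following CLI session. This is a hypothetical sketch: the volume names (mastervol, slavevol), host name (slavehost), and brick paths are placeholders, not values from the bug report.

```shell
# 1. Create and start a geo-rep session between master and slave
#    (mastervol, slavehost, slavevol are placeholder names).
gluster volume geo-replication mastervol slavehost::slavevol create push-pem
gluster volume geo-replication mastervol slavehost::slavevol start

# 2. Add bricks to the master volume (paths are placeholders).
gluster volume add-brick mastervol node3:/rhs/brick1 node4:/rhs/brick1

# 3. List the session's working directory on the master; in the buggy
#    version no working_dir appears for the newly added bricks.
ls /var/run/gluster/mastervol/ssh%3A%2F%2Froot%40slavehost%3Agluster%3A%2F%2F127.0.0.1%3Aslavevol/
```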

Actual results: After add-brick, geo-rep does not create a working_dir for the added bricks.


Expected results: Geo-rep should create a working_dir for each added brick.


Additional info:
Comment 2 Avra Sengupta 2013-07-23 03:15:53 EDT
After a new node is added (the new brick being part of the new node) and an add-brick is done, the following commands need to be executed again:
1. gluster system:: execute gsec_create
2. geo-rep create push-pem force
3. geo-rep start force

This will ensure that the working_dir is created on the new node and that the new brick becomes part of geo-replication.

However, if the new brick is added to one of the existing nodes, the current geo-rep session only needs to be stopped and started to make it part of geo-replication. Restarting the existing session (stop and start) will take care of the working_dir; there is no need to delete the session and create a new one.
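The two workaround cases described in this comment can be sketched as below. This is a hedged illustration, not the exact commands from the report: the full `gluster volume geo-replication <master> <slavehost>::<slavevol>` spellings and the placeholder names (mastervol, slavehost, slavevol) are assumptions filling in the shorthand "geo-rep create push-pem force" / "geo-rep start force" used above.

```shell
# Case A: the new brick lives on a brand-new node.
# Regenerate and push the pem keys, then recreate and restart the
# session with force so the new node joins geo-replication.
gluster system:: execute gsec_create
gluster volume geo-replication mastervol slavehost::slavevol create push-pem force
gluster volume geo-replication mastervol slavehost::slavevol start force

# Case B: the new brick lives on an existing node.
# A stop/start of the current session is enough; the restart creates
# the working_dir for the new brick.
gluster volume geo-replication mastervol slavehost::slavevol stop
gluster volume geo-replication mastervol slavehost::slavevol start
```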
Comment 4 Vijaykumar Koppad 2013-08-05 04:39:18 EDT
Verified on glusterfs-3.4.0.15rhs.
Comment 5 Scott Haines 2013-09-23 18:38:45 EDT
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. 

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html
Comment 6 Scott Haines 2013-09-23 18:41:29 EDT
(Duplicate of comment 5.)
