Red Hat Bugzilla – Bug 985236
Dist-geo-rep: After add-brick, no working_dir created for the added bricks
Last modified: 2014-08-24 20:50:12 EDT
Description of problem: After add-brick, geo-rep should create a working_dir for the added bricks in /var/run/gluster/<volname>/ssh%3A%2F%2Froot%40<slave-host>%3Agluster%3A%2F%2F127.0.0.1%3A<slave-vol>/.
Version-Release number of selected component (if applicable): 18.104.22.168rhs.beta4-1.el6rhs.x86_64
How reproducible: Happens every time
Steps to Reproduce:
1. Create a geo-rep session between the master (a DIST_REP volume) and the slave.
2. Add bricks to the master volume with add-brick.
3. Check for a new working directory for each added brick under /var/run/gluster/<volname>/ssh%3A%2F%2Froot%40<slave-host>%3Agluster%3A%2F%2F127.0.0.1%3A<slave-vol>/ (example commands below).
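For reference, a rough sketch of these steps with hypothetical names (master volume mastervol, slave host slavenode, slave volume slavevol, new bricks on node2; none of these names come from the original report):

    gluster system:: execute gsec_create
    gluster volume geo-replication mastervol slavenode::slavevol create push-pem
    gluster volume geo-replication mastervol slavenode::slavevol start
    gluster volume add-brick mastervol node2:/bricks/brick2 node2:/bricks/brick3
    ls /var/run/gluster/mastervol/ssh%3A%2F%2Froot%40slavenode%3Agluster%3A%2F%2F127.0.0.1%3Aslavevol/

Two bricks are added here because, for a DIST_REP volume, the number of bricks added must be a multiple of the replica count.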
Actual results: After add-brick, no working_dir is created for the added bricks.
Expected results: Geo-rep should create a working_dir for each added brick.
After a new node is added (with the new brick being part of that new node) and an add-brick is done, the following commands need to be executed again:
1. gluster system:: execute gsec_create
2. geo-rep create push-pem force
3. geo-rep start force
This will make sure that the working_dir is created on the new node and that the new brick is part of geo-replication.
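Spelled out with the same hypothetical names as above, that sequence would look roughly like:

    gluster system:: execute gsec_create
    gluster volume geo-replication mastervol slavenode::slavevol create push-pem force
    gluster volume geo-replication mastervol slavenode::slavevol start force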
However, if the new brick is added to one of the existing nodes, the current geo-rep session only needs to be stopped and started to bring the brick into geo-replication. After the restart (stop and start), the existing session will take care of the working_dir; there is no need to delete the session and create a new one.
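With the same hypothetical names, that restart of the existing session amounts to:

    gluster volume geo-replication mastervol slavenode::slavevol stop
    gluster volume geo-replication mastervol slavenode::slavevol start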
Verified on glusterfs-22.214.171.124rhs.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory, and where to find the updated files, follow the link below.
If the solution does not work for you, open a new bug report.