Bug 1258831 - [RFE] Primary Slave Node Failure Handling
Product: GlusterFS
Classification: Community
Component: geo-replication
Assigned To: Ravishankar N
Keywords: FutureFeature, Triaged
Reported: 2015-09-01 07:32 EDT by Aravinda VK
Modified: 2016-08-16 03:16 EDT (History)
2 users

Doc Type: Enhancement
Type: Bug

Attachments: None
Description Aravinda VK 2015-09-01 07:32:57 EDT
Description of problem:
When the primary slave node used in the Geo-rep command goes down, geo-rep fails to fetch information about the other slave nodes and cannot start Geo-replication.

If Geo-rep is already started and the primary slave node goes down, that worker remains Faulty, since it is unable to get the other nodes' information.

Proposed fix: cache the slave nodes/cluster info in the config file as `slave_nodes`. When a worker goes Faulty, it tries to get the volume status using --remote-host; use this cached pool of hosts as the remote host. If the primary slave node is not available, use another available node.
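A sketch of what the cached entries could look like in the session's config file (the key names come from this proposal; the hostnames and the exact file layout are illustrative, not the actual gsyncd.conf format):

```ini
prev_main_node=slave1.example.com
slave_nodes=slave1.example.com,slave2.example.com,slave3.example.com
```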

Pseudo code:
Two new config items: prev_main_node, slave_nodes

1. If prev_main_node not in CONFIG:
    Set prev_main_node = the Slave node passed in the Geo-rep command
2. Try to get Slave Volinfo from prev_main_node
3. If that failed, try to get Slave Volinfo from the node specified in the Geo-rep command (if that node != prev_main_node)
4. If that failed, check whether `slave_nodes` is available in CONFIG
5. If not available, FAIL
6. If available, try to get Slave Volinfo from any one remote host except those that previously failed
7. If Volinfo is available, match the Slave Vol UUID against it to make sure it is the same Slave Volume
8. If Volinfo is valid, return it, update prev_main_node in the config file, and re-update `slave_nodes`
9. If Volinfo is invalid, FAIL
10. If Volinfo is not available from any node, FAIL
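The steps above can be sketched in Python as follows. This is a minimal illustration, not the actual gsyncd implementation: the function name, the `fetch` callable (standing in for a volume-status query via --remote-host), and the plain dict used for CONFIG are all assumptions for the sketch.

```python
class VolinfoError(Exception):
    pass

def get_slave_volinfo(cmd_node, expected_uuid, config, fetch):
    """Try prev_main_node, then the command-line node, then cached slave_nodes.

    fetch(node) returns a volinfo dict ({"uuid": ..., "nodes": [...]}) or
    None if the node is unreachable.
    """
    # Step 1: default prev_main_node to the node given in the Geo-rep command
    prev = config.setdefault("prev_main_node", cmd_node)

    candidates = [prev]
    # Step 3: fall back to the command-line node if it differs from prev_main_node
    if cmd_node not in candidates:
        candidates.append(cmd_node)
    # Steps 4-6: then any cached slave node not already in the candidate list
    candidates += [n for n in config.get("slave_nodes", []) if n not in candidates]

    for node in candidates:
        volinfo = fetch(node)  # e.g. volume status using --remote-host=<node>
        if volinfo is None:
            continue           # node unreachable: skip it and try the next one
        # Step 7: confirm the answering node serves the same slave volume
        if volinfo["uuid"] != expected_uuid:
            raise VolinfoError("UUID mismatch: not the expected slave volume")  # step 9
        # Step 8: success -- remember the node that answered, refresh slave_nodes
        config["prev_main_node"] = node
        config["slave_nodes"] = volinfo["nodes"]
        return volinfo

    # Steps 5 and 10: no cached nodes, or Volinfo unavailable from every node
    raise VolinfoError("Slave Volinfo not available from any node")
```

With this shape, if only the primary slave node is down, the loop skips it and the next cached node answers, so `prev_main_node` is updated to a live node and the worker can recover instead of staying Faulty.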
