| Summary: | Probing a new RHGS node, which is part of another cluster, should throw proper error message in logs and CLI | | |
|---|---|---|---|
| Product: | Red Hat Gluster Storage | Reporter: | Gaurav Kumar Garg <ggarg> |
| Component: | glusterd | Assignee: | Satish Mohan <smohan> |
| Status: | CLOSED ERRATA | QA Contact: | Byreddy <bsrirama> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | rhgs-3.1 | CC: | amukherj, bmohanra, bsrirama, byarlaga, mlawrenc, nlevinki, rcyriac, rhinduja, rhs-bugs, sankarshan, sasundar, smohan, storage-qa-internal, vbellur |
| Target Milestone: | --- | Keywords: | ZStream |
| Target Release: | RHGS 3.1.3 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | glusterd | | |
| Fixed In Version: | glusterfs-3.7.9-1 | Doc Type: | Bug Fix |
| Doc Text: | When users attempted to add a node that was already part of another trusted storage pool to a new trusted storage pool with the 'gluster peer probe' command, the command failed, but did not give a clear reason for the failure. An error message has been added so that it is clear when the node is already part of another cluster. | Story Points: | --- |
| Clone Of: | 1237022 | Environment: | |
| Last Closed: | 2016-06-23 05:04:52 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Bug Depends On: | | | |
| Bug Blocks: | 1311817 | | |
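The behavior described in the Doc Text can be illustrated at the CLI. The sketch below is a hypothetical transcript: the hostname is a placeholder, and the exact wording of the message may vary between glusterfs builds; it approximates the clear error reported after the fix.

```shell
# On a node in one trusted storage pool, probe a node that already
# belongs to a different pool. (node2.example.com is a placeholder.)
$ gluster peer probe node2.example.com
peer probe: failed: node2.example.com is either already part of another cluster or having volumes configured
```

Before the fix, the probe failed without stating that pool membership was the cause; with the fix, both the CLI and the glusterd logs identify the conflict.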
Comment 3
Gaurav Kumar Garg
2016-03-21 10:46:26 UTC
This bug tracks a fix that was already present in 3.1.2 but was missed during the rebase of 3.1.3 onto upstream 3.7.9.

Verified this bug using the build "glusterfs-3.7.9-1". The fix is working properly: the error message shown when probing a node that is part of another cluster is clear and meaningful. Moving to verified state. LGTM :)

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1240