| Summary: | starting glusterd on remote after volume start: brick1 status "started" and brick2 status "stopped" | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | Lakshmipathi G <lakshmipathi> |
| Component: | cli | Assignee: | Kaushal <kaushal> |
| Status: | CLOSED WONTFIX | QA Contact: | |
| Severity: | low | Docs Contact: | |
| Priority: | medium | | |
| Version: | 3.2.1 | CC: | gluster-bugs, vijay |
| Target Milestone: | --- | Keywords: | Reopened |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2012-04-27 05:30:01 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
Description
Lakshmipathi G
2011-07-29 08:33:56 UTC
To properly resolve this bug, glusterd needs to be synced with the other glusterds in the cluster whenever it is started, which causes considerable overhead. The inconsistent status problem can be fixed by using the "force" option while starting or stopping a volume. Let us keep this open as an enhancement. Closing as wontfix. File a new bug if this is required.
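
For reference, the "force" workaround mentioned above corresponds to the following gluster CLI invocations. This is a minimal sketch: the volume name `test-vol` is illustrative, and the commands are assumed to run on a node in the trusted storage pool.

```sh
# Re-issue the start with "force": glusterd (re)starts any brick
# processes it believes are down, bringing brick status back in sync.
gluster volume start test-vol force

# Likewise, "force" on stop shuts the volume down even if glusterd's
# view of the brick status is inconsistent.
gluster volume stop test-vol force

# Check the volume state afterwards.
gluster volume info test-vol
```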