Bug 856121
Summary: [rhev-gluster] remove bricks doesn't clean brick metadata and brick is not usable
Product: [Red Hat Storage] Red Hat Gluster Storage | Reporter: Haim <hateya>
Component: glusterfs | Assignee: Amar Tumballi <amarts>
Status: CLOSED WONTFIX | QA Contact: Prasanth <pprakash>
Severity: high | Docs Contact:
Priority: medium
Version: 2.0 | CC: asriram, dpaikov, iheim, pprakash, rhs-bugs, shaines, vbellur, vraman, yeylon, ykaul
Target Milestone: --- | Keywords: FutureFeature
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version: | Doc Type: Known Issue
Doc Text:
Cause: volume start creates a '.glusterfs' directory in the backend export directory. remove-brick does not operate on the backend export; it only changes the volume configuration to remove the brick.
Consequence: stale data remains in the backend export.
Workaround (if any): perform 'rm -rf /export-dir' on the server node to clean up.
Story Points: ---
Clone Of: | Environment:
Last Closed: 2012-12-18 09:59:57 UTC | Type: Bug
Regression: --- | Mount Type: ---
Documentation: --- | CRM:
Verified Versions: | Category: ---
oVirt Team: --- | RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- | Target Upstream Version:
Embargoed:
Attachments:
Description
Haim, 2012-09-11 09:01:08 UTC

Created attachment 611695 [details]
engine and gluster logs.
This is glusterfs behavior, hence changing the product to RHS. AFAIK, this is intended behavior in glusterfs.

As of today, yes, this is a known behavior, and not removing the brick data from the backend is intentional. One has to run 'rm -rf /brick-dir' on the server machine to perform the proper cleanup. Proposing this for the known issues section. We think an automatic cleanup of the volume export directory could cause problems. We plan to support it in the future with a 'force' key; for now, we recommend documenting it in the known issues section.

Marked for known issues (and the workaround exists).

----------
Cause: volume start creates a '.glusterfs' directory in the backend export directory. remove-brick does not operate on the backend export; it only changes the volume configuration to remove the brick.
Consequence: stale data remains in the backend export.
Workaround (if any): perform 'rm -rf /export-dir' on the server node to clean up.
------------

Please re-open if this is not sufficient.
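For illustration only (not part of the original report), a minimal sketch of the workaround described above, assuming a hypothetical volume name (data-vol), server (server1), and brick path (/bricks/brick1); exact remove-brick syntax varies between glusterfs releases, with some versions expecting a 'start'/'commit' sequence or a 'force' keyword after the brick path.

    # On a management node: remove the brick from the volume configuration only.
    # (Hypothetical names; syntax varies by glusterfs release.)
    gluster volume remove-brick data-vol server1:/bricks/brick1

    # On server1: the on-disk data, including the hidden .glusterfs metadata
    # directory created at volume start, is left behind by remove-brick and
    # must be removed manually before the path can be cleanly reused.
    rm -rf /bricks/brick1

If the same path is later re-added as a brick, glusterfs may also refuse it because of leftover extended attributes on the directory; clearing them (for example with 'setfattr -x trusted.glusterfs.volume-id' on the brick directory) is a commonly documented extra step, though it is not mentioned in this report.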