Bug 1234445
Summary: | Remove brick status after retain/stop of the same task | |
---|---|---|---
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Lubos Trilety <ltrilety>
Component: | rhsc | Assignee: | Ramesh N <rnachimu>
Status: | CLOSED CANTFIX | QA Contact: | RHS-C QE <rhsc-qe-bugs>
Severity: | medium | Docs Contact: |
Priority: | medium | |
Version: | rhgs-3.1 | CC: | anbabu, asriram, ltrilety, nlevinki, rhs-bugs, sabose, sankarshan
Target Milestone: | --- | Keywords: | ZStream
Target Release: | --- | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | Known Issue
Doc Text: | The engine preserves the task-id of a previously retained/stopped remove-brick operation. When a user queries the remove-brick status, the engine passes the bricks from both the previous and the current remove-brick operations to the status command, and the UI returns the error "Could not fetch remove brick status of volume." In Gluster, once a remove-brick has been stopped, its status can no longer be obtained. | |
Story Points: | --- | |
Clone Of: | | Environment: |
Last Closed: | 2018-01-29 15:13:47 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | | |
Bug Blocks: | 1216951 | |
Description

Lubos Trilety 2015-06-22 14:29:11 UTC

Please attach engine logs

Engine logs: http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1234445/sosreport-LogCollector-20150629132659.tar.xz

I tried opening the logs attached, but it looks like I don't have the required permissions. Please check whether the remove-brick status gluster command with "--xml" returns the status correctly. If the command with "--xml" doesn't return anything, then this is the expected behavior from RHGSC.

(In reply to anmol babu from comment #3)
> I tried opening the logs attached but looks like I don't have the required
> permissions. Please check if the remove brick status gluster command with
> "--xml" returns the status correctly. If the command with the "--xml"
> doesn't return anything then this is the expected behavior from RHGSC

It works fine:

```
# gluster volume remove-brick dis-vol <IP>:/rhgs/brick3/brick3 status --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volRemoveBrick>
    <task-id>c990b0d6-9912-4815-873e-9cbefe66822c</task-id>
    <nodeCount>4</nodeCount>
    <node>
      <nodeName>IP</nodeName>
      <id>272d26a1-0f96-46b6-9aa4-7fec13b62ebb</id>
      <files>4</files>
      <size>2097152000</size>
      <lookups>20</lookups>
      <failures>0</failures>
      <skipped>0</skipped>
      <status>1</status>
      <statusStr>in progress</statusStr>
      <runtime>51.00</runtime>
    </node>
    <aggregate>
      <files>4</files>
      <size>2097152000</size>
      <lookups>20</lookups>
      <failures>0</failures>
      <skipped>0</skipped>
      <status>1</status>
      <statusStr>in progress</statusStr>
      <runtime>51.00</runtime>
    </aggregate>
  </volRemoveBrick>
</cliOutput>
```

Doc text is edited. Please sign off to be included in Known Issues.

Looks good.

Could be an issue with the query formed. Need to check.

Thank you for your report. This bug is filed against a component for which no further new development is being undertaken.
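A consumer of this `--xml` output needs to pull the task-id and aggregate status out of the `cliOutput` document; once the remove-brick has been stopped, the task section is simply absent. A minimal parsing sketch in Python (the helper name `parse_remove_brick_status` and the trimmed sample document are illustrative, not engine code):

```python
import xml.etree.ElementTree as ET

# Trimmed sample of the cliOutput shown above, as returned by
# `gluster volume remove-brick <vol> <brick> status --xml`.
CLI_OUTPUT = """<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volRemoveBrick>
    <task-id>c990b0d6-9912-4815-873e-9cbefe66822c</task-id>
    <nodeCount>4</nodeCount>
    <aggregate>
      <files>4</files>
      <size>2097152000</size>
      <status>1</status>
      <statusStr>in progress</statusStr>
    </aggregate>
  </volRemoveBrick>
</cliOutput>
"""

def parse_remove_brick_status(xml_text):
    """Extract task-id and aggregate status from remove-brick --xml output.

    Returns None when the volRemoveBrick section is missing, which is
    what a caller would see once the task is no longer queryable
    (e.g. after `remove-brick ... stop`).
    """
    root = ET.fromstring(xml_text)
    task = root.find("./volRemoveBrick")
    if task is None:
        return None
    return {
        "task_id": task.findtext("task-id"),
        "status": task.findtext("aggregate/statusStr"),
    }

print(parse_remove_brick_status(CLI_OUTPUT))
# -> {'task_id': 'c990b0d6-9912-4815-873e-9cbefe66822c', 'status': 'in progress'}
```

The `None` case is the crux of the known issue documented above: the engine still holds the old task-id, but Gluster no longer has a status to report for it, so the combined status query fails.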