| Summary: | RHS-C: vdsm exception occurred during glusterVolumeRemoveBrickStatus | ||
|---|---|---|---|
| Product: | Red Hat Gluster Storage | Reporter: | Prasanth <pprakash> |
| Component: | rhsc | Assignee: | Timothy Asir <tjeyasin> |
| Status: | CLOSED ERRATA | QA Contact: | Prasanth <pprakash> |
| Severity: | high | Docs Contact: | |
| Priority: | high | ||
| Version: | 2.1 | CC: | dpati, dtsang, hchiramm, kmayilsa, knarra, mmahoney, pprakash, rhs-bugs, ssampat, tjeyasin |
| Target Milestone: | --- | Keywords: | ZStream |
| Target Release: | RHGS 2.1.2 | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | CB8 | Doc Type: | Bug Fix |
| Doc Text: | Story Points: | --- | |
| Clone Of: | Environment: | ||
| Last Closed: | 2014-02-25 07:53:10 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
sosreports of the engine and the two nodes can be downloaded from: http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/rhsc/1020159/

This happens because the engine uses vds_name+brick_dir to uniquely identify a brick instead of host_name+brick_dir. The error message in vdsm.log is caused by the engine not passing any brick(s) to the removeBricksStatus verb.

Now the engine will send the host name along with brick details.

(In reply to Timothy Asir from comment #4)
> Now the engine will send the host name along with brick details.

Tim, can you fill in the Fixed In Version of this bug so that I can verify it accordingly?

(In reply to Timothy Asir from comment #4)
> Now the engine will send the host name along with brick details.

Can you please add the gerrit ID to the external trackers URL so that it can be tracked better?

Verified as fixed in CB8.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-0208.html
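For illustration only, a minimal sketch of the brick-matching idea described above, using hypothetical names (bricks_for_host is not the actual engine or vdsm code): if bricks are matched against the node by host name plus brick directory, the list sent to the removeBricksStatus verb is no longer empty when the vds_name differs from the host name stored with the brick.

```python
# Hypothetical sketch of matching bricks by host_name + brick_dir
# (assumed helper, not the real engine implementation).

def bricks_for_host(volume_bricks, host_name):
    """Return the brick strings ("host:/dir") that belong to host_name."""
    selected = []
    for brick in volume_bricks:
        # Brick entries look like "server1:/rhs/brick1".
        brick_host, brick_dir = brick.split(":", 1)
        if brick_host == host_name:          # match on host_name, not vds_name
            selected.append("%s:%s" % (brick_host, brick_dir))
    return selected

# Before the fix, matching was effectively keyed on the vds_name (the
# display name the node was added with), which can differ from the host
# name recorded in the brick, yielding an empty brick list.
```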
Description of problem:
vdsm exception occurred during glusterVolumeRemoveBrickStatus

----------
Thread-272::DEBUG::2013-10-17 11:19:47,485::BindingXMLRPC::974::vds::(wrapper) client [10.70.36.79]::call volumeRemoveBrickStatus with ('vol1', []) {}
Thread-272::ERROR::2013-10-17 11:19:47,614::BindingXMLRPC::990::vds::(wrapper) vdsm exception occured
Traceback (most recent call last):
  File "/usr/share/vdsm/BindingXMLRPC.py", line 979, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 53, in wrapper
    rv = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 186, in volumeRemoveBrickStatus
    replicaCount)
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
    **kwargs)
  File "<string>", line 2, in glusterVolumeRemoveBrickStatus
  File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in _callmethod
    raise convert_to_error(kind, result)
GlusterCmdExecFailedException: Command execution failed
error: Usage: volume remove-brick <VOLNAME> [replica <COUNT>] <BRICK> ... {start|stop|status|commit|force}
return code: 1
----------

Version-Release number of selected component (if applicable):

[root@vm07 /]# rpm -qa |grep rhsc
rhsc-restapi-2.1.2-0.0.scratch.beta1.el6_4.noarch
rhsc-lib-2.1.2-0.0.scratch.beta1.el6_4.noarch
rhsc-cli-2.1.0.0-0.bb3a.el6rhs.noarch
rhsc-webadmin-portal-2.1.2-0.0.scratch.beta1.el6_4.noarch
rhsc-sdk-2.1.0.0-0.bb3a.el6rhs.noarch
rhsc-branding-rhs-3.3.0-1.0.master.201309200500.fc18.noarch
rhsc-backend-2.1.2-0.0.scratch.beta1.el6_4.noarch
rhsc-tools-2.1.2-0.0.scratch.beta1.el6_4.noarch
rhsc-dbscripts-2.1.2-0.0.scratch.beta1.el6_4.noarch
rhsc-setup-2.1.2-0.0.scratch.beta1.el6_4.noarch
rhsc-2.1.2-0.0.scratch.beta1.el6_4.noarch
rhsc-log-collector-2.1-0.1.el6rhs.noarch

[root@vm12 /]# rpm -qa |grep vdsm
vdsm-4.13.0-17.gitdbbbacd.el6_4.x86_64
vdsm-python-4.13.0-17.gitdbbbacd.el6_4.x86_64
vdsm-python-cpopen-4.13.0-17.gitdbbbacd.el6_4.x86_64
vdsm-xmlrpc-4.13.0-17.gitdbbbacd.el6_4.noarch
vdsm-cli-4.13.0-17.gitdbbbacd.el6_4.noarch
vdsm-gluster-4.13.0-17.gitdbbbacd.el6_4.noarch
vdsm-reg-4.13.0-17.gitdbbbacd.el6_4.noarch

How reproducible:
Always

Steps to Reproduce:
Pre-requisite: Add 2 RHS nodes using valid hostnames (make sure to give a different name for these nodes).
1. Select a brick, click Remove, and confirm the operation.
2. In the Activities column, click "Status".
3. The status dialog does not open, and the traceback above is seen in the vdsm logs.

Actual results:
glusterVolumeRemoveBrickStatus fails

Expected results:
glusterVolumeRemoveBrickStatus shouldn't fail

Additional info:
sosreports from the engine and the nodes will be attached soon.
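For reference, a minimal sketch (assumed names, not the actual vdsm source) of why an empty brick list from the engine ends in the usage error shown in the traceback: the remove-brick status command requires at least one <BRICK> argument, so gluster rejects the call and exits with return code 1.

```python
# Hypothetical sketch of the failing command construction; the real vdsm
# gluster API builds the command differently, but the failure mode is the same.
import subprocess


def remove_brick_status(volume, bricks, replica_count=0):
    # With an empty brick list this produces
    # "gluster volume remove-brick <VOL> status", which gluster rejects
    # with the Usage message and return code 1.
    cmd = ["gluster", "volume", "remove-brick", volume]
    if replica_count:
        cmd += ["replica", str(replica_count)]
    cmd += bricks + ["status"]
    rc = subprocess.call(cmd)
    if rc != 0:
        raise RuntimeError("Command execution failed, return code: %d" % rc)


# As in the log above, the engine passed an empty brick list:
#   remove_brick_status("vol1", [])   -> usage error, rc=1
```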