Bug 1020772 - RHS-C: Incorrect error message during volume create if one of the selected bricks is already part of a volume which doesn't exist
Summary: RHS-C: Incorrect error message during volume create if one of the selected bricks is already part of a volume which doesn't exist
Keywords:
Status: CLOSED EOL
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: rhsc
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Shubhendu Tripathi
QA Contact: RHS-C QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-10-18 09:22 UTC by Prasanth
Modified: 2015-12-03 17:12 UTC
CC List: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-12-03 17:12:58 UTC
Target Upstream Version:


Attachments (Terms of Use)
screenshot of error (106.92 KB, image/png)
2013-10-18 09:22 UTC, Prasanth

Description Prasanth 2013-10-18 09:22:19 UTC
Created attachment 813661
screenshot of error

Description of problem:

An invalid error message is shown during volume create if one of the selected bricks was part of a volume that existed earlier but has since been deleted. See below:

---------
Error while executing action Create Gluster Volume: Volume create failed
error: Staging failed on 10_70_36_82_ Error: /home/1 is already part of a volume
Staging failed on vm11_lab_eng_blr_redhat_com_ Error: /home/1 is already part of a volume
return code: -1
---------

Sometimes I see the following message instead, but after pressing OK a couple of times, the invalid message above appears again.

---------
Error while executing action Create Gluster Volume: Volume create failed
error: /home/a is already part of a volume
return code: -1
---------


Version-Release number of selected component (if applicable):

[root@vm07 /]# rpm -qa |grep rhsc
rhsc-restapi-2.1.2-0.0.scratch.beta1.el6_4.noarch
rhsc-lib-2.1.2-0.0.scratch.beta1.el6_4.noarch
rhsc-cli-2.1.0.0-0.bb3a.el6rhs.noarch
rhsc-webadmin-portal-2.1.2-0.0.scratch.beta1.el6_4.noarch
rhsc-sdk-2.1.0.0-0.bb3a.el6rhs.noarch
rhsc-branding-rhs-3.3.0-1.0.master.201309200500.fc18.noarch
rhsc-backend-2.1.2-0.0.scratch.beta1.el6_4.noarch
rhsc-tools-2.1.2-0.0.scratch.beta1.el6_4.noarch
rhsc-dbscripts-2.1.2-0.0.scratch.beta1.el6_4.noarch
rhsc-setup-2.1.2-0.0.scratch.beta1.el6_4.noarch
rhsc-2.1.2-0.0.scratch.beta1.el6_4.noarch
rhsc-log-collector-2.1-0.1.el6rhs.noarch

[root@vm12 /]#  rpm -qa |grep vdsm
vdsm-4.13.0-17.gitdbbbacd.el6_4.x86_64
vdsm-python-4.13.0-17.gitdbbbacd.el6_4.x86_64
vdsm-python-cpopen-4.13.0-17.gitdbbbacd.el6_4.x86_64
vdsm-xmlrpc-4.13.0-17.gitdbbbacd.el6_4.noarch
vdsm-cli-4.13.0-17.gitdbbbacd.el6_4.noarch
vdsm-gluster-4.13.0-17.gitdbbbacd.el6_4.noarch
vdsm-reg-4.13.0-17.gitdbbbacd.el6_4.noarch

[root@vm12 /]# rpm -qa |grep glusterfs
glusterfs-server-3.4.0.34.1u2rhs-1.el6rhs.x86_64
glusterfs-libs-3.4.0.34.1u2rhs-1.el6rhs.x86_64
glusterfs-3.4.0.34.1u2rhs-1.el6rhs.x86_64
glusterfs-fuse-3.4.0.34.1u2rhs-1.el6rhs.x86_64
glusterfs-geo-replication-3.4.0.34.1u2rhs-1.el6rhs.x86_64
samba-glusterfs-3.6.9-160.3.el6rhs.x86_64
glusterfs-rdma-3.4.0.34.1u2rhs-1.el6rhs.x86_64
glusterfs-api-3.4.0.34.1u2rhs-1.el6rhs.x86_64


How reproducible: Always


Steps to Reproduce:
1. Create a distribute volume, say vol1, with 2 bricks: /export1 on server1 and /export1 on server2
2. Start the volume, then stop and delete it.
3. Create another distribute volume, say vol2, using the same two bricks (/export1 on server1 and /export1 on server2)
4. Click on OK
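
For reference, the steps above can be sketched with the gluster CLI as follows. This is a sketch only: the hostnames (server1, server2) and brick path (/export1) are placeholders taken from the steps, not real hosts from this report.

```shell
# Sketch of the reproduction steps via the gluster CLI.
# server1, server2, and /export1 are placeholder names from the steps above.

# Step 1: create a 2-brick distribute volume.
gluster volume create vol1 server1:/export1 server2:/export1

# Step 2: start, stop, and delete it.
gluster volume start vol1
gluster volume stop vol1
gluster volume delete vol1

# Step 3: recreate a volume with the same bricks. Deleting a volume does
# not clean the brick directories, so glusterd still sees them as used
# bricks and rejects this with "... is already part of a volume".
gluster volume create vol2 server1:/export1 server2:/export1
```

In gluster 3.4, the brick directories keep the extended attributes set at create time, which is why the recreate is rejected even though the original volume no longer exists; the RHSC UI then surfaces that rejection inconsistently, which is the subject of this bug.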

Actual results: An incorrect, confusing error message is shown (the "Staging failed" message above, with mangled hostnames).


Expected results: A clear and accurate error message should be shown.


Additional info: Screenshot attached

Comment 2 Dusmant 2013-10-24 07:27:18 UTC
Prasanth,
    If you do it through the CLI, what's the outcome? What error message does Gluster give? And is the message from gluster consistent every time in this situation?

-Dusmant

Comment 3 Prasanth 2013-10-25 07:10:49 UTC
(In reply to Dusmant from comment #2)
> Prasanth,
>     If you do it through the CLI, what's the outcome? What error message
> does Gluster give? And is the message from gluster consistent every time
> in this situation?
> 
> -Dusmant

Dusmant,

If I do it from the CLI, the outcome and the error message are as follows:

--------
[root@vm10 ~]# gluster volume create vol6 vm10.lab.eng.blr.redhat.com:/home/5 vm11.lab.eng.blr.redhat.com:/home/5 force
volume create: vol6: failed: /home/5 is already part of a volume
[root@vm10 ~]# 
[root@vm10 ~]# gluster volume create vol6 vm10.lab.eng.blr.redhat.com:/home/5 vm11.lab.eng.blr.redhat.com:/home/5 force
volume create: vol6: failed: /home/5 is already part of a volume
[root@vm10 ~]# 
[root@vm10 ~]# gluster volume create vol6 vm10.lab.eng.blr.redhat.com:/home/5 vm11.lab.eng.blr.redhat.com:/home/5 force
volume create: vol6: failed: /home/5 is already part of a volume
[root@vm10 ~]# gluster volume create vol6 vm10.lab.eng.blr.redhat.com:/home/5 vm11.lab.eng.blr.redhat.com:/home/5 force
volume create: vol6: failed: /home/5 is already part of a volume
[root@vm10 ~]# gluster volume create vol6 vm10.lab.eng.blr.redhat.com:/home/5 vm11.lab.eng.blr.redhat.com:/home/5 force
volume create: vol6: failed: /home/5 is already part of a volume
[root@vm10 ~]# 
--------

So the message from gluster in this situation is consistent every time.

Comment 4 Dusmant 2013-10-25 12:08:53 UTC
Aravinda will take a look at this and then we will decide whether it should be taken in Corbett or not.

Comment 5 Dusmant 2013-12-06 12:25:59 UTC
RHSC is returning the error message from gluster as is. What might be happening is that gluster on different nodes is returning different error messages.

It doesn't have much impact, and I don't think it's a severe bug as such. Fixing it would require a fix in Gluster.

We need to take it out of Corbett.

Comment 6 Vivek Agarwal 2015-12-03 17:12:58 UTC
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release you requested us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.

