Bug 1013718 - RHS-C: cb2 build - Adding a 2nd server to the Cluster from UI is failing
Status: CLOSED NOTABUG
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: rhsc
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: RHGS 2.1.2
Assigned To: Timothy Asir
QA Contact: Prasanth
Keywords: ZStream
Depends On:
Blocks:
Reported: 2013-09-30 12:10 EDT by Prasanth
Modified: 2015-05-15 14:15 EDT
CC: 8 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-10-17 06:37:33 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
engine logs (478.66 KB, application/gzip) - 2013-09-30 12:12 EDT, Prasanth
vdsm.log from server1 (455.74 KB, application/gzip) - 2013-09-30 12:13 EDT, Prasanth
vdsm.log from server2 (102.51 KB, application/gzip) - 2013-09-30 12:13 EDT, Prasanth

Description Prasanth 2013-09-30 12:10:38 EDT
Description of problem:

Adding a 2nd server to the Cluster from UI is failing

----------------
Sep 30 17:40:03 vm10 vdsm vds ERROR vdsm exception occured
Traceback (most recent call last):
  File "/usr/share/vdsm/BindingXMLRPC.py", line 979, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 53, in wrapper
    rv = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 211, in hostAdd
    self.svdsmProxy.glusterPeerProbe(hostName)
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
    **kwargs)
  File "<string>", line 2, in glusterPeerProbe
  File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in _callmethod
    raise convert_to_error(kind, result)
GlusterHostAddFailedException: Add host failed
error: Probe returned with unknown errno 107
return code: -1

Sep 30 17:40:06 vm10 rhsmd: This system is registered to RHN Classic.

(The same vdsm traceback repeats at 17:45:03, 17:50:02, and 17:55:02.)
----------------
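
For reference, errno 107 on Linux is ENOTCONN ("Transport endpoint is not connected"), which already points at a connectivity problem between the hosts rather than at gluster itself. A minimal Python sketch to decode it, using only the standard library:

----------------
# Decode the "unknown errno 107" reported by the failed peer probe.
# On Linux, errno 107 is ENOTCONN ("Transport endpoint is not connected").
import errno
import os

print(errno.errorcode[107])  # 'ENOTCONN'
print(os.strerror(107))      # 'Transport endpoint is not connected'
----------------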


Version-Release number of selected component (if applicable):

----------------
Red Hat Storage Console Version: 2.1.1-0.0.2.master.el6ev 

vdsm-python-4.12.0-143.gitee97932.el6.x86_64
vdsm-python-cpopen-4.12.0-143.gitee97932.el6.x86_64
vdsm-xmlrpc-4.12.0-143.gitee97932.el6.noarch
vdsm-4.12.0-143.gitee97932.el6.x86_64
vdsm-cli-4.12.0-143.gitee97932.el6.noarch
vdsm-gluster-4.12.0-143.gitee97932.el6.noarch


glusterfs-api-3.4.1-0.2.rc1.el6.x86_64
glusterfs-3.4.1-0.2.rc1.el6.x86_64
glusterfs-cli-3.4.1-0.2.rc1.el6.x86_64
glusterfs-server-3.4.1-0.2.rc1.el6.x86_64
glusterfs-libs-3.4.1-0.2.rc1.el6.x86_64
glusterfs-fuse-3.4.1-0.2.rc1.el6.x86_64
----------------


How reproducible: Always


Steps to Reproduce:
1. Install and set up the engine and 2 nodes by following http://rhsm.pad.engineering.redhat.com/corbett-build-installation
2. Add the first server and make sure that it's 'UP'
3. Now add the second server and check the Events

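For reference, the traceback shows vdsm's hostAdd wrapping the gluster CLI peer probe, so the failing step can also be checked outside the UI. A minimal sketch, using a hypothetical hostname (Python 3 syntax for brevity, although the hosts ship Python 2.6):

----------------
# Run on server1: invoke the same gluster CLI command that
# vdsm's glusterPeerProbe ultimately wraps.
import subprocess

SERVER2 = "server2.example.com"  # assumption: replace with the real hostname

result = subprocess.run(
    ["gluster", "peer", "probe", SERVER2],
    capture_output=True,
    text=True,
)
print("return code:", result.returncode)
print(result.stdout or result.stderr)
----------------
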
Actual results: Adding the second server (server2) fails


Expected results: Adding a server should always succeed


Additional info: Engine and vdsm logs from 2 servers are attached
Comment 1 Prasanth 2013-09-30 12:12:05 EDT
Created attachment 805346 [details]
engine logs
Comment 2 Prasanth 2013-09-30 12:13:18 EDT
Created attachment 805347 [details]
vdsm.log from server1
Comment 3 Prasanth 2013-09-30 12:13:43 EDT
Created attachment 805348 [details]
vdsm.log from server2
Comment 5 Prasanth 2013-10-03 06:29:58 EDT
Bala, do you think the cause of this bug is the same as that of Bug 1013611?
Comment 6 Timothy Asir 2013-10-10 05:53:06 EDT
VDSM throws the GlusterHostAddFailedException error because, during add server, the gluster CLI peer probe command failed with an unknown error. That means we also need to check the glusterfs log to find out the actual cause of this issue. Could you please attach the glusterfs log for further verification?
Comment 7 Prasanth 2013-10-15 05:42:40 EDT
(In reply to Timothy Asir from comment #6)
> VDSM throws the GlusterHostAddFailedException error because, during add
> server, the gluster CLI peer probe command failed with an unknown error.
> That means we also need to check the glusterfs log to find out the actual
> cause of this issue. Could you please attach the glusterfs log for further
> verification?

Unfortunately, I no longer have the setup available to upload the glusterfs log. I'll make sure to upload the sosreports from all the servers from now on to avoid this situation! But I don't think it's actually a gluster-specific issue, as the servers themselves couldn't communicate with each other immediately after the bootstrap. So all gluster operations were bound to fail, and that's probably what you saw in the logs! That's also why I suspect this issue is related to Bug 1013611.

That being said, I haven't seen this issue at all with the CB3 build. I was able to add more than 2 servers without hitting the issue reported in this bug. So it looks like this got fixed in CB3. Is that understanding correct?
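
For reference, a quick way to confirm that suspicion would be to check whether glusterd's well-known TCP port 24007 was reachable between the peers, since ENOTCONN during a peer probe usually points at exactly that. A minimal sketch, using a placeholder hostname (Python 3, standard library only):

----------------
# ENOTCONN during peer probe usually means the peers cannot reach each
# other's glusterd, which listens on TCP port 24007 by default.
import socket

def can_reach_glusterd(host, port=24007, timeout=5):
    """Return True if a TCP connection to glusterd on `host` succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(can_reach_glusterd("server2.example.com"))  # placeholder hostname
----------------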
Comment 8 Prasanth 2013-10-17 06:37:33 EDT
I tried to reproduce the issue on the latest CB4 build and saw that add server works as expected. So I assume this might have been specific to CB2, since we were using the upstream glusterfs versions there.

So I'm closing this bug for now.
