Bug 1100219

Summary: [RHSC] Node moved to Non-operational state when the cluster was imported to RHSC
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Shruti Sampat <ssampat>
Component: rhsc
Assignee: Kanagaraj <kmayilsa>
Status: CLOSED ERRATA
QA Contact: Shruti Sampat <ssampat>
Severity: high
Priority: high
Docs Contact:
Version: rhgs-3.0
CC: asrivast, dpati, kmayilsa, nlevinki, nsathyan, rhs-bugs, rhsc-qe-bugs, sankarshan, sgraf
Target Milestone: ---
Keywords: Regression
Target Release: RHGS 3.0.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: rhsc-3.0.0-0.6.master.el6_5
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2014-09-22 19:10:01 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
  engine logs (flags: none)
  host-deploy logs (flags: none)

Description Shruti Sampat 2014-05-22 10:03:12 UTC
Created attachment 898316 [details]
engine logs

Description of problem:
------------------------

Created a cluster of RHS 3.0 nodes and imported the cluster via RHSC. The nodes moved to the Non-operational state after a couple of minutes.

The engine logs show a null pointer exception. See the attached logs.
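For reference, the exception can be confirmed by searching the engine log (on an RHSC/oVirt engine host the default path is assumed to be /var/log/ovirt-engine/engine.log). The snippet below is a self-contained sketch: it writes a sample log line to a temporary file so the search is demonstrable anywhere; on the actual engine host, point grep at the real log instead.

```shell
# Sketch: counting NullPointerException occurrences in an engine log.
# A sample line stands in for the real log; on the engine host use
# /var/log/ovirt-engine/engine.log (assumed default path) instead.
log=$(mktemp)
echo "2014-05-22 10:01:00 ERROR ... java.lang.NullPointerException" >> "$log"
grep -c "NullPointerException" "$log"   # → 1
rm -f "$log"
```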

Version-Release number of selected component (if applicable):
rhsc-3.0.0-0.5.master.el6_5.noarch

On the RHS nodes -

[root@rhs ~]# rpm -qa|grep vdsm
vdsm-python-4.14.5-21.git7a3d0f0.el6rhs.x86_64
vdsm-python-zombiereaper-4.14.5-21.git7a3d0f0.el6rhs.noarch
vdsm-xmlrpc-4.14.5-21.git7a3d0f0.el6rhs.noarch
vdsm-cli-4.14.5-21.git7a3d0f0.el6rhs.noarch
vdsm-4.14.5-21.git7a3d0f0.el6rhs.x86_64
vdsm-reg-4.14.5-21.git7a3d0f0.el6rhs.noarch
vdsm-gluster-4.14.5-21.git7a3d0f0.el6rhs.noarch

[root@rhs ~]# rpm -qa|grep glusterfs
glusterfs-libs-3.6.0.5-1.el6rhs.x86_64
glusterfs-cli-3.6.0.5-1.el6rhs.x86_64
glusterfs-3.6.0.5-1.el6rhs.x86_64
glusterfs-api-3.6.0.5-1.el6rhs.x86_64
glusterfs-server-3.6.0.5-1.el6rhs.x86_64
glusterfs-rdma-3.6.0.5-1.el6rhs.x86_64
samba-glusterfs-3.6.9-168.1.el6rhs.x86_64
glusterfs-fuse-3.6.0.5-1.el6rhs.x86_64
glusterfs-geo-replication-3.6.0.5-1.el6rhs.x86_64

How reproducible:
Saw it once.

Steps to Reproduce:
1. Create a cluster of RHS 3.0 nodes and import the cluster via RHSC.

Actual results:
The nodes move to the Non-operational state.

Expected results:
The nodes are expected to come up after bootstrapping.

Additional info:

Comment 1 Shruti Sampat 2014-05-22 10:07:32 UTC
Created attachment 898317 [details]
host-deploy logs

Comment 2 Kanagaraj 2014-05-22 17:41:50 UTC
RHEV-M bug 1096715

Comment 3 Shruti Sampat 2014-05-29 06:35:57 UTC
Verified as fixed in rhsc-3.0.0-0.6.master.el6_5.noarch

Comment 5 errata-xmlrpc 2014-09-22 19:10:01 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-1277.html