Bug 1100219 - [RHSC] Node moved to Non-operational state when the cluster was imported to RHSC
Summary: [RHSC] Node moved to Non-operational state when the cluster was imported to RHSC
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhsc
Version: rhgs-3.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: RHGS 3.0.0
Assignee: Kanagaraj
QA Contact: Shruti Sampat
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-05-22 10:03 UTC by Shruti Sampat
Modified: 2016-04-18 10:06 UTC
CC: 9 users

Fixed In Version: rhsc-3.0.0-0.6.master.el6_5
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-09-22 19:10:01 UTC
Embargoed:


Attachments
engine logs (248.30 KB, text/x-log)
2014-05-22 10:03 UTC, Shruti Sampat
no flags
host-deploy logs (220.55 KB, text/x-log)
2014-05-22 10:07 UTC, Shruti Sampat
no flags


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2014:1277 0 normal SHIPPED_LIVE Red Hat Storage Console 3.0 enhancement and bug fix update 2014-09-22 23:06:30 UTC

Description Shruti Sampat 2014-05-22 10:03:12 UTC
Created attachment 898316 [details]
engine logs

Description of problem:
------------------------

Created a cluster of RHS 3.0 nodes and imported the cluster via RHSC. The nodes moved to the Non-operational state after a couple of minutes.

The engine logs show that there was a null pointer exception. See attached logs.
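
To confirm the same failure on another setup, the stack trace can be located in the engine log on the RHSC server. This is only a sketch - it assumes the default oVirt-style log location that RHSC uses; adjust the path if the install differs:

[root@rhsc ~]# grep -n -B 5 -A 20 "NullPointerException" /var/log/ovirt-engine/engine.log
[root@rhsc ~]# grep -iE "non[ _-]?operational" /var/log/ovirt-engine/engine.log | tail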

Version-Release number of selected component (if applicable):
rhsc-3.0.0-0.5.master.el6_5.noarch

On the RHS nodes -

[root@rhs ~]# rpm -qa|grep vdsm
vdsm-python-4.14.5-21.git7a3d0f0.el6rhs.x86_64
vdsm-python-zombiereaper-4.14.5-21.git7a3d0f0.el6rhs.noarch
vdsm-xmlrpc-4.14.5-21.git7a3d0f0.el6rhs.noarch
vdsm-cli-4.14.5-21.git7a3d0f0.el6rhs.noarch
vdsm-4.14.5-21.git7a3d0f0.el6rhs.x86_64
vdsm-reg-4.14.5-21.git7a3d0f0.el6rhs.noarch
vdsm-gluster-4.14.5-21.git7a3d0f0.el6rhs.noarch

[root@rhs ~]# rpm -qa|grep glusterfs
glusterfs-libs-3.6.0.5-1.el6rhs.x86_64
glusterfs-cli-3.6.0.5-1.el6rhs.x86_64
glusterfs-3.6.0.5-1.el6rhs.x86_64
glusterfs-api-3.6.0.5-1.el6rhs.x86_64
glusterfs-server-3.6.0.5-1.el6rhs.x86_64
glusterfs-rdma-3.6.0.5-1.el6rhs.x86_64
samba-glusterfs-3.6.9-168.1.el6rhs.x86_64
glusterfs-fuse-3.6.0.5-1.el6rhs.x86_64
glusterfs-geo-replication-3.6.0.5-1.el6rhs.x86_64

How reproducible:
Saw it once.

Steps to Reproduce:
1. Create a cluster of RHS 3.0 nodes and import the cluster via RHSC.
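
For reference, a rough outline of step 1 on the storage nodes before pointing the RHSC import flow at any one of the peers (the host names below are placeholders, not the ones used here):

[root@rhs-node1 ~]# gluster peer probe rhs-node2.example.com
[root@rhs-node1 ~]# gluster peer probe rhs-node3.example.com
[root@rhs-node1 ~]# gluster peer status

All peers should show "Peer in Cluster (Connected)" before the import is attempted.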

Actual results:
The nodes move to the Non-operational state.

Expected results:
The nodes are expected to come up after bootstrapping.
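
One way to check this from the command line is to query host state through the REST API the console exposes (the address and credentials below are placeholders; this assumes the standard oVirt-style /api/hosts resource):

[root@rhsc ~]# curl -k -u admin@internal:PASSWORD https://rhsc.example.com/api/hosts

Each <host> element should report a status of "up" once bootstrapping completes, rather than "non_operational".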

Additional info:

Comment 1 Shruti Sampat 2014-05-22 10:07:32 UTC
Created attachment 898317 [details]
host-deploy logs

Comment 2 Kanagaraj 2014-05-22 17:41:50 UTC
RHEV-M bug 1096715

Comment 3 Shruti Sampat 2014-05-29 06:35:57 UTC
Verified as fixed in rhsc-3.0.0-0.6.master.el6_5.noarch

Comment 5 errata-xmlrpc 2014-09-22 19:10:01 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-1277.html

