Created attachment 898316 [details]
engine logs

Description of problem:
------------------------
Created a cluster of RHS 3.0 nodes and imported the cluster via RHSC. The nodes moved to the Non-operational state after a couple of minutes. The engine logs show a null pointer exception. See attached logs.

Version-Release number of selected component (if applicable):
rhsc-3.0.0-0.5.master.el6_5.noarch

On the RHS nodes -
[root@rhs ~]# rpm -qa|grep vdsm
vdsm-python-4.14.5-21.git7a3d0f0.el6rhs.x86_64
vdsm-python-zombiereaper-4.14.5-21.git7a3d0f0.el6rhs.noarch
vdsm-xmlrpc-4.14.5-21.git7a3d0f0.el6rhs.noarch
vdsm-cli-4.14.5-21.git7a3d0f0.el6rhs.noarch
vdsm-4.14.5-21.git7a3d0f0.el6rhs.x86_64
vdsm-reg-4.14.5-21.git7a3d0f0.el6rhs.noarch
vdsm-gluster-4.14.5-21.git7a3d0f0.el6rhs.noarch

[root@rhs ~]# rpm -qa|grep glusterfs
glusterfs-libs-3.6.0.5-1.el6rhs.x86_64
glusterfs-cli-3.6.0.5-1.el6rhs.x86_64
glusterfs-3.6.0.5-1.el6rhs.x86_64
glusterfs-api-3.6.0.5-1.el6rhs.x86_64
glusterfs-server-3.6.0.5-1.el6rhs.x86_64
glusterfs-rdma-3.6.0.5-1.el6rhs.x86_64
samba-glusterfs-3.6.9-168.1.el6rhs.x86_64
glusterfs-fuse-3.6.0.5-1.el6rhs.x86_64
glusterfs-geo-replication-3.6.0.5-1.el6rhs.x86_64

How reproducible:
Saw it once.

Steps to Reproduce:
1. Create a cluster of RHS 3.0 nodes and import the cluster via RHSC.

Actual results:
The nodes move to the Non-operational state.

Expected results:
The nodes are expected to come up after bootstrapping.

Additional info:
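The cluster-creation part of the reproduction step can be sketched roughly as follows. This is a minimal sketch, assuming glusterd is already running on each RHS 3.0 node; the hostnames are hypothetical placeholders, and the RHSC import itself is done from the UI, not the CLI:

# On one RHS node, probe the other nodes to form the trusted storage pool
# (hostnames are placeholders for the actual RHS 3.0 nodes).
gluster peer probe rhs-node2.example.com
gluster peer probe rhs-node3.example.com

# Confirm every peer shows "State: Peer in Cluster (Connected)" before
# importing the cluster via the RHSC UI.
gluster peer status

Once the peers are connected, the cluster is imported through RHSC, after which the nodes briefly show as Up and then move to Non-operational as described above.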
Created attachment 898317 [details] host-deploy logs
RHEV-M bug 1096715
Verified as fixed in rhsc-3.0.0-0.6.master.el6_5.noarch
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHEA-2014-1277.html