Bug 924193

Summary: [RHS-C] Unable to add the latest RHS 2.0+ host in RHS-C due to compatibility issues
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Prasanth <pprakash>
Component: vdsm
Assignee: Bala.FA <barumuga>
Status: CLOSED ERRATA
QA Contact: Prasanth <pprakash>
Severity: urgent
Priority: high
Version: 2.0
CC: dpati, dtsang, grajaiya, knarra, mmahoney, pprakash, rhs-bugs, sabose, shaines, ssampat
Hardware: All   
OS: All   
Fixed In Version: vdsm-4.9.6-21.el6rhs.x86_64
Doc Type: Bug Fix
Doc Text:
Cause: vdsm on a Red Hat Storage 2.0 Update 4 node reported cluster compatibility levels 3.0 and 3.1, while the Tech Preview Console requires compatibility level 2.0.
Consequence: A Red Hat Storage 2.0 Update 4 node could not be added to the Tech Preview Console.
Fix: vdsm on the Red Hat Storage 2.0 Update 4 node was modified to also report cluster compatibility level 2.0.
Result: A Red Hat Storage 2.0 Update 4 node can now be added to the Tech Preview Console.
Last Closed: 2013-07-15 21:51:40 UTC
Type: Bug
Attachments:
engine logs
vdsm.log
screenshot

Description Prasanth 2013-03-21 10:58:41 UTC
Description of problem:

Unable to add the latest RHS 2.0+ host in RHS-C due to compatibility issues


Version-Release number of selected component (if applicable):

From Engine:
rhsc-2.0.techpreview1-3.el6rhs.noarch
vdsm-bootstrap-4.9.6-14.el6rhs.noarch

From Node:
Latest Anshi ISO: http://download.devel.redhat.com/composes/candidates/RHS-2.0-20130317.0/2/RHS/x86_64/iso/RHS-2.0-20130317.0-RHS-x86_64-DVD1.iso

# rpm -qa |grep vdsm
vdsm-python-4.9.6-19.el6rhs.x86_64
vdsm-gluster-4.9.6-19.el6rhs.noarch
vdsm-cli-4.9.6-19.el6rhs.noarch
vdsm-4.9.6-19.el6rhs.x86_64
vdsm-reg-4.9.6-19.el6rhs.noarch

glusterfs-fuse-3.3.0.6rhs-4.el6rhs.x86_64
glusterfs-3.3.0.6rhs-4.el6rhs.x86_64
org.apache.hadoop.fs.glusterfs-glusterfs-0.20.2_0.2-1.noarch
glusterfs-server-3.3.0.6rhs-4.el6rhs.x86_64
glusterfs-rdma-3.3.0.6rhs-4.el6rhs.x86_64
glusterfs-geo-replication-3.3.0.6rhs-4.el6rhs.x86_64


How reproducible: 100%


Steps to Reproduce:
1. Install RHS-C
2. Create a New Cluster
3. Add a new server (which is a RHS 2.0+ ISO installed node)
  
Actual results: Adding the server fails and the server status is set to "Non Operational" with the following error in the "Events" tab:

"Host <hostname> is compatible with versions (3.0,3.1) and cannot join Cluster newcluster which is set to version 2.0."


Expected results: Adding the server should succeed with the latest RHS 2.0+ ISO.


Additional info: Logs attached

Comment 2 Prasanth 2013-03-21 11:01:38 UTC
Created attachment 713769 [details]
engine logs

Comment 3 Prasanth 2013-03-21 11:02:52 UTC
Created attachment 713772 [details]
vdsm.log

Comment 4 Prasanth 2013-03-21 11:07:29 UTC
Created attachment 713774 [details]
screenshot

Comment 5 Sahina Bose 2013-03-21 11:54:30 UTC
The issue occurs because vdsm on the Anshi RHS node reports cluster levels 3.0 and 3.1.

The RHSC tech preview expects a cluster level 2.0.
Workaround: modify the following line in /usr/share/vdsm/dsaversion.py and restart vdsm:

  'clusterLevels': ['3.0', '3.1']

change to

  'clusterLevels': ['2.0', '3.0', '3.1']
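
For illustration, a minimal sketch of what the relevant part of /usr/share/vdsm/dsaversion.py could look like after the workaround; the version_info name and everything other than the 'clusterLevels' value are assumptions, not taken from the actual file:

  # Sketch only: the version_info name and its other keys are assumptions;
  # the 'clusterLevels' value is the one from the workaround above.
  version_info = {
      # Advertise 2.0 in addition to 3.0/3.1 so that the Tech Preview
      # Console, which expects cluster compatibility level 2.0, accepts
      # the host.
      'clusterLevels': ['2.0', '3.0', '3.1'],
  }

  # Quick sanity check that 2.0 is now advertised.
  assert '2.0' in version_info['clusterLevels']

After saving the change, vdsm has to be restarted (for example with 'service vdsmd restart' on RHEL 6) so that the new cluster levels are reported to the console.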

Comment 6 Bala.FA 2013-03-23 04:54:06 UTC
Is this bug limited to the compatibility version only? I remember the RHSC tech preview uses 'ssl=False'. Is that change required?

Comment 7 Bala.FA 2013-03-23 06:27:52 UTC
Please open a new bug if ssl=False is needed.

Comment 8 Prasanth 2013-03-26 13:12:17 UTC
Verified as fixed in vdsm-4.9.6-21.el6rhs.x86_64

Comment 10 errata-xmlrpc 2013-07-15 21:51:40 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1064.html