Bug 902811 - Adding RHS-2.0-20130115.0-RHS-x86_64 based RHS Host, to a Cluster with Gluster feature checked, in RHEV-M, fails
Status: CLOSED CURRENTRELEASE
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: build
Version: 2.0
Hardware: All OS: All
Priority: high Severity: high
Assigned To: Sahina Bose
QA Contact: Prasanth
Depends On:
Blocks:
Reported: 2013-01-22 08:04 EST by Rejy M Cyriac
Modified: 2015-08-10 03:47 EDT (History)
9 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-08-10 03:47:11 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---

Attachments

None
Description Rejy M Cyriac 2013-01-22 08:04:54 EST
Description of problem:
Adding a RHS server, based on RHS-2.0-20130115.0-RHS-x86_64-DVD1.iso build, as a host to a Cluster with Gluster feature checked, in RHEV-M, fails with the error message:

"
Failed to install Host xxxx Step:RHN_REGISTRATION; Details: Unable to fetch vdsm package. Please check if host is registered to RHN, Satellite or other yum repository.
"

The host is registered to RHN, and already contains the vdsm package.

# rhn-channel -l
rhel-x86_64-server-6.2.z

# rpm -qa | grep vdsm
vdsm-python-4.9.6-17.el6rhs.x86_64
vdsm-4.9.6-17.el6rhs.x86_64
vdsm-cli-4.9.6-17.el6rhs.noarch
vdsm-gluster-4.9.6-17.el6rhs.noarch
vdsm-reg-4.9.6-17.el6rhs.noarch

# rpm -qa | grep gluster
glusterfs-3.3.0.5rhs-40.el6rhs.x86_64
glusterfs-server-3.3.0.5rhs-40.el6rhs.x86_64
gluster-swift-account-1.4.8-4.el6.noarch
gluster-swift-1.4.8-4.el6.noarch
glusterfs-fuse-3.3.0.5rhs-40.el6rhs.x86_64
vdsm-gluster-4.9.6-17.el6rhs.noarch
gluster-swift-plugin-1.0-5.noarch
gluster-swift-object-1.4.8-4.el6.noarch
gluster-swift-container-1.4.8-4.el6.noarch
org.apache.hadoop.fs.glusterfs-glusterfs-0.20.2_0.2-1.noarch
glusterfs-rdma-3.3.0.5rhs-40.el6rhs.x86_64
gluster-swift-proxy-1.4.8-4.el6.noarch
glusterfs-geo-replication-3.3.0.5rhs-40.el6rhs.x86_64
gluster-swift-doc-1.4.8-4.el6.noarch


Version-Release number of selected component (if applicable):
RHS-2.0-20130115.0-RHS-x86_64-DVD1.iso

How reproducible:


Steps to Reproduce:
1. Install an RHS server with RHS-2.0-20130115.0-RHS-x86_64-DVD1.iso, register the system to RHN, and update it.
2. Create a Cluster with Gluster feature in RHEV-M, and try to add the RHS server as a host to this cluster.
  
Actual results:
Adding the Host fails for the RHS server. Even though the vdsm package is already installed, and the Host is registered to RHN, the failure message reports "Unable to fetch vdsm package. Please check if host is registered to RHN, Satellite or other yum repository."

Expected results:
Adding the RHS server as a Host must complete successfully.

Additional info:
The BZ given below should have fixed this issue, with the fix made available in RHS-2.0-20130115.0-RHS-x86_64-DVD1.iso:

Bug 874501 - Adding Beta RHS-2.0-20121031.0-RHS-x86_64 host fails in gluster cluster using RHEV-M
Comment 1 Rejy M Cyriac 2013-01-22 08:30:48 EST
Additional information:

I have confirmed that if a local yum repository with the vdsm packages is set up, then adding the Host completes successfully. However, the same vdsm packages were already installed on the system, and this should have been detected earlier as well. It appears that the check process for installed and available packages, and for system registration status, is inefficient and faulty.
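For reference, the local-repository workaround described above can be sketched as follows. This is a config sketch, not taken from the original report: the directory path, repo id, and repo name are illustrative, and createrepo must be installed on the host.

```shell
# Copy the vdsm packages into a local directory and generate yum metadata
# (path /var/local-repo is illustrative)
mkdir -p /var/local-repo
cp vdsm-*.rpm /var/local-repo/
createrepo /var/local-repo

# Point yum at the directory with a local .repo file
cat > /etc/yum.repos.d/local-vdsm.repo <<'EOF'
[local-vdsm]
name=Local vdsm packages
baseurl=file:///var/local-repo
enabled=1
gpgcheck=0
EOF
```

With such a repository in place, the bootstrap's yum lookup for vdsm can succeed even without an RHN channel serving the package.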
Comment 2 Vidya Sakar 2013-01-23 03:41:58 EST
Bala, can you take a look at this?
Comment 5 Bala.FA 2013-01-25 02:39:16 EST
Could you tell me RHEV-M version you are using?
Comment 6 Rejy M Cyriac 2013-01-25 02:45:35 EST
(In reply to comment #5)
> Could you tell me RHEV-M version you are using?

The 'About' button on RHEV-M shows

 Red Hat Enterprise Virtualization Manager Version: 3.1.0-32.el6ev 

Some more information from the system:

# cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 6.3 (Santiago)

# rpm -qa | grep -i rhev
rhevm-restapi-3.1.0-32.el6ev.noarch
rhevm-3.1.0-32.el6ev.noarch
rhevm-sdk-3.1.0.16-1.el6ev.noarch
rhevm-config-3.1.0-32.el6ev.noarch
rhevm-dbscripts-3.1.0-32.el6ev.noarch
rhevm-log-collector-3.1.0-9.el6ev.noarch
rhevm-image-uploader-3.1.0-7.el6ev.noarch
rhevm-backend-3.1.0-32.el6ev.noarch
rhevm-spice-client-x64-cab-3.1-8.el6.noarch
rhevm-doc-3.1.0-21.el6eng.noarch
rhevm-notification-service-3.1.0-32.el6ev.noarch
rhev-guest-tools-iso-3.1-9.noarch
rhevm-iso-uploader-3.1.0-8.el6ev.noarch
rhevm-genericapi-3.1.0-32.el6ev.noarch
rhevm-spice-client-x86-cab-3.1-8.el6.noarch
rhevm-setup-3.1.0-32.el6ev.noarch
rhevm-cli-3.1.0.17-1.el6ev.noarch
rhevm-userportal-3.1.0-32.el6ev.noarch
rhevm-webadmin-portal-3.1.0-32.el6ev.noarch
rhevm-tools-common-3.1.0-32.el6ev.noarch
Comment 7 Bala.FA 2013-01-25 04:57:09 EST

'Add host' involving bootstrap process is served from RHEV-M and no code controls this in RHS.  Assigning to Alon, Bootstrap maintainer of RHEV-M
Comment 8 Alon Bar-Lev 2013-02-12 15:28:48 EST
(In reply to comment #7)
> 
> 'Add host' involving bootstrap process is served from RHEV-M and no code
> controls this in RHS.  Assigning to Alon, Bootstrap maintainer of RHEV-M

This behavior is legacy and by design: host deploy always looks at the channel and does not query the system.

In ovirt-3.2, which uses ovirt-host-deploy, there is an option to perform an offline mode; that may be better for RHS.
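The channel-versus-system distinction described above can be illustrated with two commands. This is a sketch of the observable difference on the affected host, not of host deploy's actual internals:

```shell
# Query the local RPM database — this succeeds on the affected host,
# because vdsm is already installed:
rpm -q vdsm

# Query the configured repositories/channels — this is effectively what
# the bootstrap relies on, and it fails when no repository serves vdsm:
yum list available vdsm || echo "no repository provides vdsm"
```

Because the bootstrap installs from the channel rather than checking the RPM database, an already-installed vdsm does not satisfy it.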
Comment 9 Shireesh 2013-02-21 01:10:09 EST
On 02/21/2013 11:35 AM, Gowrishankar Rajaiyan wrote:
> On 02/21/2013 11:28 AM, Scott Haines wrote:
>> On 02/20/2013 09:46 PM, Shireesh Anjal wrote:
>>> As mentioned by Alon on the BZ, this behavior is by design. The system
>>> must either be registered to the channel, or have a local repository
>>> containing the required packages. It is *not* a blocker as I believe we
>>> expect customers to register to the channel if in production.
>>>
>>> I remember that in beta2, we did not want the customer to have the
>>> hassle of registering to channel - install and ready to go. In that
>>> release, we had made sure that the ISO contains a local repository with
>>> required packages and hence this works. 
>>
>> So, close, not a bug? 
>
> No, please put it ON_QA.
Comment 10 Prasanth 2013-03-04 05:46:15 EST
Verified as fixed in RHS-2.0-20130219.3-RHS-x86_64-DVD1.iso

Adding a RHS server, based on RHS-2.0-20130219.3-RHS-x86_64-DVD1.iso build, as a host to a Cluster with Gluster feature checked, in RHEV-M, now succeeds without any error.
Comment 11 Alon Bar-Lev 2015-02-11 15:47:06 EST
Hi sabose, not sure what I can do further.
