Description of problem:
Adding an RHS server, based on the RHS-2.0-20130115.0-RHS-x86_64-DVD1.iso build, as a host to a cluster with the Gluster feature checked in RHEV-M fails with the error message:

"Failed to install Host xxxx Step: RHN_REGISTRATION; Details: Unable to fetch vdsm package. Please check if host is registered to RHN, Satellite or other yum repository."

The host is registered to RHN and already contains the vdsm package:

# rhn-channel -l
rhel-x86_64-server-6.2.z

# rpm -qa | grep vdsm
vdsm-python-4.9.6-17.el6rhs.x86_64
vdsm-4.9.6-17.el6rhs.x86_64
vdsm-cli-4.9.6-17.el6rhs.noarch
vdsm-gluster-4.9.6-17.el6rhs.noarch
vdsm-reg-4.9.6-17.el6rhs.noarch

# rpm -qa | grep gluster
glusterfs-3.3.0.5rhs-40.el6rhs.x86_64
glusterfs-server-3.3.0.5rhs-40.el6rhs.x86_64
gluster-swift-account-1.4.8-4.el6.noarch
gluster-swift-1.4.8-4.el6.noarch
glusterfs-fuse-3.3.0.5rhs-40.el6rhs.x86_64
vdsm-gluster-4.9.6-17.el6rhs.noarch
gluster-swift-plugin-1.0-5.noarch
gluster-swift-object-1.4.8-4.el6.noarch
gluster-swift-container-1.4.8-4.el6.noarch
org.apache.hadoop.fs.glusterfs-glusterfs-0.20.2_0.2-1.noarch
glusterfs-rdma-3.3.0.5rhs-40.el6rhs.x86_64
gluster-swift-proxy-1.4.8-4.el6.noarch
glusterfs-geo-replication-3.3.0.5rhs-40.el6rhs.x86_64
gluster-swift-doc-1.4.8-4.el6.noarch

Version-Release number of selected component (if applicable):
RHS-2.0-20130115.0-RHS-x86_64-DVD1.iso

How reproducible:

Steps to Reproduce:
1. Install an RHS server with RHS-2.0-20130115.0-RHS-x86_64-DVD1.iso, register the system to RHN, and update.
2. Create a cluster with the Gluster feature in RHEV-M, and try to add the RHS server as a host to this cluster.

Actual results:
Adding the RHS server as a host fails. Even though the vdsm package is already installed and the host is registered to RHN, the failure message reports "Unable to fetch vdsm package. Please check if host is registered to RHN, Satellite or other yum repository."
Expected results:
Adding the RHS server as a host should complete successfully.

Additional info:
The BZ below should have fixed this issue, with the fix made available in RHS-2.0-20130115.0-RHS-x86_64-DVD1.iso:
Bug 874501 - Adding Beta RHS-2.0-20121031.0-RHS-x86_64 host fails in gluster cluster using RHEV-M
Additional information: I have confirmed that if a local yum repository with the vdsm packages is set up, the add-host operation completes successfully. However, the same vdsm packages were already installed on the system, and this should have been detected in the first place. It appears that the check for installed and available packages, and for system registration status, is inefficient and faulty.
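For reference, the local-repository workaround described above can be sketched roughly as follows. The directory, repository id, and file paths here are illustrative assumptions, not values taken from this report:

```shell
#!/bin/sh
# Rough sketch of the local yum repository workaround described above.
# All paths and the repository id ("local-vdsm") are illustrative assumptions.

REPO_DIR=/tmp/local-vdsm-repo
REPO_FILE=/tmp/local-vdsm.repo   # would normally live in /etc/yum.repos.d/

mkdir -p "$REPO_DIR"
# Copy the vdsm*.rpm packages (e.g. from the RHS DVD) into $REPO_DIR,
# then build the repository metadata if createrepo is present.
if command -v createrepo >/dev/null 2>&1; then
    createrepo "$REPO_DIR"
else
    echo "createrepo not installed; skipping metadata generation"
fi

# Minimal .repo definition pointing yum at the local directory.
cat > "$REPO_FILE" <<EOF
[local-vdsm]
name=Local vdsm packages
baseurl=file://$REPO_DIR
enabled=1
gpgcheck=0
EOF
```

With such a repository enabled on the host, yum can resolve the vdsm package locally, which is why the add-host bootstrap then succeeds.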
Bala, can you take a look at this?
Could you tell me which RHEV-M version you are using?
(In reply to comment #5)
> Could you tell me RHEV-M version you are using?

The 'About' dialog in RHEV-M shows:
Red Hat Enterprise Virtualization Manager Version: 3.1.0-32.el6ev

Some more information from the system:

# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.3 (Santiago)

# rpm -qa | grep -i rhev
rhevm-restapi-3.1.0-32.el6ev.noarch
rhevm-3.1.0-32.el6ev.noarch
rhevm-sdk-3.1.0.16-1.el6ev.noarch
rhevm-config-3.1.0-32.el6ev.noarch
rhevm-dbscripts-3.1.0-32.el6ev.noarch
rhevm-log-collector-3.1.0-9.el6ev.noarch
rhevm-image-uploader-3.1.0-7.el6ev.noarch
rhevm-backend-3.1.0-32.el6ev.noarch
rhevm-spice-client-x64-cab-3.1-8.el6.noarch
rhevm-doc-3.1.0-21.el6eng.noarch
rhevm-notification-service-3.1.0-32.el6ev.noarch
rhev-guest-tools-iso-3.1-9.noarch
rhevm-iso-uploader-3.1.0-8.el6ev.noarch
rhevm-genericapi-3.1.0-32.el6ev.noarch
rhevm-spice-client-x86-cab-3.1-8.el6.noarch
rhevm-setup-3.1.0-32.el6ev.noarch
rhevm-cli-3.1.0.17-1.el6ev.noarch
rhevm-userportal-3.1.0-32.el6ev.noarch
rhevm-webadmin-portal-3.1.0-32.el6ev.noarch
rhevm-tools-common-3.1.0-32.el6ev.noarch
The 'Add host' bootstrap process is served entirely from RHEV-M; no code in RHS controls it. Assigning to Alon, the bootstrap maintainer of RHEV-M.
(In reply to comment #7)
> 'Add host' involving bootstrap process is served from RHEV-M and no code
> controls this in RHS. Assigning to Alon, Bootstrap maintainer of RHEV-M

This behavior is legacy and by design: host deploy will always look at the channel and will not query the system for installed packages. In ovirt-3.2, using ovirt-host-deploy, there is an option to perform an offline mode; maybe that will be better suited for RHS.
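To make the design described above concrete, here is a toy simulation (not the actual host-deploy code) of why the bootstrap fails even with vdsm installed: host deploy consults the subscribed channel rather than the local rpm database, and the rhel-x86_64-server-6.2.z channel does not carry vdsm. The package lists below are illustrative stand-ins for the rpm database and the channel contents:

```shell
#!/bin/sh
# Toy simulation of the two different checks (an illustration of the
# behavior described above, NOT the real host-deploy code).

# State of the reporter's host (illustrative stand-ins):
INSTALLED_PKGS="vdsm vdsm-cli vdsm-gluster"   # local rpm database
CHANNEL_PKGS=""   # rhel-x86_64-server-6.2.z does not carry vdsm

# Succeeds if package $1 appears in the space-separated list $2.
have() { echo " $2 " | grep -q " $1 "; }

# What the reporter expected the check to be (query the system):
if have vdsm "$INSTALLED_PKGS"; then
    echo "installed-check: vdsm present"
fi

# What host deploy actually checks (query the channel):
if have vdsm "$CHANNEL_PKGS"; then
    echo "channel-check: vdsm fetchable"
else
    echo "channel-check: Unable to fetch vdsm package"
fi
# Prints:
#   installed-check: vdsm present
#   channel-check: Unable to fetch vdsm package
```

The two checks disagree exactly as in the reported failure: the installed-package check passes while the channel check fails.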
On 02/21/2013 11:35 AM, Gowrishankar Rajaiyan wrote:
> On 02/21/2013 11:28 AM, Scott Haines wrote:
>> On 02/20/2013 09:46 PM, Shireesh Anjal wrote:
>>> As mentioned by Alon on the BZ, this behavior is by design. The system
>>> must either be registered to the channel, or have a local repository
>>> containing the required packages. It is *not* a blocker as I believe we
>>> expect customers to register to the channel if in production.
>>>
>>> I remember that in beta2, we did not want the customer to have the
>>> hassle of registering to channel - install and ready to go. In that
>>> release, we had made sure that the ISO contains a local repository with
>>> required packages and hence this works.
>>
>> So, close, not a bug?
>
> No, please put it ON_QA.
Verified as fixed in RHS-2.0-20130219.3-RHS-x86_64-DVD1.iso.

Adding an RHS server, based on the RHS-2.0-20130219.3-RHS-x86_64-DVD1.iso build, as a host to a cluster with the Gluster feature checked in RHEV-M now succeeds without any error.
Hi sabose, not sure what more I can do further.