Bug 214989 - Package download not working for Conga during cluster creation
Status: CLOSED CURRENTRELEASE
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: conga
Version: 5.0
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Assigned To: Stanko Kupcevic
QA Contact: Corey Marthaler
Keywords: Reopened
Depends On:
Blocks:
Reported: 2006-11-10 09:41 EST by Len DiMaggio
Modified: 2009-04-16 18:35 EDT
CC: 6 users

Fixed In Version: beta2
Doc Type: Bug Fix
Last Closed: 2006-12-22 21:29:16 EST


Attachments: None
Description Len DiMaggio 2006-11-10 09:41:49 EST
Description of problem:

With the cluster packages available in a yum repo, selecting "Download packages
from RHN" fails - how should this be configured to work?
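
For context, a yum repo definition carrying the cluster rpms might look like the sketch below. This is illustrative only: the repo id, name, and baseurl are hypothetical (not taken from this report), and a temp file stands in for a file under /etc/yum.repos.d/, which needs root to write.

```shell
# Hypothetical sketch of a yum repo definition for the cluster rpms.
# Repo id, name, and baseurl are illustrative; a temp file stands in
# for /etc/yum.repos.d/cluster.repo.
REPO_FILE="$(mktemp)"
cat > "$REPO_FILE" <<'EOF'
[cluster]
name=RHEL5 Cluster
baseurl=file:///media/cdrom/Cluster
enabled=1
gpgcheck=0
EOF
cat "$REPO_FILE"
```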

Version-Release number of selected component (if applicable):
ricci-0.8-23.el5
RHEL5-Server-20061102.2

How reproducible:
100%

Steps to Reproduce:
1. Create a new cluster - select "Download packages from RHN"
  
Actual results:

The ricci queue entry contains:

----------------------------------------------------------
<?xml version="1.0"?>
<batch batch_id="959650446" status="4">
        <module name="rpm" status="4">
                <response API_version="1.0" sequence="">
                        <function_response function_name="install">
                                <var mutable="false" name="success"
type="boolean" value="false"/>
                                <var mutable="false" name="error_code"
type="int" value="-1"/>
                                <var mutable="false" name="error_description"
type="string" value="packages of set Cluster Base not present in repository"/>
                        </function_response>
                </response>
        </module>
        <module name="reboot" status="5">
                <request API_version="1.0">
                        <function_call name="reboot_now"/>
                </request>
        </module>
        <module name="cluster" status="5">
                <request API_version="1.0">
                        <function_call name="set_cluster.conf">
                                <var mutable="false" name="propagate"
type="boolean" value="false"/>
                                <var mutable="false" name="cluster.conf" type="xml">
                                        <cluster alias="node2"
config_version="1" name="node2">
                                                <fence_daemon
post_fail_delay="0" post_join_delay="3"/>
                                                <clusternodes>
                                                        <clusternode
name="tng3-2" nodeid="1" votes="1"/>
                                                </clusternodes>
                                                <cman/>
                                                <fencedevices/>
                                                <rm/>
                                        </cluster>
                                </var>
                        </function_call>
                </request>
        </module>
        <module name="cluster" status="5">
                <request API_version="1.0">
                        <function_call name="start_node">
                                <var mutable="false" name="cluster_startup"
type="boolean" value="true"/>
                        </function_call>
                </request>
        </module>
</batch>
----------------------------------------------------------
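
The failing module's message can be pulled out of a batch response like the one above with a small pipeline. The sed expression below is a sketch, with the relevant XML line inlined so it is self-contained:

```shell
# Sketch: extract error_description from a ricci batch response.
# The var line is copied from the batch output above so the pipeline
# can be shown self-contained.
LINE='<var mutable="false" name="error_description" type="string" value="packages of set Cluster Base not present in repository"/>'
ERR=$(printf '%s\n' "$LINE" | sed -n 's/.*name="error_description".*value="\([^"]*\)".*/\1/p')
echo "$ERR"
```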

Expected results:
Successful download

Additional info:

The cluster nodes have this configuration:

[root@tng3-2 ~]# yum list ricci
Loading "installonlyn" plugin
Setting up repositories
Reading repository metadata in from local files
Installed Packages
ricci.i386                               0.8-23.el5             installed

[root@tng3-2 ~]# yum list | grep Cluster
cluster-cim.i386                         0.8-21.el5             Cluster
cluster-snmp.i386                        0.8-21.el5             Cluster
ipvsadm.i386                             1.24-8.1               Cluster
kmod-gfs-xen.i686                        0.1.11-1.2.6.18_1.2740 ClusterStorage
kmod-gnbd-xen.i686                       0.1.1-12.2.6.18_1.2740 ClusterStorage
luci.i386                                0.8-23.el5             Cluster
piranha.i386                             0.8.4-7.el5            Cluster
system-config-cluster.noarch             1.0.35-1.0             Cluster
Comment 1 Len DiMaggio 2006-11-10 14:08:18 EST
Not a bug - the systems must be subscribed to RHN in order to download packages.
ricci checks for the presence of /etc/sysconfig/rhn/systemid - unless this file
is present, the download will not be performed. With this file in place - even
an empty file - the download is successful.
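
The check described above can be sketched as follows; a temp directory stands in for /etc/sysconfig/rhn, which needs root to modify:

```shell
# Sketch of the systemid check from Comment 1: ricci only tests whether
# the file exists, so an empty file is enough. A temp directory stands
# in for /etc/sysconfig/rhn here.
RHN_DIR="$(mktemp -d)"
if [ ! -f "$RHN_DIR/systemid" ]; then
    echo "systemid absent: RHN download would be refused"
fi
# The workaround from the comment: create the file, even empty.
touch "$RHN_DIR/systemid"
if [ -f "$RHN_DIR/systemid" ]; then
    echo "systemid present: download proceeds"
fi
```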

Comment 2 Len DiMaggio 2006-11-10 14:18:02 EST
Hang on.

Will this be an issue for users who install from DVD or via RHN Satellite?
Kevin, Jim?
Comment 3 Stanko Kupcevic 2006-11-10 14:49:28 EST
The RHN check will be removed from the RHEL5 modrpm module, meaning that as
long as yum is configured to use repos that contain the cluster rpms,
deployment should proceed.

Reopening bug
Comment 4 Stanko Kupcevic 2006-12-13 14:36:30 EST
Fixed in HEAD

Needs to be committed to RHEL5 as well.
Comment 5 Stanko Kupcevic 2006-12-13 15:30:20 EST
Fixed in RHEL5
Comment 6 RHEL Product and Program Management 2006-12-22 21:29:16 EST
A package has been built which should help the problem described in 
this bug report. This report is therefore being closed with a resolution 
of CURRENTRELEASE. You may reopen this bug report if the solution does 
not work for you.
Comment 7 Len DiMaggio 2007-01-18 14:55:07 EST
I'm seeing the following behavior with these packages: luci-0.8-29.el5,
ricci-0.8-29.el5

1) Before attempting cluster creation, confirm presence of cluster packages in repo:

yum list | grep -i cluster
Cluster_Administration-en-US.noarch      5.0.0-0                installed       
lvm2-cluster.i386                        2.02.16-3.el5          installed       
modcluster.i386                          0.8-21.el5             installed       
Global_File_System-en-US.noarch          5.0.0-0                ClusterStorage  
cluster-cim.i386                         0.8-21.el5             Cluster         
cluster-snmp.i386                        0.8-21.el5             Cluster         
ipvsadm.i386                             1.24-8.1               Cluster         
kmod-gfs-xen.i686                        0.1.14-3.2.6.18_1.2910 ClusterStorage  
kmod-gnbd-xen.i686                       0.1.2-5.2.6.18_1.2910. ClusterStorage  
luci.i386                                0.8-26.el5             Cluster         
piranha.i386                             0.8.4-7.el5            Cluster         
ricci.i386                               0.8-26.el5             Cluster         
system-config-cluster.noarch             1.0.35-1.0             Cluster   
      
[root@tng3-1 ~]# yum list | grep -i luci
luci.i386                                0.8-29.el5             installed       
luci.i386                                0.8-26.el5             Cluster   
      
[root@tng3-1 ~]# yum list | grep -i ricci
ricci.i386                               0.8-29.el5             installed       
ricci.i386                               0.8-26.el5             Cluster        

2) Create the cluster via the luci web app. Note: the nodes tng3-1, tng3-2, and
tng3-3 were defined in the cluster; luci was running on node tng3-5, and ricci
was running on all the nodes.

3) Observe this error for each node, as displayed by the luci web app:

A problem occurred when installing packages: packages of set Cluster Base not
present in repository

4) Observe node tng3-1 reboot; /etc/cluster/cluster.conf was created, containing
the following:
<?xml version="1.0"?>
<cluster alias="Jan18b" config_version="1" name="Jan18b">
        <fence_daemon post_fail_delay="0" post_join_delay="3"/>
        <clusternodes>
                <clusternode name="tng3-3.lab.msp.redhat.com" nodeid="1" votes="1"/>
                <clusternode name="tng3-2.lab.msp.redhat.com" nodeid="2" votes="1"/>
                <clusternode name="tng3-1.lab.msp.redhat.com" nodeid="3" votes="1"/>
        </clusternodes>
        <cman/>
        <fencedevices/>
        <rm/>
</cluster>

5) Nodes tng3-2 and tng3-3 did not reboot and did not receive the cluster.conf file.
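
A quick way to confirm which nodes a written cluster.conf actually lists is to count its clusternode entries. The sketch below inlines the clusternode lines from the file shown in step 4, so it is illustrative rather than a check run on the nodes themselves:

```shell
# Sketch: count clusternode entries in a cluster.conf like the one
# written in step 4 (content inlined here for illustration).
CONF='<clusternode name="tng3-3.lab.msp.redhat.com" nodeid="1" votes="1"/>
<clusternode name="tng3-2.lab.msp.redhat.com" nodeid="2" votes="1"/>
<clusternode name="tng3-1.lab.msp.redhat.com" nodeid="3" votes="1"/>'
NODES=$(printf '%s\n' "$CONF" | grep -c '<clusternode ')
echo "clusternode entries: $NODES"
```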

6) The luci web app looped, displaying the progress graphic for node tng3-1
("Creating node tng3-1.lab.msp.redhat.com for cluster Jan18b - Node still
being created").

Comment 8 Len DiMaggio 2007-01-18 15:14:13 EST
Seeing the same problem with these packages:

yum list | grep -i cluster | sort
Cluster_Administration-en-US.noarch      5.0.0-0                installed       
cluster-cim.i386                         0.8-21.el5             Cluster         
cluster-snmp.i386                        0.8-21.el5             Cluster         
Global_File_System-en-US.noarch          5.0.0-0                ClusterStorage  
ipvsadm.i386                             1.24-8.1               Cluster         
kmod-gfs-xen.i686                        0.1.14-3.2.6.18_1.2910 ClusterStorage  
kmod-gnbd-xen.i686                       0.1.2-5.2.6.18_1.2910. ClusterStorage  
luci.i386                                0.8-26.el5             Cluster         
lvm2-cluster.i386                        2.02.16-3.el5          installed       
modcluster.i386                          0.8-21.el5             Cluster         
modcluster.i386                          0.8-27.el5             installed       
piranha.i386                             0.8.4-7.el5            Cluster         
ricci.i386                               0.8-26.el5             Cluster         
system-config-cluster.noarch             1.0.35-1.0             Cluster         


