Bug 620342 - Nodes added to cluster with no cluster.conf
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: luci
Version: 6.0
Hardware: All
OS: Linux
Priority: low
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Ryan McCabe
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2010-08-02 09:10 UTC by Andrew Beekhof
Modified: 2010-11-29 17:50 UTC
CC List: 2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2010-11-29 17:50:05 UTC
Target Upstream Version:
Embargoed:



Description Andrew Beekhof 2010-08-02 09:10:38 UTC
Description of problem:

Adding a node to a cluster only partially succeeds.
I found the relevant batch file; it has as much info as I could find.


<?xml version="1.0"?>
<batch batch_id="1820464634" status="4">
        <module name="rpm" status="0">
                <response API_version="1.0" sequence="">
                        <function_response function_name="install">
                                <var mutable="false" name="success" type="boolean" value="true"/>
                        </function_response>
                </response>
        </module>
        <module name="service" status="0">
                <response API_version="1.0" sequence="">
                        <function_response function_name="disable">
                                <var mutable="false" name="success" type="boolean" value="true"/>
                        </function_response>
                </response>
        </module>
        <module name="rpm" status="0">
                <response API_version="1.0" sequence="">
                        <function_response function_name="install">
                                <var mutable="false" name="success" type="boolean" value="true"/>
                        </function_response>
                </response>
        </module>
        <module name="cluster" status="3">
                <request API_version="1.0">
                        <function_call name="set_cluster.conf">
                                <var mutable="false" name="propagate" type="boolean" value="false"/>
                                <var mutable="false" name="cluster.conf" type="xml">
                                        <cluster config_version="2" name="beekhof">
                                                <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
                                                <clusternodes>
                                                        <clusternode name="pcmk-2" nodeid="1" votes="1">
                                                                <fence/>
                                                        </clusternode>
                                                        <clusternode name="pcmk-3" nodeid="2" votes="1">
                                                                <fence/>
                                                        </clusternode>
                                                        <clusternode name="pcmk-4" nodeid="3" votes="1">
                                                                <fence/>
                                                        </clusternode>
                                                </clusternodes>
                                                <cman/>
                                                <fencedevices/>
                                                <rm>
                                                        <failoverdomains/>
                                                        <resources/>
                                                </rm>
                                        </cluster>
                                </var>
                        </function_call>
                </request>
        </module>
        <module name="rpm" status="5">
                <request API_version="1.0">
                        <function_call name="install"/>
                </request>
        </module>
        <module name="cluster" status="5">
                <request API_version="1.0">
                        <function_call name="start_node">
                                <var mutable="false" name="enable_services" type="boolean" value="true"/>
                        </function_call>
                </request>
        </module>
</batch>
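
The numeric status codes in this batch aren't documented here, but the structure of the file itself is telling: every module that completed carries a <response> with success="true", while the "cluster" module (status 3) and the two status-5 modules after it still carry only their original <request>. A minimal parsing sketch (Python; the filename is hypothetical) that flags which modules never produced a response:

import xml.etree.ElementTree as ET

# Sketch only: the meaning of the numeric status codes is not documented in
# this report. What the file does show is that completed modules have a
# <response> child, while stalled or failed ones retain their <request>.
tree = ET.parse("batch_1820464634.xml")  # hypothetical saved copy of the batch
for module in tree.getroot().findall("module"):
    name = module.get("name")
    status = module.get("status")
    done = module.find("response") is not None
    print("module=%s status=%s %s"
          % (name, status, "completed" if done else "no response"))

Run against the batch above, this would list the two rpm installs and the service disable as completed, and the set_cluster.conf call plus the final rpm and start_node steps as having no response, which matches the "partially succeeding" symptom.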


Version-Release number of selected component (if applicable):

luci-0.22.2-1.auto1279839662

Comment 2 Andrew Beekhof 2010-08-02 09:13:54 UTC
Sorry, I forgot to mention that removing and re-adding the node allows it to be added successfully.

ricci-0.16.2-13.el6.x86_64
luci-0.22.3-1.auto1280502062.x86_64

Comment 3 RHEL Program Management 2010-08-02 09:27:39 UTC
This issue has been proposed while we are only considering blocker
issues for the current Red Hat Enterprise Linux release.

** If you would still like this issue considered for the current
release, ask your support representative to file as a blocker on
your behalf. Otherwise ask that it be considered for the next
Red Hat Enterprise Linux release. **

Comment 5 Lon Hohberger 2010-08-04 20:16:09 UTC
Andrew, were there any AVC denials on the target machine?

Comment 6 Andrew Beekhof 2010-08-05 07:12:28 UTC
(In reply to comment #5)
> Andrew, were there any AVC denials on the target machine?    

Unlikely, since the second attempt worked (i.e., removing the node and re-adding it).
Also, I'm pretty sure SELinux was disabled.
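
For anyone triaging a similar partial join later, SELinux is quick to rule in or out: check the enforcement mode with getenforce and search the audit log for AVC denials (ausearch -m avc is the usual tool). A minimal sketch in Python, assuming the default RHEL audit log path and root access:

# Hedged sketch: scan the audit log for AVC denials that mention ricci.
# Assumes the default RHEL log location; reading it requires root.
with open("/var/log/audit/audit.log") as log:
    for line in log:
        if "avc:" in line and "denied" in line and "ricci" in line:
            print(line.rstrip())

An empty result, with getenforce reporting Permissive or Disabled, would confirm the answer above.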

