Bug 731736 - Failover domain priority list is being ignored when service started manually
Summary: Failover domain priority list is being ignored when service started manually
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: rgmanager
Version: 6.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Lon Hohberger
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-08-18 14:04 UTC by Radek Steiger
Modified: 2011-08-22 00:35 UTC
CC List: 2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-08-22 00:18:25 UTC
Target Upstream Version:
Embargoed:



Description Radek Steiger 2011-08-18 14:04:55 UTC
Description of problem:

When a clustered service is started manually, it does not follow the priority list of its associated failover domain, even with Prioritized mode enabled. The priority is ignored in both cases, but the actual result differs depending on whether the service is started via the CLI tool (clusvcadm) or the Conga/luci web GUI.

A) With Conga, a service is always started on the node with the lowest node ID, regardless of the failover domain priority settings.

B) Using "clusvcadm" without specifying a cluster member start a service on the node where the command has been executed, ignoring the node priority list.

While the documentation does not state that priorities are taken into account on a manual start, one would expect the same behavior as when the service starts automatically after the whole cluster is restarted or rebooted (which works correctly).
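
For illustration, using the service name "ipaddress" and the nodes z2 (priority 2) and z4 (priority 1) from the cluster.conf under "Additional info" below, case B looks like this:

[root@z2 ~]# clusvcadm -e ipaddress

Run on z2, this starts the service on z2; the same command run on z4 starts it on z4. In both cases the priority order (z4 first) is ignored.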


Version-Release number of selected component (if applicable):

[root@z2 ~]# rpm -qa|grep rgmanager
rgmanager-3.0.12.1-3.el6.x86_64

[root@z2 ~]# rpm -qa|grep ricci
ricci-0.16.2-39.el6.x86_64

[root@z1 ~]# rpm -qa|grep luci
luci-0.23.0-17.el6.i686


How reproducible:

A) Conga: Using the Conga web GUI, create a 2-node cluster and a simple IP address service, and assign to the service a prioritized failover domain containing the two nodes, with a priority order different from the node ID order (i.e. the node with ID 1 gets priority 2 and vice versa: the node with ID 2 gets priority 1). Then start the service manually and watch it start on the node with ID 1. You can now manually migrate the service onto node ID 2, stop it there, and start it again. It will start back on node ID 1, despite that being the lower-priority node.

B) clusvcadm: Create a simple 2-node setup (it may be the same as above) and try stopping and starting the service on each of the nodes using "clusvcadm -e service_name" without specifying a node name. The service always starts on the node where the command is executed, not following the priority list. See cluster.conf below.


Steps to Reproduce:

A) Conga:
1. create 2-node cluster with failover domain as described above
2. assign the failover domain to a service
3. set a higher priority (priority 1) to the node with higher node ID
4. set lower priority (priority 2) to the node with lower node ID
5. start the service using the web GUI without specifying where to start

B) clusvcadm:
1. create 2-node cluster with failover domain as described above
2. assign the failover domain with correct node priorities to a service
3. start the service using clusvcadm (without specifying a node name) on the node with lower priority
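
Concretely, step B with the reporter's configuration amounts to the following minimal command sequence ("-d" disables the service, "-e" enables it, and "clustat" reports which node runs it):

[root@z2 ~]# clusvcadm -d ipaddress
[root@z2 ~]# clusvcadm -e ipaddress
[root@z2 ~]# clustat

Per the failover domain, the service should come up on z4 (priority 1); instead it comes up on z2, where the commands were run.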

  
Actual results:

The service starts:
A) always on the node with the lowest node ID - using Conga
B) always on the node where clusvcadm is executed - using clusvcadm


Expected results:

The service starts:
A) on the node with the highest priority - using Conga
B) on the node with the highest priority - using clusvcadm


Additional info:

[root@z1 ~]# ccs -h z2 --checkconf
All nodes in sync.

[root@z1 ~]# ccs -h z2 --lsnodes
z2: nodeid=2
z4: nodeid=3

[root@z1 ~]# ccs -h z2  --lsfailoverdomain
failover: restricted=0, ordered=1, nofailback=0
  z2: priority=2
  z4: priority=1

[root@z1 ~]# ccs -h z2 --getconf
<cluster config_version="30" name="zcluster">  
  <clusternodes>    
    <clusternode name="z2" nodeid="2">      
      <fence>        
        <method name="Method">          
          <device name="APC" port="2"/>          
        </method>        
      </fence>      
    </clusternode>    
    <clusternode name="z4" nodeid="3">      
      <fence>        
        <method name="Method">          
          <device name="WTI" port="B1"/>          
        </method>        
      </fence>      
    </clusternode>    
  </clusternodes>  
  <fencedevices>    
    <fencedevice agent="fence_apc" ipaddr="zapc" login="apc" name="APC" passwd="apc"/>    
    <fencedevice agent="fence_wti" ipaddr="zwti" login="wti" name="WTI" passwd="password"/>    
  </fencedevices>  
  <rm>    
    <failoverdomains>      
      <failoverdomain name="failover" nofailback="0" ordered="1" restricted="0">        
        <failoverdomainnode name="z2" priority="2"/>        
        <failoverdomainnode name="z4" priority="1"/>        
      </failoverdomain>      
    </failoverdomains>    
    <service domain="failover" name="ipaddress" recovery="relocate">      
      <ip address="10.15.89.202" sleeptime="10"/>      
    </service>    
  </rm>  
  <cman expected_votes="1" two_node="1"/>  
</cluster>

[root@z1 ~]# ccs -h z4 --getconf
<cluster config_version="30" name="zcluster">  
  <clusternodes>    
    <clusternode name="z2" nodeid="2">      
      <fence>        
        <method name="Method">          
          <device name="APC" port="2"/>          
        </method>        
      </fence>      
    </clusternode>    
    <clusternode name="z4" nodeid="3">      
      <fence>        
        <method name="Method">          
          <device name="WTI" port="B1"/>          
        </method>        
      </fence>      
    </clusternode>    
  </clusternodes>  
  <fencedevices>    
    <fencedevice agent="fence_apc" ipaddr="zapc" login="apc" name="APC" passwd="apc"/>    
    <fencedevice agent="fence_wti" ipaddr="zwti" login="wti" name="WTI" passwd="password"/>    
  </fencedevices>  
  <rm>    
    <failoverdomains>      
      <failoverdomain name="failover" nofailback="0" ordered="1" restricted="0">        
        <failoverdomainnode name="z2" priority="2"/>        
        <failoverdomainnode name="z4" priority="1"/>        
      </failoverdomain>      
    </failoverdomains>    
    <service domain="failover" name="ipaddress" recovery="relocate">      
      <ip address="10.15.89.202" sleeptime="10"/>      
    </service>    
  </rm>  
  <cman expected_votes="1" two_node="1"/>  
</cluster>

Comment 2 Lon Hohberger 2011-08-22 00:18:25 UTC
clusvcadm -F -e [service_name]

See 'clusvcadm -h'.
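
Applied to the reporter's configuration, this amounts to (using the service name "ipaddress" from the cluster.conf above):

  clusvcadm -F -e ipaddress

With -F, rgmanager chooses the start node according to the failover domain's rules rather than defaulting to the local node, which is the behavior the reporter expected.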

Comment 3 Lon Hohberger 2011-08-22 00:33:12 UTC
Luci simply has a dropdown list; it's equivalent to executing:

  clusvcadm -e [service_name] -n [selected_node]

Bug 715052 should probably be copied to RHEL 6; it is related to the clusvcadm man page missing documentation for the -F option.

Comment 4 Lon Hohberger 2011-08-22 00:35:13 UTC
If this is a request to change the default in luci/ricci to enable according to priority lists, please reopen and assign the bug to luci.

