Bug 888393 - [RFE] clustat is not reporting correct state of service
Summary: [RFE] clustat is not reporting correct state of service
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: rgmanager
Version: 6.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Assignee: Ryan McCabe
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-12-18 16:05 UTC by michal novacek
Modified: 2013-04-17 14:27 UTC

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-04-17 14:27:17 UTC
Target Upstream Version:



Description michal novacek 2012-12-18 16:05:30 UTC
Description of problem:

I have a two-node cluster with a quorum disk and no fencing (a kind of fencing is done by hardware watchdogs, but that has no influence on this test case).

I'm running a virtual IP on the second node. When this node is forcibly powered
down, the other node correctly recognizes that it is offline and correctly does
not lose quorum. The problem is that the service that was running on the (now
offline) second node is still reported as running normally instead of being
shown as running in its last known location.

Version-Release number of selected component (if applicable):
cman-3.0.12.1-46.el6.x86_64

How reproducible: always

Reproducer:
sest-node01$ cat /etc/cluster/cluster.conf 
<?xml version="1.0"?>
<cluster config_version="38" name="STSRHTS8050">
    <clusternodes>
       <clusternode name="sest-node01" nodeid="1"/>
       <clusternode name="sest-node02" nodeid="2"/>
    </clusternodes>
    <fence_daemon post_join_delay="30"/>
    <totem/>
    <quorumd label="qd"/>
    <rm>
       <failoverdomains>
          <failoverdomain name="le-domain" ordered="1">
             <failoverdomainnode name="sest-node01" priority="2"/>
             <failoverdomainnode name="sest-node02" priority="1"/>
          </failoverdomain>
       </failoverdomains>
       <service domain="le-domain" name="v-ip" recovery="relocate">
             <ip address="192.168.202.100/24" disable_rdisc="1" sleeptime="10"/>
       </service>
    </rm>
</cluster>

sest-node01$ clustat
Cluster Status for STSRHTS8050 @ Tue Dec 18 09:49:02 2012
Member Status: Quorate

 Member Name            ID   Status
 ------ ----            ---- ------
 sest-node01                1 Online, Local, rgmanager
 sest-node02                2 Online, rgmanager
 /dev/block/252:16          0 Online, Quorum Disk

 Service Name            Owner (Last)           State         
 ------- ----            ----- ------           -----         
 service:v-ip            sest-node02            started 

Now power off node02 with the power button and wait a short while until node01
recognizes it as offline.

sest-node01$ clustat
Cluster Status for STSRHTS8050 @ Tue Dec 18 09:49:12 2012
Member Status: Quorate

 Member Name            ID   Status
 ------ ----            ---- ------
 sest-node01                1 Online, Local, rgmanager
 sest-node02                2 Offline
 /dev/block/252:16          0 Online, Quorum Disk

 Service Name            Owner (Last)           State         
 ------- ----            ----- ------           -----         
 service:v-ip            sest-node02            started 


Actual results:
The service is reported as running normally on a node that is offline.

Expected results:
The service should be reported as running in its last known location
(shown in brackets). I would also like a cached=1 parameter added to the
XML output, as mentioned in bz816881 comment 7.
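In the meantime, a monitoring script can derive the "cached" state itself by cross-checking each service's reported owner against the node list in `clustat -x` output. The sketch below is a minimal illustration; the embedded XML only approximates the real `clustat -x` format, and the attribute names (`name`, `state`, `owner`) are assumptions that may differ between rgmanager versions.

```python
import xml.etree.ElementTree as ET

# Sample data approximating `clustat -x` output for this cluster
# (hypothetical structure; check your version's actual output).
CLUSTAT_XML = """\
<clustat>
  <nodes>
    <node name="sest-node01" state="1" rgmanager="1"/>
    <node name="sest-node02" state="0" rgmanager="0"/>
  </nodes>
  <groups>
    <group name="service:v-ip" state_str="started" owner="sest-node02"/>
  </groups>
</clustat>
"""

def stale_services(xml_text):
    """Return services whose reported owner is offline, i.e. whose
    state can only be a cached (last known) value."""
    root = ET.fromstring(xml_text)
    online = {n.get("name") for n in root.iter("node")
              if n.get("state") == "1"}
    return [g.get("name") for g in root.iter("group")
            if g.get("owner") not in online]

print(stale_services(CLUSTAT_XML))  # -> ['service:v-ip']
```

In a real deployment the XML would come from running `clustat -x` rather than a literal string; the point is only that the owner/node cross-reference is enough to flag a service state as cached.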

Comment 2 Fabio Massimo Di Nitto 2012-12-18 16:30:40 UTC
This is correct and expected.

As long as node2 does NOT rejoin the cluster, the service cannot be recovered on node1.

This is fundamental to hardware-based fencing.

I am sure that if you check fence_tool ls on node1, you will see it's pending "fencing".

This is also documented as part of hw watchdog fencing.

Comment 3 Fabio Massimo Di Nitto 2012-12-18 16:32:28 UTC
Please check the data from Comment #2, and if that's not the case, please attach logs from /var/log/cluster from both nodes.

Comment 4 michal novacek 2012-12-19 12:30:02 UTC
(In reply to comment #2)
...

Yes, it is exactly as you say -- node number two is waiting to be fenced, which means the service will not be relocated, which is correct.

I do not want the service to be relocated. I would like the reporting to be more accurate and tell me that it only has cached information, which might not be correct, for example by putting it in brackets.

Comment 5 Fabio Massimo Di Nitto 2012-12-19 12:50:48 UTC
OK, thanks, now I understand the report better. I'm not sure it can be done, though, because changing the output format at this stage is risky for tools that might be parsing it.

Moving to 6.5 either way. 6.4 is closed for RFE.

Comment 6 Fabio Massimo Di Nitto 2013-04-17 14:27:17 UTC
Fixing this corner case would require an output change that could break scripts monitoring the cluster.

