Bug 299871 - qdiskd node is erroneously considered a full cluster node
Status: CLOSED DUPLICATE of bug 237386
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: cman
Version: 5.0
Hardware: i386 Linux
Priority: low
Severity: medium
Assigned To: Christine Caulfield
QA Contact: GFS Bugs
Reported: 2007-09-21 05:28 EDT by Alain RICHARD
Modified: 2009-04-16 18:51 EDT

Doc Type: Bug Fix
Last Closed: 2007-09-25 03:06:00 EDT

Description Alain RICHARD 2007-09-21 05:28:19 EDT

Description of problem:
I am using a qdiskd node on a two-node cluster. The qdiskd daemon runs on both nodes, and the qdiskd heuristic and master/slave process work correctly. I obtain:

[root@titan1 ~]# clustat 
msg_open: No such file or directory
Member Status: Quorate

  Member Name                        ID   Status
  ------ ----                        ---- ------
  titan1                                1 Online, Local
  titan2                                2 Online
  /dev/mpath/qdsk1                   0 Online, Quorum Disk

The problem is that some parts of the cluster management stack and its utilities treat the Quorum Disk as a real cluster node.

For example, ccs_tool fails to send the updated configuration file to all members:

[root@titan1 ~]# ccs_tool update /etc/cluster/cluster.conf 
Failed to receive COMM_UPDATE_COMMIT_ACK from /dev/mpath/qdsk1.
Hint: Check the log on /dev/mpath/qdsk1 for reason.

Failed to update config file.
[root@titan1 ~]#

lvm2 (and clvmd) fails to acquire a lock on the logical volume:

[root@titan1 ~]# lvextend --size 200m /dev/san1/test
  Extending logical volume test to 200,00 MB
  clvmd not running on node /dev/mpath/qdsk1
  Failed to suspend test
[root@titan1 ~]# 


Version-Release number of selected component (if applicable):
cman-2.0.64-1.0.1.el5

How reproducible:
Always


Steps to Reproduce:
1. Configure a quorum disk partition on the cluster (a configuration sketch follows below).
2. Use ccs_tool update, or any lvm2 command that requires clvmd to acquire locks.
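
For step 1, a minimal two-node cluster.conf with a quorum disk might look roughly like the following. The node names and device path come from this report; the cluster name, vote counts, intervals, and heuristic are illustrative assumptions, not the reporter's actual configuration:

  <?xml version="1.0"?>
  <cluster name="titan" config_version="1">
    <!-- expected_votes = 2 node votes + 1 quorum disk vote -->
    <cman expected_votes="3"/>
    <clusternodes>
      <clusternode name="titan1" nodeid="1" votes="1"/>
      <clusternode name="titan2" nodeid="2" votes="1"/>
    </clusternodes>
    <!-- qdiskd runs on both nodes and polls this device;
         interval, tko, and the heuristic are placeholders -->
    <quorumd interval="1" tko="10" votes="1" device="/dev/mpath/qdsk1">
      <heuristic program="ping -c1 192.168.1.254" score="1" interval="2"/>
    </quorumd>
  </cluster>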


Actual Results:
The commands fail or hang waiting for the quorum disk node to respond to cman requests.


Expected Results:
The quorum disk node must be recognized as such and ignored by these subsystems (ccs, clvmd).
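
As a sketch of the kind of filtering this implies, assuming the libcman client API (cman_init/cman_get_nodes from cman-devel) and relying on the node ID 0 that clustat reports for the quorum disk, a caller could skip the quorum-disk entry when walking the member list. Function and field names follow libcman.h, but treat this as illustrative, not the actual fix:

  #include <stdio.h>
  #include <libcman.h>   /* cman client API; link with -lcman */

  #define MAX_NODES 16

  int main(void)
  {
      cman_handle_t ch;
      cman_node_t nodes[MAX_NODES];
      int count = 0, i;

      /* Connect to the local cman daemon. */
      ch = cman_init(NULL);
      if (!ch) {
          perror("cman_init");
          return 1;
      }

      if (cman_get_nodes(ch, MAX_NODES, &count, nodes) < 0) {
          perror("cman_get_nodes");
          cman_finish(ch);
          return 1;
      }

      for (i = 0; i < count; i++) {
          /* The quorum disk is reported with node ID 0 (see the clustat
           * output above); real members have non-zero IDs, so subsystems
           * such as ccs and clvmd should skip this entry. */
          if (nodes[i].cn_nodeid == 0)
              continue;
          printf("member: %s (id %d)\n",
                 nodes[i].cn_name, nodes[i].cn_nodeid);
      }

      cman_finish(ch);
      return 0;
  }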

Additional info:
Comment 1 Lon Hohberger 2007-09-24 16:52:29 EDT
I think there's a duplicate bug against lvm2-cluster which has already been fixed for 5.1; Patrick was working on it.
Comment 2 Christine Caulfield 2007-09-25 03:06:00 EDT

*** This bug has been marked as a duplicate of 237386 ***
