Bug 129475 - pvcreate fails: Can't get lock for orphan PVs
Status: CLOSED NOTABUG
Product: Red Hat Cluster Suite
Classification: Red Hat
Component: gfs
Version: 4
Hardware: All
OS: Linux
Priority: medium
Severity: high
Assigned To: Alasdair Kergon
QA Contact: GFS Bugs
Reported: 2004-08-09 11:44 EDT by Derek Anderson
Modified: 2010-01-11 21:56 EST

Doc Type: Bug Fix
Last Closed: 2004-08-09 15:00:37 EDT

Attachments: None

Description Derek Anderson 2004-08-09 11:44:41 EDT
Description of problem:
pvcreate is failing on a quorate 2-node cluster.  The node, service, and
status information for the cluster is posted below.

### Run it with one device...
[root@link-10 root]# pvcreate --debug /dev/sda1
  cluster send request failed: Bad address
  Can't get lock for orphan PVs
[root@link-10 root]# echo $?
5
[root@link-10 root]#

### Run it with two devices...
[root@link-10 root]# pvcreate --debug /dev/sda1 /dev/sda2
  cluster send request failed: Bad address
  Can't get lock for orphan PVs
  Physical volume "/dev/sda2" successfully created
  cluster send request failed: Bad address
[root@link-10 root]#

### Run it with three devices...
[root@link-10 root]# pvcreate --debug /dev/sda1 /dev/sda2 /dev/sdb1
  cluster send request failed: Bad address
  Can't get lock for orphan PVs
  Physical volume "/dev/sda2" successfully created
  cluster send request failed: Bad address
### We hang here...with processes in this state:
 2728 ?        S      0:00 ccsd
 2731 ?        SW<    0:00 [cman_comms]
 2732 ?        SW<    0:00 [cman_memb]
 2733 ?        SW     0:00 [cman_serviced]
 2734 ?        SW<    0:00 [cman_hbeat]
 2736 ?        S      0:00 fenced
 2742 ?        S      0:00 clvmd
 2743 ?        SW     0:00 [dlm_recoverd]
 2744 ?        SW     0:00 [dlm_astd]
 2745 ?        SW     0:00 [dlm_recvd]
 2746 ?        SW     0:00 [dlm_sendd]
 2813 pts/0    S      0:00 pvcreate --debug /dev/sda1 /dev/sda2 /dev/sdb1
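
To see where the hung pvcreate is actually blocked, two diagnostics one
might try (PID 2813 is from the listing above; strace must be installed):

### Attach to the hung process; a pvcreate stuck talking to clvmd would
### typically show a read() or write() on a socket that never returns:
strace -p 2813
### The kernel wait channel is visible without strace:
cat /proc/2813/wchan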

### Cluster state info before pvcreate commands are run:
[root@link-10 root]# cat /proc/cluster/nodes
Node  Votes Exp Sts  Name
   1    1    1   M   link-10
   2    1    1   M   link-11
[root@link-10 root]# cat /proc/cluster/services

Service          Name                              GID LID State     Code
Fence Domain:    "default"                           1   2 run       -
[1 2]

DLM Lock Space:  "clvmd"                             2   3 run       -
[1 2]

[root@link-10 root]# cat /proc/cluster/status
Version: 2.0.1
Config version: 1
Cluster name: MILTONx2
Cluster ID: 19538
Membership state: Cluster-Member
Nodes: 2
Expected_votes: 1
Total_votes: 2
Quorum: 1
Active subsystems: 3
Node addresses: 192.168.44.160

[root@link-10 root]# cat /proc/partitions
major minor  #blocks  name

   3     0   39082680 hda
   3     1     104391 hda1
   3     2   37929465 hda2
   3     3    1044225 hda3
   8     0  142255575 sda
   8     1   71127787 sda1
   8     2   71127787 sda2
   8    16  142255575 sdb
   8    17   71127787 sdb1
   8    18   71127787 sdb2
   8    32  142255575 sdc
   8    33   71127787 sdc1
   8    34   71127787 sdc2
   8    48  142255575 sdd
   8    49   71127787 sdd1
   8    50   71127787 sdd2
   8    64  142255575 sde
   8    65   71127787 sde1
   8    66   71127787 sde2
   8    80  142255575 sdf
   8    81   71127787 sdf1
   8    82   71127787 sdf2
   8    96  142255575 sdg
   8    97   71127787 sdg1
   8    98   71127787 sdg2

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
On both nodes:
1. modprobe gfs && modprobe lock_dlm
2. ccsd
3. cman_tool join (wait for quorum)
4. fence_tool join
5. clvmd
On one node:
6. pvcreate --debug /dev/sda1
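
The six steps above as one shell sketch (assumes the MILTONx2 cluster.conf
is already in place for ccsd; the wait loop is an assumption, polling the
membership state shown in /proc/cluster/status rather than quorum proper):

### Run on both nodes:
modprobe gfs
modprobe lock_dlm
ccsd
cman_tool join
### Wait for cluster membership before proceeding:
while ! grep -q '^Membership state: Cluster-Member' /proc/cluster/status; do
    sleep 1
done
fence_tool join
clvmd
### Then, on one node only:
pvcreate --debug /dev/sda1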
  
Actual results:
pvcreate reports "cluster send request failed: Bad address" and "Can't get
lock for orphan PVs"; with three devices it hangs, leaving the processes
listed above.

Expected results:
pvcreate creates each physical volume without errors and exits 0.

Additional info:
Comment 1 Christine Caulfield 2004-08-09 11:53:29 EDT
The kernel and userspace are out of step. Make sure you have the
latest kernel that matches the userspace you are using.
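A quick way to compare the two sides (the grep target is the Version line
from the status output above; the rpm line assumes packaged installs, and
the package names are guesses -- a CVS build needs a checkout-date check
instead):

### Kernel-side cluster version, as reported by the running cman module:
grep '^Version' /proc/cluster/status
### Userspace versions, if installed from RPMs (package names are a guess):
rpm -q ccs cman fence dlm lvm2 2>/dev/null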
Comment 2 Derek Anderson 2004-08-09 15:00:37 EDT
I had checked out from CVS before all the latest fixes were checked
in.  I also had old /lib/libdlm* garbage hanging around.
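For anyone hitting the same symptom, a sketch for spotting stale DLM
userspace libraries left over from an earlier build:

### List any libdlm copies on disk and the one the loader will pick up;
### an old copy in /lib can shadow a freshly built libdlm:
ls -l /lib/libdlm* /usr/lib/libdlm* 2>/dev/null
ldconfig -p | grep libdlm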
Comment 3 Kiersten (Kerri) Anderson 2004-11-16 14:02:28 EST
Updating the version to the right level in the defects.  Sorry for the storm.
