Bug 450763 - vgsplit fails on cluster mirrors
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 4
Classification: Red Hat
Component: lvm2-cluster
Version: 4.7
Platform: All Linux
Priority: high  Severity: high
Target Milestone: rc
Assigned To: Milan Broz
QA Contact: Corey Marthaler
Keywords: Regression
Reported: 2008-06-10 16:47 EDT by Corey Marthaler
Modified: 2013-02-28 23:06 EST

Fixed In Version: RHBA-2008-0806
Doc Type: Bug Fix
Last Closed: 2008-07-25 15:26:47 EDT

Attachments
verbose vgsplit output (96.87 KB, text/plain)
2008-06-10 16:49 EDT, Corey Marthaler
Description Corey Marthaler 2008-06-10 16:47:23 EDT
Description of problem:
This is related to bz 444608. Attempting to vgsplit a cluster mirror (cmirror) still fails: vgsplit reports that the mirror log volume must be inactive even though it already is.

[root@grant-03 tmp]# lvs -a -o +devices
  LV                        VG            Attr   LSize  Origin Snap%  Move Log                 Copy%  Convert Devices
  mirror_1_29160            mirror_1_2916 mwim-- 72.21G                    mirror_1_29160_mlog                 mirror_1_29160_mimage_0(0),mirror_1_29160_mimage_1(0)
  [mirror_1_29160_mimage_0] mirror_1_2916 Iwi--- 72.21G                                                        /dev/sdb1(0)
  [mirror_1_29160_mimage_1] mirror_1_2916 Iwi--- 72.21G                                                        /dev/sdb2(0)
  [mirror_1_29160_mlog]     mirror_1_2916 lwi---  4.00M                                                        /dev/sdb3(0)

[root@grant-03 tmp]# vgsplit mirror_1_2916 split_241 /dev/sdb1 /dev/sdb2 /dev/sdb3
  Error locking on node grant-03: Volume is busy on another node
  Logical volume "mirror_1_29160_mlog" must be inactive
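In the lvs -a listing above, the bracketed names are hidden internal volumes (the two mirror images and the mirror log); only mirror_1_29160 is a top-level LV. As a sketch of that distinction, a hypothetical visible_lvs helper can filter an `lvs -a --noheadings -o lv_name` listing down to the visible names, since lvs prints hidden volumes in square brackets:

```shell
#!/bin/sh
# Print only top-level (visible) LV names from `lvs -a --noheadings -o lv_name`
# output. lvs wraps hidden internal volumes (mirror images, mirror log) in
# square brackets, so we drop any entry whose name starts with '['.
# visible_lvs is an illustrative helper, not an LVM command.
visible_lvs() {
    awk '$1 !~ /^\[/ { print $1 }'
}

# Example, using the listing from this report:
printf '%s\n' \
    '  mirror_1_29160' \
    '  [mirror_1_29160_mimage_0]' \
    '  [mirror_1_29160_mimage_1]' \
    '  [mirror_1_29160_mlog]' \
| visible_lvs
# prints "mirror_1_29160"
```

On a live system the same filter would be fed from `lvs -a --noheadings -o lv_name <vg>` instead of printf.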


Version-Release number of selected component (if applicable):
[root@grant-03 tmp]# rpm -q lvm2
lvm2-2.02.37-1.el4
[root@grant-03 tmp]# rpm -q lvm2-cluster
lvm2-cluster-2.02.37-1.el4
[root@grant-03 tmp]# rpm -q device-mapper
device-mapper-1.02.25-2.el4


How reproducible:
Every time
Comment 1 Corey Marthaler 2008-06-10 16:49:41 EDT
Created attachment 308866 [details]
verbose vgsplit output
Comment 2 Milan Broz 2008-06-11 04:06:31 EDT
Cannot reproduce it here; it works on my configuration.

So it fails here:
#locking/cluster_locking.c:454       Locking LV IbOTLP3ee4SefnHK5lyD2ZRRrhNPfWwAuBskVJDjOUKEGeR7wKXEEaeKEci35aZ5 EX C (0x9d)
#locking/cluster_locking.c:358   Error locking on node grant-03: Volume is busy on another node
#vgsplit.c:105   Logical volume "mirror_1_29160_mlog" must be inactive

That's a simple query, if (lv_is_active(lv)) ..., so there must be a lock still held.


Could you please attach the log from clvmd -d on that (local) node?

Does it work if you restart clvmd immediately before vgsplit on that node?
(Not clvmd -R; I mean a real restart, to force the initial locks to be reread.)
Comment 4 Milan Broz 2008-06-11 07:29:22 EDT
OK, I have a reproducer:

+ vgremove -f vg_bar
  Logical volume "lv" successfully removed
  Volume group "vg_bar" successfully removed
+ vgremove -f vg_bar1
  Logical volume "lv_mirr" successfully removed
  Volume group "vg_bar1" successfully removed
+ vgcreate -c y vg_bar /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1
  Clustered volume group "vg_bar" successfully created
+ lvcreate -m1 --nosync -n lv_mirr -L 100M vg_bar /dev/sdd1 /dev/sde1 /dev/sdf1
  WARNING: New mirror won't be synchronised. Don't read what you didn't write!
  Logical volume "lv_mirr" created
+ lvcreate -n lv -L 100M vg_bar /dev/sdg1
  Logical volume "lv" created
+ vgchange -a n vg_bar
  0 logical volume(s) in volume group "vg_bar" now active
+ vgchange -a y vg_bar
  2 logical volume(s) in volume group "vg_bar" now active
+ lvchange -a n vg_bar/lv_mirr
+ vgsplit vg_bar vg_bar1 /dev/sdd1 /dev/sde1 /dev/sdf1
  Error locking on node bar-01.englab.brq.redhat.com: Volume is busy on another node
  Logical volume "lv_mirr_mlog" must be inactive
Comment 5 Milan Broz 2008-06-11 07:30:40 EDT
There is a problem with locks for hidden volumes; a lock sometimes gets into the DLM, see:

CLVMD[b7fab8e0]: Jun 11 12:23:22 SIGTERM received
CLVMD[b7fab8e0]: Jun 11 12:23:22 sync_unlock: 'LMx02qgPEE5rCgrOBQskqP8pM8vu5t8jjZVwzCLomJ2FnaHg8wyPxqWHQAZVgGpY' lkid:10328
CLVMD[b7fab8e0]: Jun 11 12:23:22 sync_unlock: 'LMx02qgPEE5rCgrOBQskqP8pM8vu5t8jBX8Nvul3x5xnXu3MJ1RyWGJndKAK8TBj' lkid:102c9
CLVMD[b7fab8e0]: Jun 11 12:23:22 sync_unlock: 'LMx02qgPEE5rCgrOBQskqP8pM8vu5t8jXcIaB1QczEhQvbKIvO1ezjGgQh00uE3m' lkid:10032
CLVMD[b7fab8e0]: Jun 11 12:23:22 sync_unlock: 'LMx02qgPEE5rCgrOBQskqP8pM8vu5t8jAidHKowUBq6zWTAnXlknPIgC2W2Nw1nr' lkid:20032
CLVMD[b7fab8e0]: Jun 11 12:23:22 sync_unlock: 'LMx02qgPEE5rCgrOBQskqP8pM8vu5t8jf519xI71myLP4aTvuhB39531ke9e8i7M' lkid:100d8
[root@bar-01 ~]# clvmd -d
CLVMD[b7fe78e0]: Jun 11 12:23:34 CLVMD started
CLVMD[b7fe78e0]: Jun 11 12:23:34 Connected to CMAN
CLVMD[b7fe78e0]: Jun 11 12:23:34 CMAN initialisation complete
CLVMD[b7fe78e0]: Jun 11 12:23:37 DLM initialisation complete
CLVMD[b7fe78e0]: Jun 11 12:23:37 Cluster ready, doing some more initialisation
CLVMD[b7fe78e0]: Jun 11 12:23:37 starting LVM thread
CLVMD[b7fe78e0]: Jun 11 12:23:37 clvmd ready for work
CLVMD[b7fe78e0]: Jun 11 12:23:37 Using timeout of 60 seconds
CLVMD[b75e5ba0]: Jun 11 12:23:37 LVM thread function started
File descriptor 4 left open
File descriptor 7 left open
  WARNING: Locking disabled. Be careful! This could corrupt your metadata.
CLVMD[b75e5ba0]: Jun 11 12:23:37 getting initial lock for LMx02qgPEE5rCgrOBQskqP8pM8vu5t8jBX8Nvul3x5xnXu3MJ1RyWGJndKAK8TBj
CLVMD[b75e5ba0]: Jun 11 12:23:37 sync_lock: 'LMx02qgPEE5rCgrOBQskqP8pM8vu5t8jBX8Nvul3x5xnXu3MJ1RyWGJndKAK8TBj' mode:1 flags=1
CLVMD[b75e5ba0]: Jun 11 12:23:37 sync_lock: returning lkid 1007b
CLVMD[b75e5ba0]: Jun 11 12:23:37 getting initial lock for LMx02qgPEE5rCgrOBQskqP8pM8vu5t8jjZVwzCLomJ2FnaHg8wyPxqWHQAZVgGpY
CLVMD[b75e5ba0]: Jun 11 12:23:37 sync_lock: 'LMx02qgPEE5rCgrOBQskqP8pM8vu5t8jjZVwzCLomJ2FnaHg8wyPxqWHQAZVgGpY' mode:1 flags=1
CLVMD[b75e5ba0]: Jun 11 12:23:37 sync_lock: returning lkid 10345
CLVMD[b75e5ba0]: Jun 11 12:23:37 LVM thread waiting for work
CLVMD[b7fe78e0]: Jun 11 12:23:47 SIGTERM received
CLVMD[b7fe78e0]: Jun 11 12:23:47 sync_unlock: 'LMx02qgPEE5rCgrOBQskqP8pM8vu5t8jjZVwzCLomJ2FnaHg8wyPxqWHQAZVgGpY' lkid:10345
CLVMD[b7fe78e0]: Jun 11 12:23:47 sync_unlock: 'LMx02qgPEE5rCgrOBQskqP8pM8vu5t8jBX8Nvul3x5xnXu3MJ1RyWGJndKAK8TBj' lkid:1007b
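All of the lock resource names in this log are 64 characters and share the same 32-character prefix, with a distinct 32-character suffix per volume. This is consistent with the name being a VG identifier concatenated with a per-LV identifier, though that is an inference from the log above, not confirmed here. A small sketch splitting one of them (split_lock_name is an illustrative helper):

```shell
#!/bin/sh
# Split a 64-character clvmd lock resource name into its two 32-character
# halves. That the halves correspond to VG and LV identifiers is an
# inference from the shared prefix in the log above, not confirmed
# source behaviour.
split_lock_name() {
    name=$1
    printf 'prefix=%s suffix=%s\n' \
        "$(printf '%s' "$name" | cut -c1-32)" \
        "$(printf '%s' "$name" | cut -c33-64)"
}

split_lock_name 'LMx02qgPEE5rCgrOBQskqP8pM8vu5t8jjZVwzCLomJ2FnaHg8wyPxqWHQAZVgGpY'
# prints "prefix=LMx02qgPEE5rCgrOBQskqP8pM8vu5t8j suffix=jZVwzCLomJ2FnaHg8wyPxqWHQAZVgGpY"
```

Grouping the log's lock names by that prefix makes it easy to see that every stale lock in this report belongs to the same volume group.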
Comment 7 Milan Broz 2008-06-12 03:59:06 EDT
The problem is in
vgchange -a y/n
which requests locks for all volumes, not only the top-level ones.

lvchange seems to work the expected way (only one lock for the mirror).
Comment 9 Milan Broz 2008-06-12 07:51:26 EDT
fixed upstream
Comment 10 Milan Broz 2008-06-12 07:52:06 EDT
... in 2.02.39: "Fix vgchange to not activate mirror leg and log volumes directly."
Comment 12 Milan Broz 2008-06-12 11:00:37 EDT
Patch in 2.02.37-3.el4.
Comment 13 Corey Marthaler 2008-06-13 09:58:21 EDT
Fix verified in lvm2-2.02.37-3.el4/lvm2-cluster-2.02.37-3.el4
Comment 16 errata-xmlrpc 2008-07-25 15:26:47 EDT
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2008-0806.html
