Bug 1464045 - activating volume locking error
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Assigned To: LVM and device-mapper development team
Reported: 2017-06-22 06:38 EDT by Christoph
Modified: 2017-06-22 16:07 EDT (History)
Last Closed: 2017-06-22 12:05:10 EDT
Type: Bug

Attachments (Terms of Use)
vgchange -aey -dddddd vg_home (26.88 KB, text/plain)
2017-06-22 07:44 EDT, Christoph

Description Christoph 2017-06-22 06:38:35 EDT
Description of problem:

Running a RHEL 7 Pacemaker-based cluster with dlm, clvmd, and an HA-LVM volume. The HA-LVM volume is activated using clustered locking.

On cluster start the volume gets activated as expected, but an error is logged:

ERROR: Error locking on node 1: Device or resource busy 1 logical volume(s) in volume group "vg_home" now active (vgchange -aey vg_home resulted in 5)

Version-Release number of selected component (if applicable):


How reproducible:

Always after a full cluster start.

Steps to Reproduce:
1. install os, pacemaker
2. vgcreate vg_home /dev/mapper/mpath-...
3. lvcreate -L 100G -n home vg_home
4. mkfs.xfs -L home /dev/mapper/vg_home-home
5. pcs resource create dlm ocf:pacemaker:controld op monitor interval=30s on-fail=fence clone interleave=true ordered=true
6. pcs resource create clvmd ocf:heartbeat:clvm op monitor interval=30s on-fail=fence clone interleave=true ordered=true
7. pcs constraint order dlm-clone then clvmd-clone
8. pcs constraint colocation add clvmd-clone with dlm-clone
9. /sbin/lvmconf --enable-cluster
10. vgchange -c y vg_home
11. pcs resource create vg_home ocf:heartbeat:LVM volgrpname=vg_home exclusive=yes
12. pcs constraint order clvmd-clone then vg_home
13. pcs constraint colocation add vg_home with clvmd-clone
14. rebuild initramfs to ensure lvm.conf is updated
15. shutdown all nodes
16. power on all nodes
17. check log

Actual results:

* cluster goes up
* LVM resource agent /usr/lib/ocf/resource.d/heartbeat/LVM is started
* runs vgchange -aey vg_home, which fails with exit code 5
* error is logged
* waits five seconds
* executes retry_exclusive_start
* which deactivates all volumes and tries again
* this attempt succeeds
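
The activate/retry sequence above can be traced in a stubbed shell sketch. This is not the actual resource-agent source: vgchange is replaced by a dummy function here so the control flow can be followed without a cluster, and the failure mode is hard-coded to match this report (first exclusive activation fails, the retry succeeds).

```shell
attempts=0
vgchange() {
    case "$1" in
        -aey) attempts=$((attempts + 1))
              [ "$attempts" -gt 1 ] ;;   # fail the first attempt, succeed on retry
        *)    true ;;                    # deactivation always "works" in this stub
    esac
}

if ! vgchange -aey vg_home; then
    echo "ERROR: Error locking on node 1: Device or resource busy"
    sleep 1                  # the real agent waits before retrying
    vgchange -an vg_home     # deactivate the VG everywhere
    vgchange -aey vg_home && result="activated on retry"
fi
echo "$result"
```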

Expected results:

cluster goes up, volume is activated

Additional info:

obtain_device_list_from_udev = 1
locking_type = 3
use_lvmetad = 0
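
For reference, those three settings map onto the standard sections of /etc/lvm/lvm.conf on RHEL 7; a minimal fragment matching the values above would look roughly like this:

```
# /etc/lvm/lvm.conf (relevant excerpts for this report)
devices {
    obtain_device_list_from_udev = 1
}
global {
    locking_type = 3     # clustered locking via clvmd
    use_lvmetad = 0      # lvmetad must be disabled with clustered locking
}
```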
Comment 2 Zdenek Kabelac 2017-06-22 07:00:44 EDT
The workflow in the description appears to be:

1. Create non-clustered VG  'vg_home'

2. Create active non-clustered LV   'home'

3. Flip locking to clustered type (so cluster locking is UNAWARE there is an active logical volume 'vg_home/home').

4. Flip VG to clustered type.

And then you get an error from 'clvmd': it cannot activate the LV, which has no cluster lock and so should be activatable, but fails because something already 'holding' the slot in the dm table with the exact same name prevents creating the same entry again.


1. Try to 'deactivate' all LVs in the VG before flipping its locking and clustering flags.

2. If you are changing the locking type, you basically have to make sure all the volumes on your machine are ACTIVATED with that locking type. You cannot switch the locking type while the objects you plan to work with are alive.

3. LVM2 does not sanitize the system after every change of lvm.conf.
Comment 3 Christoph 2017-06-22 07:43:35 EDT
1. disabled pacemaker resource "pcs resource disable vg_home"
2. verify on all nodes that no LV in vg_home is active "lvs | grep vg_home"

home     vg_home     -wi-------

3. flip locking to not clustered "vgchange -cn vg_home"
4. verify on all nodes that vg_home is now non-clustered
5. flip clustered back on "vgchange -cy vg_home"
6. verify on all nodes that vg_home is now clustered "vgs | grep vg_home"

  vg_home       1   1   0 wz--nc  

7. reenable pacemaker resource "pcs resource enable vg_home"
8. power off all nodes
9. power on all nodes
10. same issue

Modified the resource agent to include debug messages and attached a log.
Comment 4 Christoph 2017-06-22 07:44 EDT
Created attachment 1290658 [details]
vgchange -aey -dddddd vg_home
Comment 5 Zdenek Kabelac 2017-06-22 07:55:46 EDT
(In reply to Christoph from comment #4)
> Created attachment 1290658 [details]
> vgchange -aey -dddddd vg_home

Don't you have some old locks present in your locking manager?

Jun 22 13:32:26 asl440 lvm[6331]: dm info  LVM-CYThzNBkbF8kWAiiqKW8aWFsvZPXgTPclm6HbPIsOVNhqFN38dqGYYJIiaOJKhac [ noopencount flush ]   [16384] (*1)
Jun 22 13:32:26 asl440 lvm[6331]: Lock held for CYThzNBkbF8kWAiiqKW8aWFsvZPXgTPclm6HbPIsOVNhqFN38dqGYYJIiaOJKhac, node UNKNOWN 3 : CR
Jun 22 13:32:26 asl440 lvm[6331]: Lock held for CYThzNBkbF8kWAiiqKW8aWFsvZPXgTPclm6HbPIsOVNhqFN38dqGYYJIiaOJKhac, node UNKNOWN 2 : CR
Jun 22 13:32:26 asl440 lvm[6331]: Lock held for CYThzNBkbF8kWAiiqKW8aWFsvZPXgTPclm6HbPIsOVNhqFN38dqGYYJIiaOJKhac, node 1 : CR
Jun 22 13:32:26 asl440 lvm[6331]: vg_home/home is active locally and remotely
Jun 22 13:32:26 asl440 lvm[6331]: Locking LV CYThzNBkbF8kWAiiqKW8aWFsvZPXgTPclm6HbPIsOVNhqFN38dqGYYJIiaOJKhac EX (LV|NONBLOCK|CLUSTER|LOCAL) (0xdd)
Jun 22 13:32:26 asl440 lvm[6331]: Error locking on node 1: Device or resource busy

The trace simply shows that 'vg_home/home' is ALREADY active on the machine.

Haven't you created the LV with a 'shared' lock (a non-exclusive one), so the LV is already activated around your cluster?

Please provide the state of the DM tables on ALL nodes before you run your commands.


dmsetup info -c  (on ALL cluster nodes)
lvchange -aeg  vg_home/home

(and possibly any stale state in the dlm locking manager)

Also note you surely cannot activate a volume exclusively on more than one node, but you can easily activate it non-exclusively on all your nodes with one command on one node in your cluster.
Comment 6 Christoph 2017-06-22 11:20:00 EDT
I disabled the vg_home resource and rebooted the cluster.

So pacemaker, dlm and clvmd are running. The resource vg_home is still disabled.

The logical volume home is active on all nodes. The moment clvmd starts, it activates all clustered volume groups:

Jun 22 13:54:20 asl440 clvm(clvmd)[5358]: INFO: 1 logical volume(s) in volume group "vg_home" now active

This is good behavior for GFS-based filesystems: volume groups need to be active on all nodes, and there is no need to specify the activation of each volume group.

But for HA-LVM an exclusive activation is requested. The cluster boots, starts clvmd, and the volume group becomes active on all nodes. Then the resource vg_home wants to activate it exclusively. This fails because the volume is active on all nodes.
The other nodes execute a monitoring task for vg_home and deactivate it. Once the five-second timeout on the first node expires, it tries to activate it again and succeeds.

Luckily the clvmd resource agent has an "activate_vgs" parameter. I updated the clvmd resource

pcs resource update clvmd activate_vgs=false

and now everything works as expected. --> resolved


On this cluster only HA-LVM volumes are in use. If I were to mix in GFS filesystems, I would need to manually activate their corresponding volumes.
I use clvmd instead of LVM tagging because on the other clusters I need clvmd for GFS and want a similar setup.

> lvchange -aeg  vg_home/home

I guess there was a typo and it should be "-aey". If I didn't miss something, RHCS supports exclusive volume groups but not exclusive volumes.
Comment 7 Zdenek Kabelac 2017-06-22 12:05:10 EDT

-aeg - yes typo.

However, reading the top of your message suggests there is some flaw somewhere in your workflow.

LVM2 does not activate anything on its own - it is always the admin who selects how the LV should be activated in the cluster.

So if you find the LV already active on all nodes, something must have run e.g. 'vgchange -ay', which is not an 'exclusive' activation.

So please find your 'badly' behaving command instead of this rather overcomplicated workaround. Always activate volumes only in the places where you need them.

If you activate them on other nodes, something can open/mount them and then you may not even be able to deactivate them.
Comment 8 Christoph 2017-06-22 15:26:25 EDT
activation is done in the resource agent of clvm (/usr/lib/ocf/resource.d/heartbeat/clvm or https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/clvm)

The function clvmd_start() calls the function clvmd_activate_all(), which is basically

"ocf_run vgchange -aay"

So if a clustered volume group is used, the clvm resource agent will by default activate it on all nodes.
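
Paraphrased, the start path described above boils down to something like the following sketch. This is not the verbatim ClusterLabs source: ocf_run and ocf_is_true are stubbed here so the control flow can be exercised outside pacemaker, and the default value of activate_vgs is assumed to be true (matching the behavior observed in this report).

```shell
ocf_is_true() { case "$1" in true|yes|1) true ;; *) false ;; esac; }
ocf_run()     { echo "would run: $*"; }   # stub: print instead of executing

clvmd_activate_all() {
    # activate_vgs=false (set via "pcs resource update") skips this step
    if ocf_is_true "${OCF_RESKEY_activate_vgs:-true}"; then
        ocf_run vgchange -aay    # auto-activates every clustered VG on this node
    fi
}

OCF_RESKEY_activate_vgs=false
skipped=$(clvmd_activate_all)    # empty: activation skipped

OCF_RESKEY_activate_vgs=true
ran=$(clvmd_activate_all)
echo "$ran"
```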
Comment 9 Zdenek Kabelac 2017-06-22 15:34:03 EDT
Yep - it doesn't make any sense to me to use this command.

So looks like you've found your flaw.
Comment 10 David Teigland 2017-06-22 16:07:00 EDT
The clvm design was based on the old "single system image" concept, where all LVs should always be active everywhere. This concept died long ago, but clvm has been stuck with it. In reality, only the small number of LVs used by gfs should be active on more than one node. clvm has been tweaked over the years to try to accommodate reality. All of this makes clvm poorly suited for failover/HA.
Failover using tags with non-clustered VGs was added as a better option, although not perfect.

Two recent major developments will put all of that behind us:

1. system ID for VG ownership will be a better way to do failover (and protection won't depend on hosts being configured correctly.)  This bug is tracking the progress of getting cluster/resource scripts available for that:

2. lvmlockd was written to replace clvm with a completely different design, based on how shared storage is actually used.
