Red Hat Bugzilla – Bug 1009812
LVM logical volumes on FC SDs are activated automatically after hypervisor reboot
Last modified: 2016-02-10 15:28:00 EST
Description of problem:
When a hypervisor is rebooted, all LVs which are part of an FC storage domain are automatically activated. This can cause issues, as logical volumes should be activated only on request of the engine when needed (VM start) and deactivated immediately when they are no longer needed (VM stop).
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Create a new FC storage domain
2. Create a VM with a disk so some LVs are created
3. Put the hypervisor into maintenance mode and restart it.
4. Log in to the hypervisor and check the status of the LVs
Actual results:
All LVs within the FC storage domain are activated
Expected results:
No LVs within the FC storage domain are activated
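Step 4 can be checked with `lvs`: the fifth character of the attribute field is "a" for an active LV. A hypothetical sample from an affected host (VG/LV names invented for illustration):

```
# lvs -o vg_name,lv_name,lv_attr
  VG      LV        Attr
  sd-vg   lv-disk1  -wi-a-----
  sd-vg   lv-disk2  -wi-a-----
```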
This issue affects only FC SDs because the LVs are activated by the LVM monitor, which starts very early in boot (it is one of the first services started). iSCSI is not affected because the iSCSI daemon starts later.
A suggested change is attached that forces a refresh of active LVs before use. Not sure how valid this approach is compared to deactivating volumes when the host initially connects to a domain...
The fix will include setting flags for the vgs/lvs so they won't be activated on boot.
Yeela, from comment 10 I gather that the enclosed patch is not a proper fix for this issue?
If so, please remove it from the external tracker.
The patch is required but not sufficient: it addresses the symptom - what to do when the LV is already active (a state we can reach in other ways as well, which is why the patch is needed) - but not the underlying problem of preventing the LV from being active in the first place.
LVM supports changing this configuration
Need to make sure to take care both of newly created VGs/LVs and existing ones.
Please specify which RHEL version is installed on the hosts.
If the version is earlier than 6.4, please attach these files from one of the hosts:
On RHEL 6.4 or later, we can prevent auto-activation of vdsm volumes by specifying which volumes should be auto-activated; any other volumes will not be auto-activated.
To specify which volumes should be auto-activated, edit this line in /etc/lvm/lvm.conf:
auto_activation_volume_list = ["vg0"]
Where "vg0" is the name of the system lvm volume group created during installation.
For example, on a system where this workaround was tested:
VG #PV #LV #SN Attr VSize VFree
test 2 2 0 wz--n- 39.99g 37.99g
vg0 1 3 0 wz--n- 465.27g 0
If the hosts have other volume groups besides the vdsm volume groups, you must add them to the auto_activation_volume_list as well, or they will not be activated on boot.
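For example, if the hosts also carried a non-vdsm data VG (here called "data", a name invented for illustration), the resulting lvm.conf fragment would be:

```
# /etc/lvm/lvm.conf
activation {
    # Only listed VGs are auto-activated at boot; vdsm VGs are deliberately
    # omitted so their LVs stay inactive until vdsm activates them on demand.
    auto_activation_volume_list = [ "vg0", "data" ]
}
```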
On FC systems, physical volumes are connected to the system early in boot, when rc.sysinit or netfs runs. These scripts perform auto-activation of all LVM volume groups, which activates all LVs on shared storage.
From vdsm's point of view, all logical volumes *must* be deactivated until they are used. Currently, when vdsm tries to activate a logical volume that is already active, it does nothing. If the logical volume was modified by the SPM, the metadata of this logical volume on the host is now stale, which may lead to data corruption when writing to the volume.
We plan to fix this issue by deactivating vdsm volumes during boot.
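The planned boot-time deactivation could look roughly like the sketch below. This is an illustrative outline only, not vdsm's actual code: the `RHAT_storage_domain` tag query and the helper names are assumptions, and the command runner is injectable so the flow can be shown without touching real storage.

```python
import subprocess

def list_vdsm_vgs(run):
    """List VG names that belong to vdsm storage domains.
    Assumes vdsm VGs carry the RHAT_storage_domain tag (an assumption here).
    """
    out = run(["vgs", "--noheadings", "-o", "vg_name",
               "@RHAT_storage_domain"])
    return out.split()

def deactivate_unused_lvs(run=None):
    """On service start, deactivate every LV in vdsm VGs, tearing down LVs
    that were auto-activated during boot (e.g. by the LVM monitor on FC).
    Returns the list of lvchange commands issued."""
    if run is None:
        run = lambda cmd: subprocess.run(
            cmd, check=True, capture_output=True, text=True).stdout
    issued = []
    for vg in list_vdsm_vgs(run):
        cmd = ["lvchange", "--available", "n", vg]
        issued.append(cmd)
        run(cmd)
    return issued
```

Deactivating at service start (rather than once at boot) also cleans up LVs left active by an unclean shutdown of the process.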
(In reply to Nir Soffer from comment #14)
> Please specify which RHEL version is installed on the hosts.
> If the version is earlier than 6.4, please attach these files from one of
> the hosts:
it was RHEL 6.4
The complete solution includes:
- Deactivate unused LVs when the service is started:
  This handles the root cause, LVs auto-activated during boot. This patch
  also ensures that there are no active LVs after an unclean shutdown of the
  process. With this patch, we should not see unused active LVs under
  normal conditions.
- Refresh active LVs when activating volumes:
  Without the previous patch, this ensures that we do not use an active LV
  without refreshing it. With the previous patch, this serves as a second
  layer of protection, ensuring correctness even in abnormal conditions
  where an LV is left active when it should not be.
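The refresh-on-activation layer can be sketched like this (illustrative only; `lv_is_active` and the injectable runner are stand-ins for vdsm's real helpers, not its actual API):

```python
import subprocess

def activate_lv(vg, lv, lv_is_active, run=None):
    """Activate vg/lv, refreshing instead if it is already active.

    If the LV is already active (e.g. auto-activated at boot), plain
    activation would be a no-op and metadata changed on the SPM side would
    stay stale, so force a refresh to reread it from disk.
    """
    if run is None:
        run = lambda cmd: subprocess.run(cmd, check=True)
    path = "%s/%s" % (vg, lv)
    if lv_is_active(vg, lv):
        run(["lvchange", "--refresh", path])
        return "refreshed"
    run(["lvchange", "--available", "y", path])
    return "activated"
```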
(In reply to Nir Soffer from comment #19)
> The complete solution includes:
> - Refresh active lvs when activating volumes
> Without the previous patch, this ensures that we do not use an active LV
> without refreshing it. With the previous patch, this serves as a second
> layer of protection, ensuring correctness even in abnormal conditions
> where an LV is left active when it should not be.
I've just NACK'd the upstream patch for this part. Thus far my testing on F19 has shown that lvchange --refresh doesn't always result in a volume being correctly updated. I'd like to look into this more and repeat this downstream before verifying the change.
In addition, the change now depends on the 'Single shot prepare' change to avoid multiple refresh / activation calls. AFAIK this isn't viable for 3.2.z.
The plan was to have this fixed in 3.2.5 but given the above I think we need to change our approach here. My suggestion at this point is to split this BZ in two, leaving Nir's deactivation patchset targeted for 3.2.5 with this bug and moving my refresh patchset to a new bug targeted at 3.3 or 3.3.z.
Nir, would this be acceptable?
http://gerrit.ovirt.org/#/c/4220/ - One shot prepare
(In reply to Lee Yarwood from comment #22)
> (In reply to Nir Soffer from comment #19)
> Nir, would this be acceptable?
Yes - for fixing the issue of lvs auto-activated during boot, patch http://gerrit.ovirt.org/#/c/21291 is enough.
Verified using is29.
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.