Bug 1033123 - LVM logical volumes on FC SDs are activated automatically after hypervisor reboot
Status: CLOSED ERRATA
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
Version: 3.2.0
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: 3.2.5
Assigned To: Nir Soffer
QA Contact: Aharon Canan
Whiteboard: storage
Keywords: Triaged, ZStream
Depends On: 1009812
Blocks:
Reported: 2013-11-21 10:24 EST by rhev-integ
Modified: 2016-02-10 12:36 EST (History)
CC List: 17 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
When a hypervisor was rebooted, all LVs that were part of an FC storage domain were activated automatically. This occasionally caused problems, because logical volumes should be activated only at the engine's request when needed (VM start) and deactivated immediately when they are no longer needed (VM stop). This happened because, when using FC storage, physical volumes are connected during boot, and vdsm logical volumes are auto-activated by both the /etc/rc.sysinit and /etc/init.d/netfs startup scripts. These logical volumes did not pick up changes made by the SPM on the storage, which could eventually lead to data corruption when a VM tried to write to a logical volume with stale metadata. The fix checks all vdsm logical volumes during LVM bootstrap and deactivates them if possible (a sketch of this bootstrap step appears below, after the field list). Special logical volumes are refreshed instead, since they are accessed early when connecting to the storage pool, possibly before LVM bootstrap is done. Open logical volumes are skipped, because it is assumed that they had correct metadata when they were opened.
Story Points: ---
Clone Of: 1009812
Environment:
Last Closed: 2013-12-18 08:58:49 EST
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


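The following is a minimal, hypothetical sketch of the bootstrap cleanup described in the Doc Text, not vdsm's actual implementation. It assumes the special LV names (metadata, leases, ids, inbox, outbox, master) and drives the stock lvs/lvchange command line via subprocess; only the general approach (skip open LVs, refresh special LVs, deactivate the rest) comes from the Doc Text.

    # Sketch of the LVM bootstrap cleanup (assumptions noted above).
    import subprocess

    # Assumed names of the special storage-domain LVs that are refreshed
    # rather than deactivated, because they are accessed early.
    SPECIAL_LVS = {"metadata", "leases", "ids", "inbox", "outbox", "master"}

    def list_lvs(vg_name):
        """Return (lv_name, lv_attr) pairs for a volume group using lvs."""
        out = subprocess.check_output(
            ["lvs", "--noheadings", "-o", "lv_name,lv_attr", vg_name],
            text=True)
        return [tuple(line.split()) for line in out.splitlines() if line.strip()]

    def bootstrap_deactivate(vg_name):
        for lv_name, lv_attr in list_lvs(vg_name):
            if lv_attr[4] != "a":
                continue            # already inactive, nothing to do
            if lv_attr[5] == "o":
                continue            # open LV: assumed to have correct metadata
            path = "%s/%s" % (vg_name, lv_name)
            if lv_name in SPECIAL_LVS:
                # Refresh so early storage-pool access sees current metadata.
                subprocess.check_call(["lvchange", "--refresh", path])
            else:
                # Plain data LV: deactivate until the engine requests it.
                subprocess.check_call(["lvchange", "-an", path])
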
Attachments: None


External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 540033 None None None Never
oVirt gerrit 19871 None None None Never
oVirt gerrit 21291 None None None Never
oVirt gerrit 21386 None None None Never
oVirt gerrit 21500 None None None Never
oVirt gerrit 21501 None None None Never

Comment 2 Aharon Canan 2013-11-28 05:19:01 EST

Verified using sf22:

1. One host set up with an FC storage domain.
2. Create a VM with a few disks.
3. Run the VM and power it off.
4. Put the host into maintenance.
5. Restart the host.
6. All LVs are not active (VM is still down) - see the check sketched after this list.
7. Add a new disk - the disk is not active.
8. Start the VM - all disks are active.
Comment 4 errata-xmlrpc 2013-12-18 08:58:49 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1832.html
