Bug 1418145 - [RFE] VDSM Hook for use of local storage of host
Summary: [RFE] VDSM Hook for use of local storage of host
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: RFEs
Version: 4.1.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: ovirt-4.1.1
Target Release: ---
Assignee: Fred Rolland
QA Contact: Kevin Alon Goldblatt
URL:
Whiteboard:
Duplicates: 1431424
Depends On:
Blocks: 1489267
 
Reported: 2017-02-01 01:40 UTC by Ashton Davis
Modified: 2021-08-30 12:26 UTC
CC List: 13 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
The 'localdisk' hook adds the ability to use fast local storage instead of shared storage, while still using shared storage for managing virtual machine templates. Previously, a user had to choose between fast local storage, where nothing is shared with other hosts, and shared storage, where everything is shared between the hosts and fast local storage cannot be used. This update mixes local and shared storage.

The 'localdisk' hook works as follows:
1) The user creates a virtual machine normally on shared storage of any type. To use the virtual machine with local storage, the user pins the virtual machine to a certain host and enables the localdisk hook.
2) When the virtual machine is started on the pinned host, the localdisk hook copies the virtual machine disks from shared storage into the host's local storage and modifies the disk path to use the local copy of the disk (a sketch of this copy step appears below the field list).
3) The original disk may be a single volume or a chain of volumes based on a template. The local copy is a raw preallocated volume backed by an LVM logical volume on the special "ovirt-local" volume group.

To change storage on a virtual machine using local storage, the localdisk hook must first be disabled.

Warnings:
- Virtual machines using local disks must be pinned to a specific host and cannot be migrated between hosts.
- No storage operations are allowed on virtual machines using local disks, for example creating/deleting snapshots, moving disks, or creating templates from the virtual machine.
- The virtual machine disks on the shared storage must not be deleted, and the storage domain must remain active and accessible.
Clone Of:
Environment:
Last Closed:
oVirt Team: Storage
Target Upstream Version:
Embargoed:
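
A minimal sketch of the disk-copy step described in the Doc Text above, assuming the hook shells out to the standard lvcreate and qemu-img tools (the helper name and exact invocation are illustrative, not the shipped hook's code):

import subprocess

# "ovirt-local" is the volume group named in the Doc Text above.
LOCAL_VG = "ovirt-local"

def copy_disk_to_local(shared_path, lv_name, size_bytes):
    """Create a preallocated LV on the local VG and copy the shared disk
    (a single volume or a template-based volume chain) into it as raw."""
    # Preallocated logical volume sized for the virtual disk.
    subprocess.check_call([
        "lvcreate", "--name", lv_name,
        "--size", "%dB" % size_bytes,
        LOCAL_VG,
    ])
    local_path = "/dev/%s/%s" % (LOCAL_VG, lv_name)
    # qemu-img convert collapses a volume chain into one raw image,
    # matching the "raw preallocated volume" described above.
    subprocess.check_call([
        "qemu-img", "convert", "-O", "raw", shared_path, local_path,
    ])
    return local_path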




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHV-43235 0 None None None 2021-08-30 12:26:41 UTC
Red Hat Knowledge Base (Solution) 3021561 0 None None None 2017-05-03 17:18:14 UTC
Red Hat Product Errata RHEA-2017:0997 0 normal SHIPPED_LIVE Red Hat Virtualization Manager (ovirt-engine) 4.1 GA 2017-04-18 20:11:26 UTC
oVirt gerrit 71711 0 ovirt-4.0 MERGED after_disk_prepare: Add new hook point 2020-07-02 15:36:01 UTC
oVirt gerrit 71712 0 ovirt-4.0 MERGED hooks: Add localdisk hook 2020-07-02 15:36:01 UTC
oVirt gerrit 71926 0 ovirt-4.0 MERGED hooks: support thin LVM on localdisk 2020-07-02 15:36:01 UTC
oVirt gerrit 72098 0 ovirt-4.0 MERGED localdisk: Rename lvm-thin to lvmthin 2020-07-02 15:36:01 UTC
oVirt gerrit 72099 0 ovirt-4.0 MERGED localdisk: Use sparse images for lvmthin backend 2020-07-02 15:36:01 UTC
oVirt gerrit 73918 0 ovirt-4.1 MERGED after_disk_prepare: Add new hook point 2020-07-02 15:36:01 UTC
oVirt gerrit 73919 0 ovirt-4.1 MERGED hooks: Add localdisk hook 2020-07-02 15:36:00 UTC
oVirt gerrit 73920 0 ovirt-4.1 MERGED hooks: support thin LVM on localdisk 2020-07-02 15:36:00 UTC
oVirt gerrit 73921 0 ovirt-4.1 MERGED localdisk: Rename lvm-thin to lvmthin 2020-07-02 15:36:00 UTC
oVirt gerrit 73922 0 ovirt-4.1 MERGED localdisk: Use sparse images for lvmthin backend 2020-07-02 15:36:00 UTC
oVirt gerrit 73923 0 ovirt-4.1 MERGED hooks: prevent VM migrate with local disk hook 2020-07-02 15:36:00 UTC
oVirt gerrit 73927 0 ovirt-4.0 MERGED hooks: prevent VM migrate with local disk hook 2020-07-02 15:36:00 UTC

Description Ashton Davis 2017-02-01 01:40:43 UTC
Reason for request
---
Currently, local and shared storage cannot be combined in RHV, meaning local storage cannot be used in a hypervisor cluster. We have received a customer use case in which this functionality is necessary, and we need a VDSM hook to work around the limitation until the feature can be implemented in RHV.

VDSM Hook Sample Workflow
---
1) VM is created
   - VM is pinned to host
   - "Dummy" NFS volume is defined to satisfy VM creation requirements
2) Hook is triggered by before_vm_start
3) If the custom property is set, the hook creates a properly sized LV, then replaces the NFS disk with the local LV disk in the domain XML (see the sketch after this list).
4) If installing from a template, the VM is started and the template image is transferred to the local VM disk. If not, the VM is started and installation via PXE begins.
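
A minimal sketch of step 3, assuming VDSM's standard hooking module (read_domxml/write_domxml are the real VDSM hook API; the "localdisk" custom-property name, the LV path scheme, and the omitted LV creation are assumptions for illustration):

#!/usr/bin/python
# before_vm_start sketch: swap the dummy NFS disk for a local LV.
import os

import hooking  # VDSM's hook helper module

# Custom properties reach hooks as environment variables; the property
# name "localdisk" is an assumption for this sketch.
if "localdisk" in os.environ:
    domxml = hooking.read_domxml()
    vm_uuid = domxml.getElementsByTagName("uuid")[0].firstChild.nodeValue
    for disk in domxml.getElementsByTagName("disk"):
        sources = disk.getElementsByTagName("source")
        if sources and sources[0].hasAttribute("file"):
            source = sources[0]
            # Point the disk at a pre-created LV instead of the NFS file.
            # LV creation/sizing is omitted here (see the lvcreate sketch
            # above); the "ovirt-local" VG and naming scheme are
            # illustrative.
            source.removeAttribute("file")
            source.setAttribute("dev", "/dev/ovirt-local/%s" % vm_uuid)
            disk.setAttribute("type", "block")
    hooking.write_domxml(domxml)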


Functional Requirements
---
The solution must provide the ability to:
- Run VMs on hypervisor-local storage while in a hypervisor cluster.
- Create VMs via PXE.
- Create VMs via template.
- Resize the local disk.
- Add additional disks to a VM (on local storage).
- Install to local or non-local storage (triggered by a custom property on the VM).
- Remove the local volume after VM deletion (via the after_vm_destroy trigger; see comment 2 below).
- Persist across power off/on (power off/on should reconnect the VM to the local disk).
- Be thin provisioned on the local disk (using LVM thin provisioning; see the sketch after this list).
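
For the last bullet, a rough sketch of the thin-provisioning variant using standard LVM thin commands (the VG name comes from this RFE; the pool name and helper are hypothetical):

import subprocess

VG = "ovirt-local"   # VG named elsewhere in this RFE
POOL = "pool0"       # hypothetical thin pool, e.g. created once with:
                     # lvcreate --type thin-pool --size 100G --name pool0 ovirt-local

def create_thin_lv(lv_name, virtual_size_gb):
    """Create a thin LV: blocks are allocated only as the guest writes,
    so a 50G template holding 2G of data consumes roughly 2G locally."""
    subprocess.check_call([
        "lvcreate", "--thin",
        "--virtualsize", "%dG" % virtual_size_gb,
        "--name", lv_name,
        "%s/%s" % (VG, POOL),
    ])
    return "/dev/%s/%s" % (VG, lv_name)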


Additional information
---
- Typical template size for the customer is a 50GB thin-provisioned template with 2GB of data
- It is understood that the VM must always be pinned to a host, and that local and NFS disks cannot be mixed in one VM.
- A stall at VM creation time for template-based provisioning is acceptable to the customer
- An actual NFS solution for serving templates is required and will be provided by the customer
- The customer understands the following:
  - The risks of thin provisioning
  - That local VMs cannot be migrated
  - That local VMs will be lost in the event of a disk failure
  - That VMs must be pinned

Comment 2 Ashton Davis 2017-02-07 01:17:04 UTC
Regarding point:
- Remove the local volume after VM deletion (via after_vm_destroy trigger)


I may have misunderstood how the after_vm_destroy trigger works.
If after_vm_destroy triggers every time a VM is shut down, that is not the right trigger. I thought "destroy" meant delete, but have been informed that "destroy" may be a virsh term meaning VM shutdown.

The requirement in that bullet point is that the hook should clean up the local disk after a VM has been deleted. If after_vm_destroy triggers at every shutdown, it is not the right trigger to use.
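
To illustrate the pitfall, a naive cleanup hook like the sketch below would run on every shutdown, not only on deletion, wiping the local disk on a simple power-off (hook API as in other VDSM VM hooks, assuming the domain XML is available at this hook point; the LV naming is hypothetical):

#!/usr/bin/python
# after_vm_destroy sketch -- shown only to illustrate the problem:
# this hook point fires on every VM shutdown (virsh "destroy"), not
# only when the VM is deleted, so removing the LV here would destroy
# the local disk on a normal power-off.
import subprocess

import hooking  # VDSM's hook helper module

domxml = hooking.read_domxml()
vm_uuid = domxml.getElementsByTagName("uuid")[0].firstChild.nodeValue
# Hypothetical LV naming on the "ovirt-local" VG.
subprocess.call(["lvremove", "-f", "/dev/ovirt-local/%s" % vm_uuid])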

Comment 3 Yaniv Lavi 2017-03-12 15:12:31 UTC
*** Bug 1431424 has been marked as a duplicate of this bug. ***

Comment 5 Kevin Alon Goldblatt 2017-03-19 12:40:23 UTC
Verified with the following code:
----------------------------------------------
ovirt-engine-4.1.1.3-0.1.el7.noarch
rhevm-4.1.1.3-0.1.el7.noarch
vdsm-4.19.8-6.gitb79e2da.el7.centos.x86_64
vdsm-python-4.19.8-6.gitb79e2da.el7.centos.noarch
vdsm-hook-vmfex-dev-4.19.8-6.gitb79e2da.el7.centos.noarch
vdsm-api-4.19.8-6.gitb79e2da.el7.centos.noarch
vdsm-xmlrpc-4.19.8-6.gitb79e2da.el7.centos.noarch
vdsm-jsonrpc-4.19.8-6.gitb79e2da.el7.centos.noarch
vdsm-yajsonrpc-4.19.8-6.gitb79e2da.el7.centos.noarch
vdsm-hook-localdisk-4.19.8-6.gitb79e2da.el7.centos.noarch


Verified with the following scenarios:
----------------------------------------------

Case 1
Create VM from thin LVM
Create VM + Disk - no template
Install from CD
Wget to all disks
Reboot
VM start and verify running + new wget

Case 2
Create VM + Disk - from template - thin LV
Run
Wget to all disks
Reboot
VM start and verify running + new wget

Case 3
Create VM from LV
Create VM + Disk - no template
Install from CD
Wget to all disks
Reboot
VM start and verify running + new wget

Case 4
Lvextend of existing disk (see the sketch after this case)
Reboot after lvextend
New data after lvextend
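
A rough sketch of the Case 4 resize step, using the standard lvextend command (the LV path and size are examples, not values from the test run):

import subprocess

def extend_local_disk(lv_path, extra="+10G"):
    """Grow the local LV backing a pinned VM's disk (Case 4 above)."""
    subprocess.check_call(["lvextend", "--size", extra, lv_path])

# Example with a hypothetical LV on the ovirt-local VG:
# extend_local_disk("/dev/ovirt-local/myvm-disk0")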


Moving to VERIFIED!

Comment 6 Allon Mureinik 2017-04-03 22:47:19 UTC
Fred, can you please add some doctext here?

Comment 7 Fred Rolland 2017-04-06 09:45:14 UTC
README available here:

https://github.com/oVirt/vdsm/blob/master/vdsm_hooks/localdisk/README

