Reason for request
---
Currently, local and shared storage cannot be combined in RHV, meaning local storage cannot be used in a hypervisor cluster. We have received a customer use case in which this functionality is necessary, and need a VDSM hook to work around the limitation until a feature can be implemented in RHV.

VDSM Hook Sample Workflow
---
1) VM is created
   - VM is pinned to host
   - "Dummy" NFS volume is defined to satisfy VM creation requirements
2) Hook is triggered by before_vm_start
3) If the custom property is set, the hook creates a properly sized LV, then replaces the NFS disk with the local LV disk in the domain XML.
4) If installing from a template, the VM is started and the image is transferred to the local VM disk. If not, the VM is started and installation via PXE begins.

Functional Requirements
---
The solution must support the following:
- Run VMs on hypervisor-local storage while in a hypervisor cluster.
- Create VMs via PXE.
- Create VMs via template.
- Resize the local disk.
- Add additional disks to a VM (on local storage).
- Install to local or non-local storage (triggered by a custom property on the VM).
- Remove the local volume after VM deletion (via the after_vm_destroy trigger).
- Persist across power off/on (power off/on should reconnect the VM to the local disk).
- Be thin provisioned on the local disk (using LVM thin provisioning).

Additional information
---
- Typical template size for the customer is a 50GB thin-provisioned template with 2GB of data.
- It is understood that the VM must always be pinned to a host, and that local and NFS disks cannot be mixed in one VM.
- A stall at VM creation time for template-based provisioning is acceptable to the customer.
- An actual NFS solution for serving templates is required and will be provided by the customer.
- The customer understands the following:
  - The risks of thin provisioning
  - That local VMs cannot be migrated
  - That local VMs will be lost in the event of a disk failure
  - That VMs must be pinned
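The disk-replacement step in the workflow above (step 3) could be sketched roughly as follows. This is a minimal, self-contained illustration using the standard library's xml.etree rather than VDSM's actual hooking module (a real hook would read and write the domain XML via hooking.read_domxml()/write_domxml()); the sample domain XML, file paths, and LV name are hypothetical.

```python
import xml.etree.ElementTree as ET

def replace_nfs_disk_with_lv(domxml: str, lv_path: str) -> str:
    """Rewrite the first file-backed disk in a libvirt domain XML so it
    points at a local logical volume instead of the dummy NFS image.
    Illustration only -- not the actual localdisk hook code."""
    root = ET.fromstring(domxml)
    for disk in root.iter('disk'):
        if disk.get('type') == 'file' and disk.get('device') == 'disk':
            # switch the disk from a file-backed to a block-backed device
            disk.set('type', 'block')
            source = disk.find('source')
            if source is not None:
                # drop the NFS file path and point at the LV device node
                source.attrib.pop('file', None)
                source.set('dev', lv_path)
            driver = disk.find('driver')
            if driver is not None:
                # raw is the natural format for a plain logical volume
                driver.set('type', 'raw')
            break
    return ET.tostring(root, encoding='unicode')

# Hypothetical minimal domain XML with a dummy NFS-backed disk:
DOMXML = """<domain type='kvm'>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/rhev/data-center/mnt/nfs/dummy.img'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>"""

print(replace_nfs_disk_with_lv(DOMXML, '/dev/ovirt-local/vm-disk-1'))
```

The real hook would additionally create the LV (sized from the original disk definition) before rewriting the XML, and would only act when the triggering custom property is set on the VM.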
Regarding this point:
- Remove the local volume after VM deletion (via after_vm_destroy trigger)

I may have misunderstood how the after_vm_destroy trigger works. I thought "destroy" meant delete, but have been informed that "destroy" may be a virsh term meaning VM shutdown. If after_vm_destroy fires on every shutdown, it is not the right trigger to use here: the requirement on that bullet point is that the hook should clean up the local disk only after a VM has been deleted.
*** Bug 1431424 has been marked as a duplicate of this bug. ***
Verified with the following code:
----------------------------------------------
ovirt-engine-4.1.1.3-0.1.el7.noarch
rhevm-4.1.1.3-0.1.el7.noarch
vdsm-4.19.8-6.gitb79e2da.el7.centos.x86_64
vdsm-python-4.19.8-6.gitb79e2da.el7.centos.noarch
vdsm-hook-vmfex-dev-4.19.8-6.gitb79e2da.el7.centos.noarch
vdsm-api-4.19.8-6.gitb79e2da.el7.centos.noarch
vdsm-xmlrpc-4.19.8-6.gitb79e2da.el7.centos.noarch
vdsm-jsonrpc-4.19.8-6.gitb79e2da.el7.centos.noarch
vdsm-yajsonrpc-4.19.8-6.gitb79e2da.el7.centos.noarch
vdsm-hook-localdisk-4.19.8-6.gitb79e2da.el7.centos.noarch

Verified with the following scenarios:
----------------------------------------------
Case 1 - Create VM from thin LVM
- Create VM + Disk - no template
- Install from CD
- Wget to all disks
- Reboot
- VM start and verify running + new wget

Case 2 - Create VM + Disk - from template - thin LV
- Run
- Wget to all disks
- Reboot
- VM start and verify running + new wget

Case 3 - Create VM from LV
- Create VM + Disk - no template
- Install from CD
- Wget to all disks
- Reboot
- VM start and verify running + new wget

Case 4 - Lvextend of existing disk
- Reboot after lvextend
- New data after lvextend

Moving to VERIFIED!
Fred, can you please add some doctext here?
README available here: https://github.com/oVirt/vdsm/blob/master/vdsm_hooks/localdisk/README