Description of problem:
During a live storage migration, the block storage domains run out of space even though sufficient space was available.

Version-Release number of selected component (if applicable):
vdsm-4.16.12.1-3.el7ev.x86_64
rhevm-3.5.1-0.2.el6ev.noarch
v3.5.1 vt14.1 with libvirt from:
http://download.devel.redhat.com/brewroot/packages/libvirt/1.2.8/16.el7_1.2/x86_64/

How reproducible:
Ran this once

Steps to Reproduce:
1. Create a VM with 4 disks (block thin, block preallocated, NFS thin and NFS preallocated)
2. Start the VM
3. Create 4 snapshots consecutively
4. VM --> Disks --> Select all disks and move them to new storage domains (all with more than sufficient space)

Both block domains are reported to be out of disk space, with an exception during extend.

Actual results:
Storage domains ran out of storage space

Expected results:
Storage domains should not have run out of storage space

Additional info:
2015-Mar-30, 14:52

Engine.log
-----------------
2015-03-30 14:52:28,491 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-7-thread-2) [602dbf9c] Correlation ID: 602dbf9c, Job ID: 96238cc9-b930-493c-a416-70728ef5c57c, Call Stack: null, Custom Event ID: -1, Message: User admin@internal moving disk vm_test3_Disk1 to domain block_vnx.
2015-03-30 14:52:28,702 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-7-thread-2) [11a04d29] Correlation ID: 11a04d29, Job ID: 6a10219d-a5f0-46a6-976e-d0d753b38035, Call Stack: null, Custom Event ID: -1, Message: User admin@internal moving disk vm_test3_Disk2 to domain block_vnx.
2015-03-30 14:52:28,868 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-7-thread-2) [5c61abea] Correlation ID: 5c61abea, Job ID: 6cbb6083-1094-4e55-ac44-e36f649ab027, Call Stack: null, Custom Event ID: -1, Message: User admin@internal moving disk vm_test3_Disk3 to domain nfs.
2015-03-30 14:52:29,235 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-7-thread-2) [50e69945] Correlation ID: 50e69945, Job ID: f87205f3-bd7c-4aa6-a039-c43a1455211a, Call Stack: null, Custom Event ID: -1, Message: User admin@internal moving disk vm_test3_Disk4 to domain nfs.
...
2015-03-30 14:55:13,984 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-41) Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Critical, Low disk space. block_20g domain has 1 GB of free space
...
2015-03-30 15:02:18,047 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-93) Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Critical, Low disk space. block_vnx domain has 3 GB of free space

vdsm.log
-----------------------------------------
Thread-37::DEBUG::2015-03-30 14:56:35,050::fileSD::261::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n452 bytes (452 B) copied, 0.000344427 s, 1.3 MB/s\n'; <rc> = 0
6520dc1f-fe8d-4f97-ba82-d84363a4ad81::DEBUG::2015-03-30 14:56:35,358::lvm::301::Storage.Misc.excCmd::(cmd) FAILED: <err> = '  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n  Insufficient free space: 16 extents needed, but only 9 available\n'; <rc> = 5
6520dc1f-fe8d-4f97-ba82-d84363a4ad81::ERROR::2015-03-30 14:56:35,358::storage_mailbox::172::Storage.SPM.Messages.Extend::(processRequest) processRequest: Exception caught while trying to extend volume: e3e81e70-4309-4eb6-83bf-6c6d69585c1c in domain: e2b109f3-d36b-4696-9ee9-99e4fb7d1fd5
Created attachment 1008533 [details] server, engine and vdsm logs
Isn't this a dup of the extend_lv issue?
(In reply to Kevin Alon Goldblatt from comment #0)
> Steps to Reproduce:
> 1. Create a VM with 4 disks (block thin, block preallocated, NFS thin and NFS preallocated)
> 2. Start the VM
> 3. Create 4 snapshots consecutively
> 4. VM --> Disks --> Select all disks and move them to new storage domains (all with more than sufficient space)
>
> Both block domains are reported to be out of disk space, with an exception during extend.

What's the /minimal/ reproducer for this scenario?
Adam, can this be related to the bug you had with the endless extend that fills up the space in the domain?
(In reply to Allon Mureinik from comment #3)
> What's the /minimal/ reproducer for this scenario?

Create a VM with 2 preallocated block disks
Start the VM
Create a snapshot
Move both disks to another storage domain - all space is used up
Kevin, what libvirt-python version do you have?
(In reply to Allon Mureinik from comment #6)
> Kevin, what libvirt-python version do you have?

libvirt from:
http://download.devel.redhat.com/brewroot/packages/libvirt/1.2.8/16.el7_1.2/x86_64/

[root@nott-vds1 ~]# rpm -qa libvirt*
libvirt-daemon-driver-nwfilter-1.2.8-16.el7_1.2.x86_64
libvirt-daemon-driver-nodedev-1.2.8-16.el7_1.2.x86_64
libvirt-lock-sanlock-1.2.8-16.el7_1.2.x86_64
libvirt-python-1.2.8-7.el7.x86_64
libvirt-client-1.2.8-16.el7_1.2.x86_64
libvirt-daemon-driver-network-1.2.8-16.el7_1.2.x86_64
libvirt-daemon-driver-qemu-1.2.8-16.el7_1.2.x86_64
libvirt-daemon-driver-interface-1.2.8-16.el7_1.2.x86_64
libvirt-daemon-driver-storage-1.2.8-16.el7_1.2.x86_64
libvirt-daemon-config-nwfilter-1.2.8-16.el7_1.2.x86_64
libvirt-daemon-1.2.8-16.el7_1.2.x86_64
libvirt-daemon-driver-secret-1.2.8-16.el7_1.2.x86_64
libvirt-daemon-kvm-1.2.8-16.el7_1.2.x86_64
Kevin, please share the output of this command:

python -c "import libvirt; print libvirt.VIR_DOMAIN_BLOCK_REBASE_COPY_DEV"
[root@nott-vds1 ~]# python -c "import libvirt; print libvirt.VIR_DOMAIN_BLOCK_REBASE_COPY_DEV"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
AttributeError: 'module' object has no attribute 'VIR_DOMAIN_BLOCK_REBASE_COPY_DEV'

Following an offline conversation with Nir: the installed libvirt-python is too old.
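For checking other hosts in one shot, here is a minimal sketch (my own helper, not an official tool; it assumes only that libvirt-python is importable):

# Probe for the flag needed by live storage migration to a block device.
# getattr() with a default avoids the AttributeError that the one-liner
# above raises on libvirt-python builds predating the constant.
import libvirt

flag = getattr(libvirt, "VIR_DOMAIN_BLOCK_REBASE_COPY_DEV", None)
if flag is None:
    print "libvirt-python too old: VIR_DOMAIN_BLOCK_REBASE_COPY_DEV is missing"
else:
    print "VIR_DOMAIN_BLOCK_REBASE_COPY_DEV = %d" % flag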
Vdsm must require a libvirt-python version that provides libvirt.VIR_DOMAIN_BLOCK_REBASE_COPY_DEV.

Without libvirt >= 1.2.8 and the matching libvirt-python, live storage migration causes the disk type to change from block to file, breaking the disk extension logic.
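To illustrate the mechanism, a sketch of the libvirt call only (not vdsm's actual code; the VM name, drive alias, and destination path are hypothetical):

# Illustrative sketch, not vdsm's implementation. With COPY alone, libvirt
# treats the copy destination as a regular file; adding COPY_DEV marks it
# as a block device, so the mirrored disk keeps block semantics and
# thin-provisioning extend requests keep working during the migration.
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("vm_test3")  # hypothetical VM name

flags = (libvirt.VIR_DOMAIN_BLOCK_REBASE_COPY |
         libvirt.VIR_DOMAIN_BLOCK_REBASE_COPY_DEV |   # absent in old libvirt-python
         libvirt.VIR_DOMAIN_BLOCK_REBASE_REUSE_EXT)   # destination LV pre-created

# "vda" and the LV path stand in for the migrated disk and its
# destination logical volume on the target block domain.
dom.blockRebase("vda", "/dev/target_vg/target_lv", 0, flags)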
*** This bug has been marked as a duplicate of bug 1196049 ***