Bug 1635337
Summary: [Downstream Clone] Cannot assign VM from VmPool: oVirt claims it's already attached but it's not
Product: Red Hat Enterprise Virtualization Manager
Component: ovirt-engine
Status: CLOSED ERRATA
Severity: medium
Priority: medium
Version: 4.1.11
Target Milestone: ovirt-4.3.5
Target Release: 4.3.0
Hardware: Unspecified
OS: Unspecified
Fixed In Version: ovirt-engine-4.3.3.1
Doc Type: If docs needed, set a value
Doc Text: This release ensures that virtual machines within a virtual machine pool can be attached to a user.
Reporter: Federico Sun <fsun>
Assignee: Tomasz Barański <tbaransk>
QA Contact: Nisim Simsolo <nsimsolo>
CC: abpatil, bugs, lsvaty, mavital, michal.skrivanek, nicolas, nobody, nsimsolo, pagranat, rbarry, Rhev-m-bugs, rik.theys, tbaransk, tjelinek
Keywords: Rebase, ZStream
Type: Bug
oVirt Team: Virt
Clone Of: 1462236
Bug Depends On: 1462236, 1700389
Last Closed: 2019-08-12 11:53:27 UTC
Description
Federico Sun 2018-10-02 16:43:48 UTC

Re-targeting to 4.3.1 since it is missing a patch, an acked blocker flag, or both.

Created attachment 1551051 [details]
engine.log

It looks like the fix caused a regression, found in an automation run on ovirt-engine-4.3.3.1-0.1.el7.noarch (100% reproducible).

Steps to reproduce:
1. Create a pool with prestarted VMs (the pool here was created with two VMs, both prestarted).
2. Try to allocate a VM with the REST API (see the Python sketch after this comment thread):

POST https://{{host}}/ovirt-engine/api/vmpools/9e605316-cddf-4bd1-9f0e-e8f2f10aed1d/allocatevm

body:

<action>
    <async>false</async>
    <grace_period>
        <expiry>10</expiry>
    </grace_period>
</action>

The call fails with the error [Cannot allocate and run VM from VM-Pool. Related operation is currently in progress. Please try again later.]. Such a pool can also never be removed from the engine; removal fails with the same error. The engine log is attached. I only succeeded in removing the pool after restarting the ovirt-engine service.

(In reply to Polina from comment #6)
Back to ASSIGNED. Tomas, are we forgetting to release the locks?

I'll look into it.

I tried to reproduce this on a build that contains the patch, but I couldn't.
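For reference, the allocatevm request from step 2 of the reproduction steps above can be issued from Python with the requests library. This is a minimal sketch, not the exact method used in this report: the engine host and credentials are placeholders, and only the pool ID and the request body come from comment #6.

import requests

ENGINE = "https://engine.example.com"  # placeholder engine FQDN (assumption)
POOL_ID = "9e605316-cddf-4bd1-9f0e-e8f2f10aed1d"  # pool ID from the reproduction steps

BODY = """<action>
    <async>false</async>
    <grace_period>
        <expiry>10</expiry>
    </grace_period>
</action>"""

# POST the allocatevm action; basic auth and a self-signed engine certificate are assumed.
response = requests.post(
    f"{ENGINE}/ovirt-engine/api/vmpools/{POOL_ID}/allocatevm",
    data=BODY,
    headers={"Content-Type": "application/xml", "Accept": "application/xml"},
    auth=("admin@internal", "password"),  # placeholder credentials (assumption)
    verify=False,
)
print(response.status_code)
print(response.text)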
The response to the allocatevm call:

<action>
    <async>false</async>
    <grace_period>
        <expiry>10</expiry>
    </grace_period>
    <status>complete</status>
    <vm href="/ovirt-engine/api/vms/6c9b1a2a-89c3-41e7-97c5-fcc6c6d30f67" id="6c9b1a2a-89c3-41e7-97c5-fcc6c6d30f67">
        <actions>
            <link href="/ovirt-engine/api/vms/6c9b1a2a-89c3-41e7-97c5-fcc6c6d30f67/detach" rel="detach"/>
            <link href="/ovirt-engine/api/vms/6c9b1a2a-89c3-41e7-97c5-fcc6c6d30f67/export" rel="export"/>
            <link href="/ovirt-engine/api/vms/6c9b1a2a-89c3-41e7-97c5-fcc6c6d30f67/ticket" rel="ticket"/>
            <link href="/ovirt-engine/api/vms/6c9b1a2a-89c3-41e7-97c5-fcc6c6d30f67/migrate" rel="migrate"/>
            <link href="/ovirt-engine/api/vms/6c9b1a2a-89c3-41e7-97c5-fcc6c6d30f67/cancelmigration" rel="cancelmigration"/>
            <link href="/ovirt-engine/api/vms/6c9b1a2a-89c3-41e7-97c5-fcc6c6d30f67/commitsnapshot" rel="commitsnapshot"/>
            <link href="/ovirt-engine/api/vms/6c9b1a2a-89c3-41e7-97c5-fcc6c6d30f67/clone" rel="clone"/>
            <link href="/ovirt-engine/api/vms/6c9b1a2a-89c3-41e7-97c5-fcc6c6d30f67/freezefilesystems" rel="freezefilesystems"/>
            <link href="/ovirt-engine/api/vms/6c9b1a2a-89c3-41e7-97c5-fcc6c6d30f67/logon" rel="logon"/>
            <link href="/ovirt-engine/api/vms/6c9b1a2a-89c3-41e7-97c5-fcc6c6d30f67/maintenance" rel="maintenance"/>
            <link href="/ovirt-engine/api/vms/6c9b1a2a-89c3-41e7-97c5-fcc6c6d30f67/previewsnapshot" rel="previewsnapshot"/>
            <link href="/ovirt-engine/api/vms/6c9b1a2a-89c3-41e7-97c5-fcc6c6d30f67/reordermacaddresses" rel="reordermacaddresses"/>
            <link href="/ovirt-engine/api/vms/6c9b1a2a-89c3-41e7-97c5-fcc6c6d30f67/thawfilesystems" rel="thawfilesystems"/>
            <link href="/ovirt-engine/api/vms/6c9b1a2a-89c3-41e7-97c5-fcc6c6d30f67/undosnapshot" rel="undosnapshot"/>
            <link href="/ovirt-engine/api/vms/6c9b1a2a-89c3-41e7-97c5-fcc6c6d30f67/reboot" rel="reboot"/>
            <link href="/ovirt-engine/api/vms/6c9b1a2a-89c3-41e7-97c5-fcc6c6d30f67/shutdown" rel="shutdown"/>
            <link href="/ovirt-engine/api/vms/6c9b1a2a-89c3-41e7-97c5-fcc6c6d30f67/start" rel="start"/>
            <link href="/ovirt-engine/api/vms/6c9b1a2a-89c3-41e7-97c5-fcc6c6d30f67/stop" rel="stop"/>
            <link href="/ovirt-engine/api/vms/6c9b1a2a-89c3-41e7-97c5-fcc6c6d30f67/suspend" rel="suspend"/>
        </actions>
    </vm>
</action>

I'm going to try it with a fresh build from master.

The bug seems to be trickier. I installed a fresh engine from master and created a template and a pool of 2 VMs, both prestarted. Then:
1. I ran the allocatevm action from the API and got the following error: [Cannot allocate and run VM from VM-Pool. There are no available VMs in the VM-Pool.]
2. In the GUI the pool shows 2 running VMs.
3. Listing the pool from the API shows only 1 VM (a sketch of this query follows this comment).
4. Running the allocatevm action again gives the following error: [Cannot allocate and run VM from VM-Pool. Related operation is currently in progress. Please try again later.]
Investigating...

As requested, opened new bug https://bugzilla.redhat.com/show_bug.cgi?id=1700389 instead of https://bugzilla.redhat.com/show_bug.cgi?id=1635337#c6
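Step 3 above ("Listing the pool from the API shows only 1 VM") can be re-checked by fetching the pool resource itself. This is a minimal sketch reusing the placeholder host and credentials from the earlier example; the element names inspected here (name, size) are assumptions about the pool representation and may differ between API versions.

import xml.etree.ElementTree as ET
import requests

ENGINE = "https://engine.example.com"  # placeholder engine FQDN (assumption)
POOL_ID = "9e605316-cddf-4bd1-9f0e-e8f2f10aed1d"

# Fetch the pool resource and print what the engine reports for it.
response = requests.get(
    f"{ENGINE}/ovirt-engine/api/vmpools/{POOL_ID}",
    headers={"Accept": "application/xml"},
    auth=("admin@internal", "password"),  # placeholder credentials (assumption)
    verify=False,
)
pool = ET.fromstring(response.text)
print("pool name:", pool.findtext("name"))
print("pool size:", pool.findtext("size"))  # "size" is an assumed element name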
Verified:
rhvm-4.3.3.6-0.1.el7
vdsm-4.30.13-1.el7ev.x86_64
libvirt-4.5.0-10.el7_6.7.x86_64
qemu-kvm-rhev-2.12.0-18.el7_6.4.x86_64

Verification scenario:
1. Create a pool of 40 VMs from a RHEL 7.6 template with the qemu guest agent. Set prestarted VMs to 0 and the maximum number of VMs per user to 40.
2. From the VM portal, using a user with UserRole permission, run all VMs.
3. Verify all VMs are running and attached to the user, for example (a REST-based cross-check is also sketched at the end of this report):

engine=# select s.vm_name, d.status from vm_static s, vm_dynamic d where s.vm_guid = d.vm_guid and s.vm_name ilike '%vm-pool%' ORDER BY vm_name;
  vm_name   | status
------------+--------
 vm-pool-1  | 1
 vm-pool-10 | 1
 vm-pool-11 | 1
 vm-pool-12 | 1
 vm-pool-13 | 1
 vm-pool-14 | 1
 vm-pool-15 | 1
 vm-pool-16 | 1
 vm-pool-17 | 1
 vm-pool-18 | 1
 vm-pool-19 | 1
 vm-pool-2  | 1
 vm-pool-20 | 1
 vm-pool-21 | 1
 vm-pool-22 | 1
 vm-pool-23 | 1
 vm-pool-24 | 1
 vm-pool-25 | 1
 vm-pool-26 | 1
 vm-pool-27 | 1
 vm-pool-28 | 1
 vm-pool-29 | 1
 vm-pool-3  | 1
 vm-pool-30 | 1
 vm-pool-31 | 1
 vm-pool-32 | 1
 vm-pool-33 | 1
 vm-pool-34 | 1
 vm-pool-35 | 1
 vm-pool-36 | 1
 vm-pool-37 | 1
 vm-pool-38 | 1
 vm-pool-39 | 1
 vm-pool-4  | 1
 vm-pool-40 | 1
 vm-pool-5  | 1
 vm-pool-6  | 1
 vm-pool-7  | 1
 vm-pool-8  | 1
 vm-pool-9  | 1
(40 rows)

engine=# select s.vm_name, d.status, u.name from vm_static s, vm_dynamic d, permissions p, users u where s.vm_guid = d.vm_guid and u.user_id = p.ad_element_id and p.object_type_id = 2 and p.object_id = s.vm_guid and s.vm_name ilike '%vm-pool%' ORDER BY vm_name;
  vm_name   | status | name
------------+--------+-------
 vm-pool-1  | 1      | user1
 vm-pool-10 | 1      | user1
 vm-pool-11 | 1      | user1
 vm-pool-12 | 1      | user1
 vm-pool-13 | 1      | user1
 vm-pool-14 | 1      | user1
 vm-pool-15 | 1      | user1
 vm-pool-16 | 1      | user1
 vm-pool-17 | 1      | user1
 vm-pool-18 | 1      | user1
 vm-pool-19 | 1      | user1
 vm-pool-2  | 1      | user1
 vm-pool-20 | 1      | user1
 vm-pool-21 | 1      | user1
 vm-pool-22 | 1      | user1
 vm-pool-23 | 1      | user1
 vm-pool-24 | 1      | user1
 vm-pool-25 | 1      | user1
 vm-pool-26 | 1      | user1
 vm-pool-27 | 1      | user1
 vm-pool-28 | 1      | user1
 vm-pool-29 | 1      | user1
 vm-pool-3  | 1      | user1
 vm-pool-30 | 1      | user1
 vm-pool-31 | 1      | user1
 vm-pool-32 | 1      | user1
 vm-pool-33 | 1      | user1
 vm-pool-34 | 1      | user1
 vm-pool-35 | 1      | user1
 vm-pool-36 | 1      | user1
 vm-pool-37 | 1      | user1
 vm-pool-38 | 1      | user1
 vm-pool-39 | 1      | user1
 vm-pool-4  | 1      | user1
 vm-pool-40 | 1      | user1
 vm-pool-5  | 1      | user1
 vm-pool-6  | 1      | user1
 vm-pool-7  | 1      | user1
 vm-pool-8  | 1      | user1
 vm-pool-9  | 1      | user1
(40 rows)

4. Repeat steps 1-3, this time creating the pool with 20 prestarted VMs, and attach all of the VMs (prestarted and not prestarted) to the same user.
5. Repeat steps 1-3, this time creating the pool with 40 prestarted VMs.

Re-targeting to 4.3.5 since it missed the 4.3.4 errata, but it was already fixed in 4.3.3.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHEA-2019:2431
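As a cross-check of step 3 of the verification scenario above (status 1 in vm_dynamic is the engine's "Up" state), the same condition can be checked over the REST API instead of the engine database. This is a minimal sketch reusing the placeholder host and credentials from the earlier examples; the flat <status> element and the "vm-pool" name prefix match are assumptions, not part of the verification actually performed.

import xml.etree.ElementTree as ET
import requests

ENGINE = "https://engine.example.com"  # placeholder engine FQDN (assumption)

# List all VMs and print the status of those belonging to the pool,
# matching them by the "vm-pool" name prefix used in the scenario.
response = requests.get(
    f"{ENGINE}/ovirt-engine/api/vms",
    headers={"Accept": "application/xml"},
    auth=("admin@internal", "password"),  # placeholder credentials (assumption)
    verify=False,
)
for vm in ET.fromstring(response.text).findall("vm"):
    name = vm.findtext("name") or ""
    if name.startswith("vm-pool"):
        print(name, vm.findtext("status"))  # expected: every pool VM reports "up"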