Bug 1339287

Summary: REST API vmpool increase won't join domain
Product: Red Hat Enterprise Virtualization Manager
Reporter: Jason <jbryant>
Component: ovirt-engine
Assignee: Shahar Havivi <shavivi>
Status: CLOSED ERRATA
QA Contact: sefi litmanovich <slitmano>
Severity: high
Priority: high
Version: 3.6.5
CC: dornelas, gklein, gscott, jbryant, juan.hernandez, lsurette, mavital, melewis, mgoldboi, michal.skrivanek, mkalinin, rbalakri, Rhev-m-bugs, shavivi, srevivo, tjelinek, ykaul
Target Milestone: ovirt-4.0.0-rc
Keywords: ZStream
Target Release: 4.0.0
Flags: mavital: needinfo+
Hardware: All
OS: Linux
Fixed In Version: 4.0.0-12
Doc Type: Bug Fix
Doc Text:
Previously, when virtual machines were added to an existing virtual machine pool via the REST API, they did not receive the correct initialization parameters for sysprep or cloud-init. This has been corrected, and virtual machines added via the REST API now receive the correct sysprep or cloud-init initialization parameters.
Cloned To: 1342389 (view as bug list)
Last Closed: 2016-08-23 20:40:18 UTC
Type: Bug
oVirt Team: Virt
Bug Blocks: 1342389

Comment 1 Derrick Ornelas 2016-05-24 22:46:37 UTC
I've reproduced this on 3.6.5

1.  Create initial pool with X number of VMs

2016-05-24 17:51:35,382 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-21) [2118d83d] Correlation ID: 26098568, Job ID: a0953d09-c2ac-48ff-be04-b8748c098f6b, Call Stack: null, Custom Event ID: -1, Message: VM pool3-1 creation was initiated by admin@internal.

2016-05-24 17:51:36,804 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-21) [1d4c42dd] Correlation ID: 5875c466, Job ID: a0953d09-c2ac-48ff-be04-b8748c098f6b, Call Stack: null, Custom Event ID: -1, Message: VM pool3-2 creation was initiated by admin@internal.

2016-05-24 17:51:38,274 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-21) [273e790d] Correlation ID: 1e695d32, Job ID: a0953d09-c2ac-48ff-be04-b8748c098f6b, Call Stack: null, Custom Event ID: -1, Message: VM pool3-3 creation was initiated by admin@internal.



2.  Using the UI, extend the pool to X+1 VMs

2016-05-24 17:52:39,047 INFO  [org.ovirt.engine.core.bll.UpdateVmPoolWithVmsCommand] (org.ovirt.thread.pool-6-thread-40) [4cc4a268] Running command: UpdateVmPoolWithVmsCommand internal: false. Entities affected :  ID: 843b40e0-5af8-409f-b5a5-85f3e14d2d08 Type: VmPoolAction group EDIT_VM_POOL_CONFIGURATION with role type USER
2016-05-24 17:52:39,052 INFO  [org.ovirt.engine.core.bll.UpdateVmPoolWithVmsCommand] (org.ovirt.thread.pool-6-thread-40) [4cc4a268] Lock freed to object 'EngineLock:{exclusiveLocks='[843b40e0-5af8-409f-b5a5-85f3e14d2d08=<VM_POOL, ACTION_TYPE_FAILED_VM_POOL_IS_BEING_UPDATED$VmPoolName pool3>]', sharedLocks='null'}'
2016-05-24 17:52:39,095 INFO  [org.ovirt.engine.core.bll.AddVmAndAttachToPoolCommand] (org.ovirt.thread.pool-6-thread-40) [19b6821d] Lock Acquired to object 'EngineLock:{exclusiveLocks='null', sharedLocks='[843b40e0-5af8-409f-b5a5-85f3e14d2d08=<VM_POOL, ACTION_TYPE_FAILED_VM_IS_BEING_CREATED_AND_ATTACHED_TO_POOL$VmPoolName pool3>]'}'
2016-05-24 17:52:39,153 INFO  [org.ovirt.engine.core.bll.AddVmAndAttachToPoolCommand] (org.ovirt.thread.pool-6-thread-40) [19b6821d] Running command: AddVmAndAttachToPoolCommand internal: true. Entities affected :  ID: 00000002-0002-0002-0002-00000000036d Type: VdsGroups,  ID: eeffc068-93c7-4b8f-bed4-f5e76b197c5e Type: VmTemplate,  ID: 8c7cd146-cbed-4db2-8665-6757343155b5 Type: StorageAction group CREATE_DISK with role type USER
2016-05-24 17:52:39,172 INFO  [org.ovirt.engine.core.bll.AddVmCommand] (org.ovirt.thread.pool-6-thread-40) [57627e2b] Lock Acquired to object 'EngineLock:{exclusiveLocks='[pool3-4=<VM_NAME, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='[843b40e0-5af8-409f-b5a5-85f3e14d2d08=<VM_POOL, ACTION_TYPE_FAILED_VM_POOL_IS_USED_FOR_CREATE_VM$VmName pool3-4>, 90b591d7-eea7-465b-9c81-47ded3a68792=<DISK, ACTION_TYPE_FAILED_DISK_IS_USED_FOR_CREATE_VM$VmName pool3-4>, eeffc068-93c7-4b8f-bed4-f5e76b197c5e=<TEMPLATE, ACTION_TYPE_FAILED_TEMPLATE_IS_USED_FOR_CREATE_VM$VmName pool3-4>]'}'
2016-05-24 17:52:39,230 INFO  [org.ovirt.engine.core.bll.AddVmCommand] (org.ovirt.thread.pool-6-thread-40) [57627e2b] Running command: AddVmCommand internal: true. Entities affected :  ID: 00000002-0002-0002-0002-00000000036d Type: VdsGroupsAction group CREATE_VM with role type USER,  ID: eeffc068-93c7-4b8f-bed4-f5e76b197c5e Type: VmTemplateAction group CREATE_VM with role type USER,  ID: 8c7cd146-cbed-4db2-8665-6757343155b5 Type: StorageAction group CREATE_DISK with role type USER


3.  Ensure new pool VM works with sysprep

2016-05-24 17:53:18,309 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVmFromSysPrepVDSCommand] (org.ovirt.thread.pool-6-thread-42) [42311f2] START, CreateVmFromSysPrepVDSCommand(HostName = rhevh-20, CreateVmVDSCommandParameters:{runAsync='true', hostId='402c6fe9-d9e2-4992-9926-ee8aa60eebd9', vmId='5ae264d0-4750-4929-affe-9f3fff7971c3', vm='VM [pool3-4]'}), log id: 5bf60b5e


4.  Using API, extend pool again

curl -k -u admin@internal:password -X PUT -d "<vmpool><size>6</size></vmpool>" https://rhev-m.example.com/ovirt-engine/api/vmpools/843b40e0-5af8-409f-b5a5-85f3e14d2d08 -H "Content-type: application/xml" 

2016-05-24 17:56:48,547 INFO  [org.ovirt.engine.core.bll.UpdateVmPoolWithVmsCommand] (ajp-/127.0.0.1:8702-2) [5a2c2d31] Running command: UpdateVmPoolWithVmsCommand internal: false. Entities affected :  ID: 843b40e0-5af8-409f-b5a5-85f3e14d2d08 Type: VmPoolAction group EDIT_VM_POOL_CONFIGURATION with role type USER
2016-05-24 17:56:48,568 INFO  [org.ovirt.engine.core.bll.UpdateVmPoolWithVmsCommand] (ajp-/127.0.0.1:8702-2) [5a2c2d31] Lock freed to object 'EngineLock:{exclusiveLocks='[843b40e0-5af8-409f-b5a5-85f3e14d2d08=<VM_POOL, ACTION_TYPE_FAILED_VM_POOL_IS_BEING_UPDATED$VmPoolName pool3>]', sharedLocks='null'}'
2016-05-24 17:56:48,838 INFO  [org.ovirt.engine.core.bll.AddVmAndAttachToPoolCommand] (ajp-/127.0.0.1:8702-2) [3239fa21] Lock Acquired to object 'EngineLock:{exclusiveLocks='null', sharedLocks='[843b40e0-5af8-409f-b5a5-85f3e14d2d08=<VM_POOL, ACTION_TYPE_FAILED_VM_IS_BEING_CREATED_AND_ATTACHED_TO_POOL$VmPoolName pool3>]'}'
2016-05-24 17:56:48,913 INFO  [org.ovirt.engine.core.bll.AddVmAndAttachToPoolCommand] (ajp-/127.0.0.1:8702-2) [3239fa21] Running command: AddVmAndAttachToPoolCommand internal: true. Entities affected :  ID: 00000002-0002-0002-0002-00000000036d Type: VdsGroups,  ID: eeffc068-93c7-4b8f-bed4-f5e76b197c5e Type: VmTemplate,  ID: 8c7cd146-cbed-4db2-8665-6757343155b5 Type: StorageAction group CREATE_DISK with role type USER
2016-05-24 17:56:48,939 INFO  [org.ovirt.engine.core.bll.AddVmCommand] (ajp-/127.0.0.1:8702-2) [61ab4ff8] Lock Acquired to object 'EngineLock:{exclusiveLocks='[pool3-5=<VM_NAME, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='[843b40e0-5af8-409f-b5a5-85f3e14d2d08=<VM_POOL, ACTION_TYPE_FAILED_VM_POOL_IS_USED_FOR_CREATE_VM$VmName pool3-5>, 90b591d7-eea7-465b-9c81-47ded3a68792=<DISK, ACTION_TYPE_FAILED_DISK_IS_USED_FOR_CREATE_VM$VmName pool3-5>, eeffc068-93c7-4b8f-bed4-f5e76b197c5e=<TEMPLATE, ACTION_TYPE_FAILED_TEMPLATE_IS_USED_FOR_CREATE_VM$VmName pool3-5>]'}'
2016-05-24 17:56:49,033 INFO  [org.ovirt.engine.core.bll.AddVmCommand] (ajp-/127.0.0.1:8702-2) [61ab4ff8] Running command: AddVmCommand internal: true. Entities affected :  ID: 00000002-0002-0002-0002-00000000036d Type: VdsGroupsAction group CREATE_VM with role type USER,  ID: eeffc068-93c7-4b8f-bed4-f5e76b197c5e Type: VmTemplateAction group CREATE_VM with role type USER,  ID: 8c7cd146-cbed-4db2-8665-6757343155b5 Type: StorageAction group CREATE_DISK with role type USER
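
The new pool size can be confirmed with a plain GET on the same resource (illustrative sketch; same placeholder credentials and pool ID as above):

curl -k -u admin@internal:password -H "Accept: application/xml" https://rhev-m.example.com/ovirt-engine/api/vmpools/843b40e0-5af8-409f-b5a5-85f3e14d2d08

The returned vmpool document includes the current size element.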


5.  Test new VM

2016-05-24 17:57:19,624 INFO  [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (org.ovirt.thread.pool-6-thread-40) [55295b7c] START, CreateVmVDSCommand( CreateVmVDSCommandParameters:{runAsync='true', hostId='402c6fe9-d9e2-4992-9926-ee8aa60eebd9', vmId='eb913b47-8e32-4ec2-a963-43d490833d79', vm='VM [pool3-5]'}), log id: 138395e9


The sysprep payload is not presented to pool VMs that are created when the pool size is increased via an API call.

Comment 2 Juan Hernández 2016-06-01 15:33:09 UTC
Shouldn't this be targeted to 3.6.8?

Comment 4 Shahar Havivi 2016-06-02 08:08:35 UTC
(In reply to Juan Hernández from comment #2)
> Shouldn't this be targeted to 3.6.8?

Yes,
I am talking with Moran...

Comment 6 Greg Scott 2016-06-02 13:49:50 UTC
I was just getting ready to ask if we could target this for 3.6.7 and noticed Moran is way ahead of me.  Thank you thank you thank you!

Comment 8 sefi litmanovich 2016-06-05 15:57:24 UTC
If I understand correctly, the bug was that a vm newly added via the API does not inherit the initialization parameters.
I tried it out on ovirt-engine-4.0.0.2-0.1.el7ev.noarch (rc), and there's a difference between 2 scenarios (which makes sense to me when I look at the patch):

1) (the one solved by the patch):

a. Create a template - set some initialization parameters on template.
b. Create pool with X vms - all initialized according to values on template.
c. Add 1 more vm to the pool with webadmin - the vm is added with the correct initialization values and starts accordingly.
d. Add 2 more vms to the pool via the API - the vms are added with the correct initialization values and start accordingly.

This one works and can be verified.

2) In this case only the original vms get the initialization param values, and the new vms (from both webadmin and the API) do not:

a. Create a template - DO NOT set any initialization parameters.
b. Create a pool with X vms from the template and add the initialization params values on the creation of the pool
c. Add 1 more vm to the pool with webadmin - it does not get the initialization values.
d. Add 2 more vms to the pool via the API - they do not get the initialization values.

Let me know whether I should verify this bug (based on flow 1) and open a new bug (for flow 2), or whether the bug should be reassigned. One way to check which flow a given pool vm ended up in is sketched below.
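
Illustrative check: fetch one of the new vms over the API and inspect its initialization element. Abbreviated sketch assuming the v3 element layout; the vm id and values are placeholders:

curl -k -u admin@internal:password https://rhev-m.example.com/ovirt-engine/api/vms/<vm-id>

<vm>
  <initialization>
    <domain>example.com</domain>
    <host_name>pool3-4</host_name>
  </initialization>
</vm>

A vm hit by the bug would be expected to come back without these pool/template values.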

Comment 9 Greg Scott 2016-06-05 20:41:31 UTC
Some attributes belong to the pool - not the template - such as the name of the Active Directory domain and OU for the pooled Windows VMs to join.  Those attributes should be enforced with all VMs in the pool, whether created by API call or UI.  If I'm reading scenario 2 above correctly, that seems to break desired behavior.

Comment 10 Shahar Havivi 2016-06-06 07:10:28 UTC
(In reply to sefi litmanovich from comment #8)
Sefi, the bug is simple to reproduce:

create a pool from a template with sysprep,

increase the number of VMs via the UI and check that the new VMs have sysprep (works via the UI),

increase the number of VMs via REST and check that the new VMs have sysprep (doesn't work via REST; the patch fixes it).

Comment 12 sefi litmanovich 2016-06-07 12:52:50 UTC
After some deliberation, this bug should be reassigned: the fix handles cases where the vm pool is created based on the template only, meaning that init params are fetched from the template.
If the vm pool is created from a template with no init param values, and these are added to that pool explicitly, then this bug still reproduces.

Comment 13 Yaniv Kaul 2016-06-07 13:06:20 UTC
(In reply to sefi litmanovich from comment #12)
> After some deliberation, this bug should be reassigned: the fix handles
> cases where the vm pool is created based on the template only, meaning
> that init params are fetched from the template.
> If the vm pool is created from a template with no init param values,
> and these are added to that pool explicitly, then this bug still
> reproduces.

I'd argue that this is a slightly different use case which should be covered by a different bug.

Comment 14 Greg Scott 2016-06-07 13:49:07 UTC
There are some pieces that cannot be done in the template.  With Windows VMs you have to do Active Directory domain membership as part of pool creation - not as part of the template - because every system in an Active Directory domain needs a unique hostname and is assigned a unique SID at the time it joins.

So when you add new VMs to a pool, they need all those pool attributes you set up at pool creation, so all VMs in the pool end up with a proper unattend.xml to drive mini-setup at firstboot.

So the template holds the sealed virtual machine, and the pool settings give each VM what it needs to operate.
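
For illustration, this is the kind of domain-join section that ends up in unattend.xml for mini-setup to consume. The element names are standard Windows Setup answer-file schema; all values here are placeholders, not taken from this bug:

<settings pass="specialize">
  <component name="Microsoft-Windows-UnattendedJoin" processorArchitecture="amd64"
             publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
    <Identification>
      <Credentials>
        <Domain>example.com</Domain>
        <Username>join-account</Username>
        <Password>********</Password>
      </Credentials>
      <JoinDomain>example.com</JoinDomain>
      <MachineObjectOU>OU=PoolVMs,DC=example,DC=com</MachineObjectOU>
    </Identification>
  </component>
</settings>

The pool-level AD domain and OU settings are what feed JoinDomain and MachineObjectOU, which is why they cannot live in the sealed template.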

Comment 15 Greg Scott 2016-06-08 17:12:36 UTC
I'm getting more customer feedback on the impact of all this.  And after re-reading the comments, here's the desired behavior.  Note the behavior should be the same, whether triggered in the UI or API.

1.  Create a Windows VM, seal it with Sysprep, and make a template.
2.  Set up a pool based on the template.  In the pool, specify AD domain and OU to join.
3.  Make a bunch of VMs.  They should all have an unattend.xml that meets the requirements above, based on settings in the pool.
4.  Extend the pool later on when we need more VMs.  Unattend.xml for these new VMs should be generated the same way as the original ones.  (The bug is, after extending the pool via the API, the new unattend.xml is apparently generated without the pool settings.)

The patch with 3.6.7 makes it work this way, right?

Comment 16 Shahar Havivi 2016-06-09 06:40:42 UTC
(In reply to Greg Scott from comment #15)
The patch that was merged fixes the template sysprep parameters, which were not copied to the newly added vms, but not the pool AD and OU settings that you added after sealing the template.
The new patch (still not merged) fixes the issue with the pool settings as well.

Comment 17 Michal Skrivanek 2016-06-09 06:44:41 UTC
(In reply to Greg Scott from comment #15) 
> The patch with 3.6.7 makes it work this way, right?


This bug is tracking 4.0 changes, please raise and track your backport request in the zstream clone bug 1339287

Comment 18 Greg Scott 2016-06-09 12:58:19 UTC
Michal, that link above points right back here.  We really really really need this fixed ASAP in 3.6.z - if there's a zstream clone of this bug somewhere, I'll be happy to go in there too.  How do I find it?

thanks

- Greg

Comment 19 Shahar Havivi 2016-06-09 13:08:44 UTC
Patch is ready and easy to merge if needed.

Comment 20 Greg Scott 2016-06-09 13:47:28 UTC
OK thanks.  And I'm a dork - I just noticed the other bugs are in a table right at the top of this one.  It's definitely needed in 3.6.  I'll put in a comment on those.

- Greg

Comment 21 Greg Scott 2016-06-09 14:05:03 UTC
I'm still a dork.  The links in that table at the top of this bug have valuable info and explain some things, but they're not the bz clones I was looking for.

But I think I found the 3.6.z clone of this bug at https://bugzilla.redhat.com/show_bug.cgi?id=1342389

If there's still a way to merge everything to fix this bug into 3.6.7, if I get a vote, I vote yes.

thanks

- Greg

Comment 24 sefi litmanovich 2016-06-23 16:56:02 UTC
Verified with rhevm-4.0.0.6-0.1.el7ev.noarch.

Checked increasing a vm pool that was created from a template with init params, and also a vm pool that had init params set at creation rather than from the template.
For each, I checked increasing the vm pool from the UI, from API v3, and from API v4.
In all cases the new vms inherited the correct init param values.
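
For reference, a v4 version of the resize call from comment 1 would look like this (illustrative; same placeholder host and pool ID as in comment 1, and assuming the v4 snake_case element name vm_pool):

curl -k -u admin@internal:password -X PUT -d "<vm_pool><size>6</size></vm_pool>" -H "Content-type: application/xml" https://rhev-m.example.com/ovirt-engine/api/vmpools/843b40e0-5af8-409f-b5a5-85f3e14d2d08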

Comment 25 Greg Scott 2016-07-18 12:46:17 UTC
This should hopefully clear the needinfo flag.

Comment 27 errata-xmlrpc 2016-08-23 20:40:18 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2016-1743.html