Bug 1430709 - [VMWare] Provision fails if we have common network named DPortGroup
Summary: [VMWare] Provision fails if we have common network named DPortGroup
Alias: None
Product: Red Hat CloudForms Management Engine
Classification: Red Hat
Component: Providers
Version: 5.8.0
Hardware: Unspecified
OS: Unspecified
Target Milestone: GA
: 5.8.0
Assignee: Adam Grare
QA Contact: Leo Khomenko
Whiteboard: vmware:provider
Depends On:
Reported: 2017-03-09 11:41 UTC by Leo Khomenko
Modified: 2017-05-31 14:41 UTC (History)
8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Last Closed: 2017-05-31 14:41:26 UTC
Category: ---
Cloudforms Team: VMware
Target Upstream Version:

Attachments (Terms of Use)
Vsphere network config (9.12 KB, image/png)
2017-03-09 11:41 UTC, Leo Khomenko

System ID Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2017:1367 normal SHIPPED_LIVE Moderate: CFME 5.8.0 security, bug, and enhancement update 2017-05-31 18:16:03 UTC

Description Leo Khomenko 2017-03-09 11:41:23 UTC
Created attachment 1261516 [details]
Vsphere network config

Description of problem: If the VC contains a standard network named DPortGroup (see attachment for details), it is not shown in the select list during VM provisioning, and provisioning fails.

[----] I, [2017-03-09T06:26:48.531654 #49845:f193fc]  INFO -- : Q-task_id([miq_provision_21]) <AutomationEngine> Calling Create Notification type: automate_user_error subject type: MiqRequest id: 21 options: {:message=>"VM Provision Error: [EVM] VM [test-provt-ekbp] Step [CheckProvisioned] Status [[MiqException::MiqProvisionError]: Port group [DPortGroup] is not available on target] Message [[MiqException::MiqProvisionError]: Port group [DPortGroup] is not available on target] "}

Version-Release number of selected component (if applicable):VC6 + cfme or 5.6.4.*

How reproducible: 100%

Steps to Reproduce:
1. Prepare environment - add a DPortGroup network
2. Try to provision a VM to the DPortGroup (DSwitch) network

Actual results: Provision fails

Expected results: Provisioning should either succeed, or both available DPortGroup networks should be shown in the list

Additional info:

Comment 3 Adam Grare 2017-03-13 15:08:57 UTC
1. Why would you do this???? :)
2. Great find, this is due to how we store VLANs in the provision_workflow and how we handle DVPortGroups in a two-step fashion.

First we get a list of all networks and dvportgroups and use the name for the key (this leads to only one entry for your "DPortGroup" LAN): https://github.com/ManageIQ/manageiq/blob/master/app/models/miq_provision_virt_workflow.rb#L221

Second we modify the vlan hash so that the key is "dvs_DPortGroup" and the name is "DPortGroup (DSwitch)" and delete the old key: https://github.com/ManageIQ/manageiq-providers-vmware/blob/master/app/models/manageiq/providers/vmware/infra_manager/provision_workflow.rb#L141

Because it deletes the old key, you end up with just the DVS LAN entry.
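A minimal Ruby sketch of the collision described above (simplified hypothetical data and helper hashes, not the actual ManageIQ workflow code):

```ruby
# Step 1: standard networks and DVPortGroups are merged into one hash keyed
# by name, so a standard network and a DVPortGroup both named "DPortGroup"
# end up under a single key.
networks     = ["DPortGroup", "VM Network"]
dvportgroups = [{ :name => "DPortGroup", :switch => "DSwitch" }]

vlans = {}
networks.each     { |n|  vlans[n] = n }
dvportgroups.each { |pg| vlans[pg[:name]] = pg[:name] } # overwrites the standard network entry

# Step 2: DVPortGroup entries are re-keyed as "dvs_<name>" and the old key
# is deleted, which also removes the standard "DPortGroup" network.
dvportgroups.each do |pg|
  vlans["dvs_#{pg[:name]}"] = "#{pg[:name]} (#{pg[:switch]})"
  vlans.delete(pg[:name])
end

vlans
# => {"VM Network"=>"VM Network", "dvs_DPortGroup"=>"DPortGroup (DSwitch)"}
```

Only the DVS entry survives, so selecting the DVPortGroup in the UI can silently target the wrong LAN and the standard network is never offered at all.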

Comment 5 CFME Bot 2017-03-20 19:31:13 UTC
New commit detected on ManageIQ/manageiq/master:

commit 8eb552ca8eaf886d3c2e46f63b8674ef8db18b3d
Author:     Adam Grare <agrare@redhat.com>
AuthorDate: Mon Mar 13 11:13:48 2017 -0400
Commit:     Adam Grare <agrare@redhat.com>
CommitDate: Mon Mar 20 10:27:46 2017 -0400

    Set dvpg keys using dvs_ in provision workflow
    Keep DVPortGroups with the same name as a standard Network from
    colliding in the vlans hash

 app/models/miq_provision_virt_workflow.rb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
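In the same simplified terms, the fix keys DVPortGroups as "dvs_&lt;name&gt;" from the start, so they never collide with a standard network of the same name (again a hedged sketch with hypothetical data, not the actual one-line change):

```ruby
# With dvpg keys set using "dvs_" up front, both entries coexist in the hash.
networks     = ["DPortGroup", "VM Network"]
dvportgroups = [{ :name => "DPortGroup", :switch => "DSwitch" }]

vlans = {}
networks.each { |n| vlans[n] = n }
dvportgroups.each do |pg|
  vlans["dvs_#{pg[:name]}"] = "#{pg[:name]} (#{pg[:switch]})"
end

vlans
# => {"DPortGroup"=>"DPortGroup", "VM Network"=>"VM Network",
#     "dvs_DPortGroup"=>"DPortGroup (DSwitch)"}
```

The second re-keying step then has nothing to delete, so both the standard network and the DVPortGroup appear in the provisioning select list.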

Comment 7 Leo Khomenko 2017-03-24 21:16:40 UTC
verified on but found new bug with auto_placement

Comment 9 errata-xmlrpc 2017-05-31 14:41:26 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

