Bug 1430709

Summary: [VMWare] Provision fails if we have common network named DPortGroup
Product: Red Hat CloudForms Management Engine
Reporter: Leo Khomenko <lkhomenk>
Component: Providers
Assignee: Adam Grare <agrare>
Status: CLOSED ERRATA
QA Contact: Leo Khomenko <lkhomenk>
Severity: medium
Priority: high
Version: 5.8.0
CC: anewman, cpelland, gblomqui, jfrey, jhardy, lkhomenk, obarenbo, simaishi
Target Milestone: GA
Target Release: 5.8.0
Hardware: Unspecified
OS: Unspecified
Whiteboard: vmware:provider
Fixed In Version: 5.8.0.7
Type: Bug
Last Closed: 2017-05-31 14:41:26 UTC
Cloudforms Team: VMware
Attachments:
Vsphere network config

Description Leo Khomenko 2017-03-09 11:41:23 UTC
Created attachment 1261516
Vsphere network config

Description of problem: If the VC contains a common network named DPortGroup (see the attachment for details), that network isn't shown in the select list during VM provisioning, and provisioning fails:

[----] I, [2017-03-09T06:26:48.531654 #49845:f193fc]  INFO -- : Q-task_id([miq_provision_21]) <AutomationEngine> Calling Create Notification type: automate_user_error subject type: MiqRequest id: 21 options: {:message=>"VM Provision Error: [EVM] VM [test-provt-ekbp] Step [CheckProvisioned] Status [[MiqException::MiqProvisionError]: Port group [DPortGroup] is not available on target] Message [[MiqException::MiqProvisionError]: Port group [DPortGroup] is not available on target] "}



Version-Release number of selected component (if applicable): VC6 + cfme 5.8.0.4/0.3 or 5.6.4.*


How reproducible: 100%


Steps to Reproduce:
1. Prepare the environment - add a DPortGroup network
2. Try to provision a VM to the DPortGroup (DSwitch) network


Actual results: Provisioning fails


Expected results: Provisioning should either succeed, or both available DPortGroup networks should be shown in the list


Additional info:

Comment 3 Adam Grare 2017-03-13 15:08:57 UTC
1. Why would you do this???? :)
2. Great find, this is due to how we store VLANs in the provision_workflow and how we handle DVPortGroups in a two-step fashion.

First we get a list of all networks and dvportgroups and use the name as the key (this leads to only one entry for your "DPortGroup" LAN): https://github.com/ManageIQ/manageiq/blob/master/app/models/miq_provision_virt_workflow.rb#L221

Second we modify the vlan hash so that the key is "dvs_DPortGroup" and the name is "DPortGroup (DSwitch)", and delete the old key: https://github.com/ManageIQ/manageiq-providers-vmware/blob/master/app/models/manageiq/providers/vmware/infra_manager/provision_workflow.rb#L141

Because it deletes the old key, you end up with just the DVS LAN entry.
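
For illustration, a minimal Ruby sketch of the collision described above; the hash shape, literal names, and inline data are assumptions for illustration, not the actual workflow code:

  # Step 1: everything is keyed by name, so a standard network and a
  # DVPortGroup both named "DPortGroup" collide on a single key.
  vlans = {}
  networks = [{:name => "DPortGroup", :type => :network},
              {:name => "DPortGroup", :type => :dvportgroup, :switch => "DSwitch"}]
  networks.each { |n| vlans[n[:name]] = n[:name] }
  # vlans => {"DPortGroup" => "DPortGroup"}  (only one entry survives)

  # Step 2: DVPortGroup entries are re-keyed as "dvs_<name>" and the old
  # key is deleted, which also removes the standard network's entry.
  networks.select { |n| n[:type] == :dvportgroup }.each do |dvpg|
    vlans["dvs_#{dvpg[:name]}"] = "#{dvpg[:name]} (#{dvpg[:switch]})"
    vlans.delete(dvpg[:name])
  end
  # vlans => {"dvs_DPortGroup" => "DPortGroup (DSwitch)"}
  # The standard "DPortGroup" network is gone from the select list.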

Comment 5 CFME Bot 2017-03-20 19:31:13 UTC
New commit detected on ManageIQ/manageiq/master:
https://github.com/ManageIQ/manageiq/commit/8eb552ca8eaf886d3c2e46f63b8674ef8db18b3d

commit 8eb552ca8eaf886d3c2e46f63b8674ef8db18b3d
Author:     Adam Grare <agrare>
AuthorDate: Mon Mar 13 11:13:48 2017 -0400
Commit:     Adam Grare <agrare>
CommitDate: Mon Mar 20 10:27:46 2017 -0400

    Set dvpg keys using dvs_ in provision workflow
    
    Keep DVPortGroups with the same name as a standard Network from
    colliding in the vlans hash
    
    https://bugzilla.redhat.com/show_bug.cgi?id=1430709

 app/models/miq_provision_virt_workflow.rb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
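
Based on the commit message and the one-line diff stat above, the fix plausibly applies the dvs_ prefix when the vlans hash is first built, so a DVPortGroup never shares a key with a standard network. A hedged sketch of the effect, continuing the illustration from comment 3 (not the actual diff):

  # Assumed shape, for illustration only.
  vlans = {}
  networks = [{:name => "DPortGroup", :type => :network},
              {:name => "DPortGroup", :type => :dvportgroup, :switch => "DSwitch"}]
  networks.each do |n|
    if n[:type] == :dvportgroup
      # DVPortGroups get the dvs_ prefix up front, avoiding the collision.
      vlans["dvs_#{n[:name]}"] = "#{n[:name]} (#{n[:switch]})"
    else
      vlans[n[:name]] = n[:name]
    end
  end
  # vlans => {"DPortGroup" => "DPortGroup", "dvs_DPortGroup" => "DPortGroup (DSwitch)"}
  # Both networks now appear in the select list.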

Comment 7 Leo Khomenko 2017-03-24 21:16:40 UTC
Verified on 5.8.0.7, but found a new bug with auto_placement:
https://bugzilla.redhat.com/show_bug.cgi?id=1435814

Comment 9 errata-xmlrpc 2017-05-31 14:41:26 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2017:1367