Bug 1280402 - Common datastore across multiple vcenter causes inventory confusion for provisions
Status: CLOSED CURRENTRELEASE
Product: Red Hat CloudForms Management Engine
Classification: Red Hat
Component: Providers
Version: 5.4.0
Hardware: All
OS: All
Priority: high
Severity: high
Target Milestone: GA
Target Release: 5.7.0
Assigned To: Adam Grare
Alex Newman
vsphere:datastore:provision:error
Keywords: TestOnly, ZStream
Depends On:
Blocks: 1289742 1337552
Reported: 2015-11-11 11:42 EST by Josh Carter
Modified: 2017-01-11 23:41 EST
9 users

See Also:
Fixed In Version: 5.7.0.0
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
: 1289742 1337552
Environment:
Last Closed: 2017-01-11 15:28:00 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Josh Carter 2015-11-11 11:42:15 EST
Description of problem:

Datastore nfs-freenas01 is shared across 3 vCenters.

Provision Virtual Machines - Select a Template = cbts-rhel71-nics on dc-vcenter
Request = No Changes
Purpose = No Changes
Catalog = VM Name = changeme
Environment = Choose Automatically
Hardware = No Changes
Network = Change to relevant network, VM_Network in this case
Customize = No Changes
Schedule = Uncheck Power on virtual machines after creation

Tried several deploys using the above configuration:
1 - With all 3 providers present and dc-vcenter not the most recently refreshed: FAILED with the error below.
2 - Removed the other 2 providers and their orphaned/archived VMs; dc-vcenter still not refreshed: FAILED with the same error.
3 - Manually refreshed dc-vcenter via the GUI: WORKED.

Both failed provisions used datastore name nfs-freenas01 with the ems_ref datastore-84. The provision that succeeded used the same datastore name, nfs-freenas01, but with the ems_ref datastore-202.

[----] I, [2015-11-10T14:35:34.095714 #2525:367e8c]  INFO -- : Q-task_id([miq_provision_123000000000014]) MIQ(MiqProvisionVmware#log_clone_options) Destination Datastore:      [nfs-freenas01 (datastore-84)]
[----] I, [2015-11-10T14:35:34.106912 #2525:367e8c]  INFO -- : Q-task_id([miq_provision_123000000000014]) MIQ(MiqProvisionVmware#log_clone_options) Prov Options: [:placement_ds_name][1](String) = "nfs-freenas01"
[----] I, [2015-11-10T14:35:34.112761 #2525:367e8c]  INFO -- : Q-task_id([miq_provision_123000000000014]) MIQ(MiqProvisionVmware#log_clone_options) Prov Options: [:dest_storage][1](String) = "nfs-freenas01"


[----] I, [2015-11-10T14:43:09.480317 #2525:367e8c]  INFO -- : Q-task_id([miq_provision_123000000000015]) MIQ(MiqProvisionVmware#log_clone_options) Destination Datastore:      [nfs-freenas01 (datastore-84)]
[----] I, [2015-11-10T14:43:09.491100 #2525:367e8c]  INFO -- : Q-task_id([miq_provision_123000000000015]) MIQ(MiqProvisionVmware#log_clone_options) Prov Options: [:placement_ds_name][1](String) = "nfs-freenas01"
[----] I, [2015-11-10T14:43:09.496761 #2525:367e8c]  INFO -- : Q-task_id([miq_provision_123000000000015]) MIQ(MiqProvisionVmware#log_clone_options) Prov Options: [:dest_storage][1](String) = "nfs-freenas01"

[----] I, [2015-11-10T14:47:26.916368 #2525:367e8c]  INFO -- : Q-task_id([miq_provision_123000000000016]) MIQ(MiqProvisionVmware#log_clone_options) Destination Datastore:      [nfs-freenas01 (datastore-202)]
[----] I, [2015-11-10T14:47:26.930504 #2525:367e8c]  INFO -- : Q-task_id([miq_provision_123000000000016]) MIQ(MiqProvisionVmware#log_clone_options) Prov Options: [:placement_ds_name][1](String) = "nfs-freenas01"
[----] I, [2015-11-10T14:47:26.936400 #2525:367e8c]  INFO -- : Q-task_id([miq_provision_123000000000016]) MIQ(MiqProvisionVmware#log_clone_options) Prov Options: [:dest_storage][1](String) = "nfs-freenas01"

 
dc-vcenter most recently refreshed
{:name=>"r5c31g_local", :id=>123000000000006, :ems_ref=>"datastore-7084", :ext_management_systems=>[{:name=>"vsanvcenter"}]}
{:name=>"r1b41g_local", :id=>123000000000007, :ems_ref=>"datastore-7093", :ext_management_systems=>[{:name=>"vsanvcenter"}]}
{:name=>"r5c41g_local", :id=>123000000000008, :ems_ref=>"datastore-7094", :ext_management_systems=>[{:name=>"vsanvcenter"}]}
{:name=>"r6b41g_local", :id=>123000000000009, :ems_ref=>"datastore-7095", :ext_management_systems=>[{:name=>"vsanvcenter"}]}
{:name=>"nfs-freenas01", :id=>123000000000001, :ems_ref=>"datastore-202", :ext_management_systems=>[{:name=>"dc-vcenter"}, {:name=>"vsanvcenter"}]}
{:name=>"local-dc-esxi01", :id=>123000000000004, :ems_ref=>"datastore-43", :ext_management_systems=>[{:name=>"dc-vcenter"}]}
{:name=>"local-dc-esxi02", :id=>123000000000003, :ems_ref=>"datastore-46", :ext_management_systems=>[{:name=>"dc-vcenter"}]}
{:name=>"local-dc-esxi03", :id=>123000000000002, :ems_ref=>"datastore-48", :ext_management_systems=>[{:name=>"dc-vcenter"}]}
{:name=>"nfs0", :id=>123000000000005, :ems_ref=>"datastore-1483", :ext_management_systems=>[{:name=>"vsanvcenter"}]}
{:name=>"vsanDatastore", :id=>123000000000011, :ems_ref=>"datastore-7200", :ext_management_systems=>[{:name=>"vsanvcenter"}]}
{:name=>"r6c41g_local", :id=>123000000000010, :ems_ref=>"datastore-7096", :ext_management_systems=>[{:name=>"vsanvcenter"}]}
{:name=>"r6b21g_local", :id=>123000000000012, :ems_ref=>"datastore-7630", :ext_management_systems=>[{:name=>"vsanvcenter"}]}

vsanvcenter most recently refreshed
{:name=>"local-dc-esxi01", :id=>123000000000004, :ems_ref=>"datastore-43", :ext_management_systems=>[{:name=>"dc-vcenter"}]}
{:name=>"local-dc-esxi02", :id=>123000000000003, :ems_ref=>"datastore-46", :ext_management_systems=>[{:name=>"dc-vcenter"}]}
{:name=>"local-dc-esxi03", :id=>123000000000002, :ems_ref=>"datastore-48", :ext_management_systems=>[{:name=>"dc-vcenter"}]}
{:name=>"r5c31g_local", :id=>123000000000006, :ems_ref=>"datastore-7084", :ext_management_systems=>[{:name=>"vsanvcenter"}]}
{:name=>"r1b41g_local", :id=>123000000000007, :ems_ref=>"datastore-7093", :ext_management_systems=>[{:name=>"vsanvcenter"}]}
{:name=>"r5c41g_local", :id=>123000000000008, :ems_ref=>"datastore-7094", :ext_management_systems=>[{:name=>"vsanvcenter"}]}
{:name=>"r6b41g_local", :id=>123000000000009, :ems_ref=>"datastore-7095", :ext_management_systems=>[{:name=>"vsanvcenter"}]}
{:name=>"nfs-freenas01", :id=>123000000000001, :ems_ref=>"datastore-9448", :ext_management_systems=>[{:name=>"dc-vcenter"}, {:name=>"vsanvcenter"}]}
{:name=>"nfs0", :id=>123000000000005, :ems_ref=>"datastore-1483", :ext_management_systems=>[{:name=>"vsanvcenter"}]}
{:name=>"vsanDatastore", :id=>123000000000011, :ems_ref=>"datastore-7200", :ext_management_systems=>[{:name=>"vsanvcenter"}]}
{:name=>"r6c41g_local", :id=>123000000000010, :ems_ref=>"datastore-7096", :ext_management_systems=>[{:name=>"vsanvcenter"}]}
{:name=>"r6b21g_local", :id=>123000000000012, :ems_ref=>"datastore-7630", :ext_management_systems=>[{:name=>"vsanvcenter"}]}

If the wrong ems_ref is used during a provision, the following error occurs.

[----] E, [2015-11-10T11:37:18.764971 #2528:89be94] ERROR -- : Q-task_id([miq_provision_123000000000002]) MIQ(MiqProvisionVmware#provision_error) [[Handsoap::Fault]: Handsoap::Fault { :code => 'ServerFaultCode', :reason => 'The object has already been deleted or has not been completely created' }] encountered during phase [start_clone_task]


Version-Release number of selected component (if applicable): 5.4.2


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:
Comment 2 Greg Blomquist 2016-02-19 09:44:49 EST
Adam, I think we had been waiting on some feedback from John Hardy for some reason on this bug.

See what you can make of it.

There's a downstream version of this bug as well that I'll assign your way, too.
Comment 3 Adam Grare 2016-02-22 16:12:19 EST
Greg, the problem is that storage is linked to multiple ext_management_systems but only gets one ems_ref.  Automate is then using whatever the ems_ref was set to last.

We are using the datastore.summary.url as the unique identifier for storage, so even though the datastores have different MORs they are linked up to the same storage record.

To fix this will require using a new field for the unique identifier, or storing the different MORs for the same storage somewhere besides the storage record.
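The diagnosis above can be sketched in plain Ruby (a minimal stand-in, not actual ManageIQ code; the Struct, the URL value, and the refresh helper are all illustrative):

```ruby
# Minimal sketch of the diagnosis above: storage records are keyed by
# datastore.summary.url, so both vCenters map onto ONE record, and each
# refresh overwrites that record's single ems_ref field.
# Plain-Ruby stand-ins; the URL and Struct are illustrative, not real schema.
Storage = Struct.new(:name, :url, :ems_ref)

def refresh_storage(storages, url:, name:, mor:)
  record = storages[url] ||= Storage.new(name, url, nil)
  record.ems_ref = mor # last refresh wins; the other vCenter's MOR is lost
  record
end

storages = {} # keyed by datastore.summary.url, as described above
url = "nfs://freenas01/mnt/vol0" # illustrative value for the shared URL

# dc-vcenter refreshed first, then vsanvcenter (MORs from the dumps above)
refresh_storage(storages, url: url, name: "nfs-freenas01", mor: "datastore-202")
record = refresh_storage(storages, url: url, name: "nfs-freenas01", mor: "datastore-9448")

# One shared record remains, and its ems_ref is only valid on the
# vCenter that was refreshed last.
puts storages.size  # => 1
puts record.ems_ref # => datastore-9448
```

Automate then reads whichever MOR survived the most recent refresh, which fails on the other vCenter.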
Comment 7 CFME Bot 2016-07-12 00:41:33 EDT
New commit detected on ManageIQ/manageiq/master:
https://github.com/ManageIQ/manageiq/commit/61b44b7458fc102137d2d6a58220f4980d7e163e

commit 61b44b7458fc102137d2d6a58220f4980d7e163e
Author:     Adam Grare <agrare@redhat.com>
AuthorDate: Tue Jul 5 16:00:03 2016 -0400
Commit:     Adam Grare <agrare@redhat.com>
CommitDate: Fri Jul 8 17:15:49 2016 -0400

    Use host_storage ems_ref for datastore
    
    https://bugzilla.redhat.com/show_bug.cgi?id=1280402

 .../manageiq/providers/vmware/infra_manager/provision/cloning.rb      | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
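Going by the commit title only (the actual diff is not reproduced here), the fix resolves the MOR through the host-to-storage link for the destination host instead of the shared storage record; a hedged sketch with illustrative structs and data:

```ruby
# Hedged sketch of the approach named in the commit title ("Use
# host_storage ems_ref for datastore"): look the MOR up on the
# host<->storage link of the destination host, so a datastore shared
# across vCenters resolves to the MOR valid on that host's vCenter.
# Struct names and sample rows are illustrative, not ManageIQ's schema.
HostStorage = Struct.new(:host_name, :storage_name, :ems_ref)

HOST_STORAGES = [
  HostStorage.new("dc-esxi01",  "nfs-freenas01", "datastore-202"),
  HostStorage.new("vsan-esx01", "nfs-freenas01", "datastore-9448"),
].freeze

def datastore_ems_ref(dest_host, storage_name)
  link = HOST_STORAGES.find do |hs|
    hs.host_name == dest_host && hs.storage_name == storage_name
  end
  link && link.ems_ref
end

puts datastore_ems_ref("dc-esxi01", "nfs-freenas01")  # => datastore-202
puts datastore_ems_ref("vsan-esx01", "nfs-freenas01") # => datastore-9448
```

Each host sees only one MOR for the datastore, so keying on the destination host sidesteps the single-ems_ref collision entirely.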
Comment 8 CFME Bot 2016-08-30 11:30:54 EDT
New commit detected on ManageIQ/manageiq/darga:
https://github.com/ManageIQ/manageiq/commit/53ce79073c5a5ce84a293715a59d10fc441b762d

commit 53ce79073c5a5ce84a293715a59d10fc441b762d
Author:     Adam Grare <agrare@redhat.com>
AuthorDate: Tue Jul 5 16:00:03 2016 -0400
Commit:     Adam Grare <agrare@redhat.com>
CommitDate: Thu Aug 25 08:48:34 2016 -0400

    Use host_storage ems_ref for datastore
    
    https://bugzilla.redhat.com/show_bug.cgi?id=1280402

 .../manageiq/providers/vmware/infra_manager/provision/cloning.rb      | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
