Bug 1469458

Summary: Adding disks to RHV miq_provision object fails when duplicate datastore names exist
Product: Red Hat CloudForms Management Engine
Reporter: Dustin Scott <dscott>
Component: Providers
Assignee: Oved Ourfali <oourfali>
Status: CLOSED DUPLICATE
QA Contact: Dave Johnson <dajohnso>
Severity: high
Docs Contact:
Priority: unspecified
Version: 5.7.0
CC: dberger, dscott, gblomqui, jfrey, jhardy, masayag, obarenbo
Target Milestone: GA
Target Release: cfme-future
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-07-11 13:21:36 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: RHEVM
Target Upstream Version:
Embargoed:

Description Dustin Scott 2017-07-11 10:05:01 UTC
Description of problem:

When provisioning to multiple RHV providers and attempting to add additional disks, a :datastore name is required in the :disk_scsi miq_provision option.  The following snippet of code works with a single RHV provider, or with multiple RHV providers, as long as the target datastore name is unique across the providers:

        new_disks = [
          {
            :bus => 0,   # not sure if applicable, but works...taken from vmware method
            :pos => 1,   # not sure if applicable, but works...taken from vmware method
            :sizeInMB => data_disk_size_mb,
            :datastore => datastore_name
          }
        ]  
        $evm.log(:info, "Inspecting new_disks: #{new_disks.inspect}")
        
        # set the provisioning option
        $evm.root['miq_provision'].set_option(:disk_scsi, new_disks)

The problem lies with line 42 of /var/www/miq/vmdb/app/models/manageiq/providers/redhat/infra_manager/provision/disk.rb:

  storage = Storage.find_by(:name => storage_name)

which returns only the first storage object matching the name.  If there are 2 RHV providers (RHVM1, RHVM2) and both have a datastore with the same name, the wrong datastore can be selected, depending on which provider the VM is being provisioned to.  When the incorrect storage object is selected, the following error is produced and the provision is aborted:

[----] I, [2017-07-10T17:56:30.669529 #12387:7798a18]  INFO -- : Q-task_id([service_template_provision_task_10000000001049]) <AutomationEngine> Calling Create Notification type: automate_user_error subject type: MiqRequest id: 10000000000445 options: {:message=>"Service Provision Error: Server [dallpcfinf03] Service [Deploy Splunk Server - Dallas-20170710-174554] Step [checkprovisioned] Status [[OvirtSDK4::Error]: Fault reason is \"Operation Failed\". Fault detail is \"Entity not found: d915b633-85f6-4ea2-b2f9-4dfb6aecf6db\". HTTP response code is 404.] Message [[OvirtSDK4::Error]: Fault reason is \"Operation Failed\". Fault detail is \"Entity not found: d915b633-85f6-4ea2-b2f9-4dfb6aecf6db\". HTTP response code is 404.] "}


Here are some snippets from the Rails console showing the problem (notice the UUID matches the one in the error above):

ONE DATASTORE RETURNED WHICH MATCHES THE NAME:
=======================
irb(main):007:0> storage = Storage.find_by(:name => 'data').ems_ref
=> "/api/storagedomains/d915b633-85f6-4ea2-b2f9-4dfb6aecf6db"

MULTIPLE DATASTORES ACTUALLY EXIST WHICH MATCH THE NAME:
===================================
irb(main):008:0> storages = Storage.where(:name => 'data')
=> #<ActiveRecord::Relation [#<Storage id: 10000000000001, name: "data", store_type: "NFS", total_space: 5546950262784, free_space: 4953171034112, created_on: "2017-05-17 16:31:11", updated_on: "2017-07-11 09:47:35", multiplehostaccess: 1, location: "hcfncs01n02.na.xom.com:/hoelprvinf_data", last_scan_on: nil, uncommitted: 1757715365888, last_perf_capture_on: "2017-07-11 09:00:00", ems_ref_obj: "--- \"/api/storagedomains/d915b633-85f6-4ea2-b2f9-4...", directory_hierarchy_supported: nil, thin_provisioning_supported: nil, raw_disk_mappings_supported: nil, master: true, ems_ref: "/api/storagedomains/d915b633-85f6-4ea2-b2f9-4dfb6a...", storage_domain_type: "data">, #<Storage id: 10000000000010, name: "data", store_type: "GLUSTERFS", total_space: 7694433910784, free_space: 7626788175872, created_on: "2017-05-17 20:38:15", updated_on: "2017-07-11 09:01:46", multiplehostaccess: 1, location: "dallprvinf01.na.xom.com:/data", last_scan_on: nil, uncommitted: 7525856444416, last_perf_capture_on: "2017-07-11 09:00:00", ems_ref_obj: "--- \"/api/storagedomains/a1f8d8f3-9a0c-4c71-bd03-d...", directory_hierarchy_supported: nil, thin_provisioning_supported: nil, raw_disk_mappings_supported: nil, master: false, ems_ref: "/api/storagedomains/a1f8d8f3-9a0c-4c71-bd03-dac618...", storage_domain_type: "data">]>
irb(main):009:0> storages.length
=> 2
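The two console results above can be reduced to a minimal, plain-Ruby sketch (no ActiveRecord, so the StorageRecord struct, the ems_id values, and the truncated second UUID are illustrative stand-ins for the real records): a name-only lookup returns whichever record happens to come first, while scoping the lookup to the destination provider removes the ambiguity.

```ruby
# Stand-in for the Storage model; ems_id identifies the owning provider.
StorageRecord = Struct.new(:name, :ems_id, :ems_ref)

# Two datastores named 'data', one per provider (as in the irb output above).
storages = [
  StorageRecord.new('data', 1, '/api/storagedomains/d915b633-85f6-4ea2-b2f9-4dfb6aecf6db'),
  StorageRecord.new('data', 2, '/api/storagedomains/a1f8d8f3-9a0c-4c71-bd03-...'),
]

# Name-only lookup, analogous to Storage.find_by(:name => storage_name):
# the first match wins, regardless of the destination provider.
by_name = storages.find { |s| s.name == 'data' }

# Provider-scoped lookup: filter by the destination provider before matching
# the name, so a duplicate name on another provider cannot be picked.
dest_ems_id = 2
scoped = storages.find { |s| s.ems_id == dest_ems_id && s.name == 'data' }
```

With the name-only lookup, `by_name` belongs to provider 1 even when the VM is destined for provider 2; the scoped lookup picks the right record.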


Version-Release number of selected component (if applicable):

cfme-5.7.2.1-1.el7cf.x86_64


How reproducible:

Intermittent.  The Storage lookup returns a datastore that matches the name, but it may belong to a different provider than the one the provision was requested on.


Steps to Reproduce:
1. Add RHVM1 provider to CFME
2. Add RHVM2 provider to CFME
3. Add datastore named 'data' to RHVM1
4. Add datastore named 'data' to RHVM2
5. In the /Infrastructure/VM/Provisioning/StateMachines/Methods/redhat_PreProvision method, insert this code:

        new_disks = [
          {
            :bus => 0,   # not sure if applicable, but works...taken from vmware method
            :pos => 1,   # not sure if applicable, but works...taken from vmware method
            :sizeInMB => data_disk_size_mb,
            :datastore => datastore_name
          }
        ]  
        $evm.log(:info, "Inspecting new_disks: #{new_disks.inspect}")
        
        # set the provisioning option
        $evm.root['miq_provision'].set_option(:disk_scsi, new_disks)

6. Create a catalog item to provision to RHVM1
7. Create a catalog item to provision to RHVM2
8. Provision to RHVM1
9. Provision to RHVM2

Actual results:

Observe that one of the provisions fails because it cannot locate the correct storage.  The other provision succeeds and has an additional 1GB disk attached to it.


Expected results:

Both provisions succeed and have additional disks associated.


Additional info:

A workaround is to wait until after the provision completes and then use the VM object's add_disk method:

  $evm.root['vm'].add_disk('data', 1024, :sync => true)

The concern with this workaround is that the VM is already powered on, so the new disk might not appear in the OS without a reboot.  With the Red Hat-provided rhel-guest-image the disk currently does appear, but future RHV or RHEL patches may change this behavior.  If the disk is instead added during provisioning, it is guaranteed to be present in the OS regardless of future RHV/RHEL patches.
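For anyone wanting to experiment with the workaround's call shape outside Automate (where $evm.root['vm'] is not available), here is a minimal sketch: FakeVm is a hypothetical stand-in that only mimics the add_disk(datastore_name, size_mb, options) signature used above, not the real MiqAeServiceVm implementation.

```ruby
# Hypothetical stand-in for the $evm.root['vm'] service model object; it
# records the requested disk instead of calling the provider API.
class FakeVm
  attr_reader :disks

  def initialize
    @disks = []
  end

  # Mimics the add_disk(datastore_name, size_mb, options) signature only.
  def add_disk(datastore_name, size_mb, options = {})
    @disks << { :datastore => datastore_name,
                :size_mb   => size_mb,
                :sync      => options.fetch(:sync, false) }
  end
end

vm = FakeVm.new
# Same call shape as the workaround: a 1024 MB disk on 'data', synchronous.
vm.add_disk('data', 1024, :sync => true)
```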

Comment 2 Dustin Scott 2017-07-11 10:07:43 UTC
The example above omitted the definitions of some variables (I was trying to simplify).  Please use the following corrected snippet instead to reproduce:

        new_disks = [
          {
            :bus => 0,   # not sure if applicable, but works...taken from vmware method
            :pos => 1,   # not sure if applicable, but works...taken from vmware method
            :sizeInMB  => 1024,
            :datastore => 'data'
          }
        ]  
        $evm.log(:info, "Inspecting new_disks: #{new_disks.inspect}")
        
        # set the provisioning option
        $evm.root['miq_provision'].set_option(:disk_scsi, new_disks)

Comment 5 Oved Ourfali 2017-07-11 13:21:36 UTC

*** This bug has been marked as a duplicate of bug 1462774 ***