Bug 1607408 - VM getting provisioned with the wrong storage.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Satellite
Classification: Red Hat
Component: Compute Resources - VMWare
Version: 6.3.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: 6.6.0
Assignee: Ondřej Ezr
QA Contact: Sanket Jagtap
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-07-23 13:13 UTC by Suraj Patil
Modified: 2019-10-22 19:47 UTC
CC: 12 users

Fixed In Version: tfm-rubygem-fog-vsphere-3.1.1-1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-10-22 19:47:34 UTC
Target Upstream Version:
Embargoed:


Attachments
VM present in both clusters (102.88 KB, image/png), 2019-07-18 14:25 UTC, Sanket Jagtap


Links
System ID Private Priority Status Summary Last Updated
Foreman Issue Tracker 25013 0 Normal Closed VM getting provisioned with the wrong storage. 2020-06-11 17:25:24 UTC
Github fog fog-vsphere pull 213 0 None closed Fixes #212 - enable multiple storage clusters 2020-06-11 17:25:24 UTC
Red Hat Bugzilla 1419060 0 medium CLOSED [RFE] Auto selection of datastore while provisioning the host in VMware. 2021-12-10 14:53:40 UTC

Internal Links: 1419060

Description Suraj Patil 2018-07-23 13:13:34 UTC
Description of problem:

When creating a VM on VMware, if we add two storage volumes and select a different datastore cluster for each of them, the VM is created only on the first cluster (the one selected for the first volume).

If we instead select datastores directly from the two different datastore clusters, the VM is created successfully.


How reproducible:
Create a new host.

Steps to Reproduce:
1. Deploy on a VMware compute resource.
2. Go to the Virtual Machine tab and add two storage volumes (a sketch of the resulting volume attributes follows these steps).
3. For the first volume, select any datastore cluster.
4. For the second volume, select a datastore cluster different from the first.
5. Create the VM.
6. In the vSphere client, check the storage configuration of the created VM. This can also be cross-checked by clicking Edit on the VM in Satellite.
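
For reference, the volume attributes these steps produce look roughly like the following minimal Ruby sketch, modeled on the "scsi_controllers" parameter captured in the comment 15 log below; the cluster names "iSCSI-Cluster" and "TestStorageCluster" are simply the ones that appear in that log:

require 'json'

# Volume attributes submitted by the host form, mirroring the logged
# "scsi_controllers" parameter in comment 15. Each volume names a
# different datastore cluster via "storagePod".
scsi_controllers = {
  "scsiControllers" => [
    { "type" => "VirtualLsiLogicController", "key" => 1000 }
  ],
  "volumes" => [
    { "name" => "Hard disk", "mode" => "persistent", "controllerKey" => 1000,
      "sizeGb" => 10, "thin" => true,
      "storagePod" => "iSCSI-Cluster", "datastore" => nil },
    { "name" => "Hard disk", "mode" => "persistent", "controllerKey" => 1000,
      "sizeGb" => 1, "thin" => false, "eagerZero" => false,
      "storagePod" => "TestStorageCluster", "datastore" => nil }
  ]
}

puts JSON.pretty_generate(scsi_controllers)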

Actual results:
The VM is created on only one datastore cluster.

Expected results:
The VM should have two storage volumes on two different datastore clusters.

Additional info:

This is caused by the API call (JSON packet) being built incorrectly: only the datastore cluster ("storage_pod") should be sent in the API call, but the datastore is also forwarded with the packet (a sketch of the distinction follows the links below). Below is the related upstream bug.

https://projects.theforeman.org/issues/19311

Also, related to - https://bugzilla.redhat.com/show_bug.cgi?id=1489516
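
To illustrate the point above, here is a minimal Ruby sketch of the intended payload handling. The sanitize_volume helper is hypothetical, not actual Foreman/fog-vsphere code:

# Hypothetical helper, not the actual implementation: when a volume
# targets a datastore cluster ("storagePod"), the "datastore" key
# should not be sent at all, so that Storage DRS can choose a
# datastore inside that cluster.
def sanitize_volume(volume)
  vol = volume.dup
  vol.delete("datastore") unless vol["storagePod"].to_s.empty?
  vol
end

broken = { "sizeGb" => 10, "storagePod" => "iSCSI-Cluster", "datastore" => nil }
puts sanitize_volume(broken).inspect
# prints {"sizeGb"=>10, "storagePod"=>"iSCSI-Cluster"}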

Comment 9 Marek Hulan 2018-09-24 06:43:05 UTC
Created redmine issue https://projects.theforeman.org/issues/25013 from this bug

Comment 11 Bryan Kearney 2019-01-21 09:07:35 UTC
Upstream bug assigned to chrobert

Comment 12 Bryan Kearney 2019-01-30 15:07:55 UTC
Upstream bug assigned to oezr

Comment 15 Sanket Jagtap 2019-03-22 08:26:16 UTC
Build: Satellite snap20 


Parameters: {"utf8"=>"✓", "authenticity_token"=>"X4P5P2J/FVcdnokiv6bsCC0pCmzTz6JreHKOVII9wA2zhFLF5VbTxoCg7GTdLaD8MKHaZyDzkA99T6TtLpIKoA==", "host"=>{"name"=>"carla-dinham", "organization_id"=>"1", "location_id"=>"2", "hostgroup_id"=>"2", "compute_resource_id"=>"3", "content_facet_attributes"=>{"lifecycle_environment_id"=>"2", "content_view_id"=>"2", "content_source_id"=>"1", "kickstart_repository_id"=>"19"}, "ansible_role_ids"=>[""], "puppetclass_ids"=>[""], "managed"=>"true", "progress_report_id"=>"[FILTERED]", "type"=>"Host::Managed", "interfaces_attributes"=>{"0"=>{"_destroy"=>"0", "type"=>"Nic::Managed", "mac"=>"", "identifier"=>"", "name"=>"carla-dinham", "domain_id"=>"1", "subnet_id"=>"2", "ip"=>"10.8.", "ip6"=>"", "managed"=>"1", "primary"=>"1", "provision"=>"1", "execution"=>"1", "virtual"=>"0", "tag"=>"", "attached_to"=>"", "compute_attributes"=>{"type"=>"VirtualVmxnet3", "network"=>"network-152"}}}, "compute_attributes"=>{"cpus"=>"1", "corespersocket"=>"1", "memory_mb"=>"2048", "firmware"=>"bios", "cluster"=>"Satellite_Engineering", "resource_pool"=>"Resources", "path"=>"/Datacenters/RH_Engineering/vm", "guest_id"=>"otherGuest", "hardware_version"=>"Default", "memoryHotAddEnabled"=>"0", "cpuHotAddEnabled"=>"0", "add_cdrom"=>"0", "start"=>"1", "annotation"=>"", "scsi_controllers"=>"{\"scsiControllers\":[{\"type\":\"VirtualLsiLogicController\",\"key\":1000}],\"volumes\":[{\"thin\":true,\"name\":\"Hard disk\",\"mode\":\"persistent\",\"controllerKey\":1000,\"size\":10485760,\"sizeGb\":10,\"storagePod\":\"iSCSI-Cluster\",\"datastore\":null},{\"thin\":false,\"name\":\"Hard disk\",\"mode\":\"persistent\",\"controllerKey\":1000,\"size\":1048576,\"sizeGb\":1,\"datastore\":null,\"storagePod\":\"TestStorageCluster\",\"eagerZero\":false,\"eagerzero\":false}]}"}, "architecture_id"=>"1", "operatingsystem_id"=>"1", "provision_method"=>"build", "build"=>"1", "medium_id"=>"", "ptable_id"=>"98", "pxe_loader"=>"PXELinux BIOS", "disk"=>"", "root_pass"=>"[FILTERED]", "is_owned_by"=>"4-Users", "enabled"=>"1", "comment"=>"", "overwrite"=>"false"}, "media_selector"=>"synced_content"}
2019-03-22T04:16:57 [I|app|d10d84da] Current user set to admin (admin)
2019-03-22T04:16:57 [I|app|d10d84da] Adding Compute instance for carla-dinham*

I see in the production logs that both volumes are being sent, but on the VMware end only the first cluster is still used for creating the two volumes. This can also be observed when editing the host from Satellite.

Comment 17 Xin Guo 2019-05-29 14:00:15 UTC
Hi,

This is Xin, CSM for IKEA in Sweden; good to know the product team here.
I received an escalation directly from the customer, and they are checking the status of this bug. It is very critical for their deployment and has been the only showstopper for quite a long time. Could you please provide an update? We need to communicate it to the customer as soon as possible.
Thanks for your help.
PS: related to case 02311449.

Cheers,
Xin

Comment 20 Ondřej Ezr 2019-06-17 08:29:29 UTC
This has been fixed: provisioning across multiple DRS-enabled storage clusters is now possible with fog-vsphere 3.1.1, which is enabled for packaging upstream in Foreman 1.23.
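
Conceptually, the fix means each volume's storage pod is honored individually instead of all disks landing on the first volume's cluster. A minimal Ruby sketch of that idea, with a hypothetical volumes_by_pod helper (the real change is in the fog-vsphere pull request #213 linked above, not this code):

# Hypothetical sketch, not the actual fog-vsphere 3.1.1 code: group
# volumes by their storage pod so each DRS-enabled datastore cluster
# gets its own placement, rather than reusing the first volume's pod.
def volumes_by_pod(volumes)
  volumes.group_by { |v| v["storagePod"] }
end

volumes = [
  { "sizeGb" => 10, "storagePod" => "TestDatastoreCluster" },
  { "sizeGb" => 5, "storagePod" => "iSCSI-Cluster" }
]

volumes_by_pod(volumes).each do |pod, vols|
  # one placement request per datastore cluster
  puts "#{pod}: #{vols.map { |v| v['sizeGb'] }.join(', ')} GB"
end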

Comment 22 Sanket Jagtap 2019-07-18 14:12:54 UTC
Build: Satellite 6.6 snap 11

1) Created a VM
2) Used two different datastore clusters for the two volumes (hard disks)

Provisioning was successful.

Logs:
2019-07-18T09:58:11 [I|app|3db6b219]   Parameters: {"utf8"=>"✓", "authenticity_token"=>"l5fULpTRobvO82obO009AEua31dgeJXyJjoQjArR2swUGrTChqUukxA0e9X//PQx0kHIm2nvlEH84nMWkId0ng==", "host"=>{"name"=>"tim-mozee", "organization_id"=>"1", "location_id"=>"2", "hostgroup_id"=>"2", "compute_resource_id"=>"2", "content_facet_attributes"=>{"lifecycle_environment_id"=>"4", "content_view_id"=>"10", "content_source_id"=>"1", "kickstart_repository_id"=>"20"}, "openscap_proxy_id"=>"1", "puppetclass_ids"=>[""], "managed"=>"true", "progress_report_id"=>"[FILTERED]", "type"=>"Host::Managed", "interfaces_attributes"=>{"0"=>{"_destroy"=>"0", "type"=>"Nic::Managed", "mac"=>"", "identifier"=>"", "name"=>"tim-mozee", "domain_id"=>"1", "subnet_id"=>"1", "ip"=>"0.0.215.224", "ip6"=>"", "managed"=>"1", "primary"=>"1", "provision"=>"1", "execution"=>"1", "virtual"=>"0", "tag"=>"", "attached_to"=>"", "compute_attributes"=>{"type"=>"VirtualE1000", "network"=>"dvportgroup-647"}}}, "compute_attributes"=>{"cpus"=>"1", "corespersocket"=>"1", "memory_mb"=>"4048", "firmware"=>"automatic", "cluster"=>"Satellite_Engineering", "resource_pool"=>"Resources", "path"=>"/Datacenters/RH_Engineering/vm", "guest_id"=>"otherGuest", "hardware_version"=>"Default", "memoryHotAddEnabled"=>"0", "cpuHotAddEnabled"=>"0", "add_cdrom"=>"0", "start"=>"1", "annotation"=>"", "scsi_controllers"=>"{\"scsiControllers\":[{\"type\":\"VirtualLsiLogicController\",\"key\":1000}],\"volumes\":[{\"thin\":false,\"name\":\"Hard disk\",\"mode\":\"persistent\",\"controllerKey\":1000,\"size\":10485760,\"sizeGb\":10,\"storagePod\":\"TestDatastoreCluster\",\"datastore\":null},{\"sizeGb\":5,\"datastore\":null,\"storagePod\":\"iSCSI-Cluster\",\"thin\":false,\"eagerZero\":false,\"name\":\"Hard disk\",\"mode\":\"persistent\",\"controllerKey\":1000}]}"}, "architecture_id"=>"1", "operatingsystem_id"=>"1", "provision_method"=>"build", "build"=>"1", "medium_id"=>"", "ptable_id"=>"97", "pxe_loader"=>"PXELinux BIOS", "disk"=>"", "root_pass"=>"[FILTERED]", "is_owned_by"=>"4-Users", "enabled"=>"1", "model_id"=>"", "comment"=>"", "overwrite"=>"false"}, "media_selector"=>"synced_content"}
2019-07-18T09:58:41 [I|app|9e9f08c5]   Parameters: {"utf8"=>"✓", "authenticity_token"=>"kdIX/CVbgBwq0U5SUZCtaL4C72aaAU7lUYSWm5GTg0kSX3cQNy8PNPQWX5yVIWRZJ9n4qpOWT1aLXPUBC8UtGw==", "host"=>{"name"=>"tim-mozee", "organization_id"=>"1", "location_id"=>"2", "hostgroup_id"=>"2", "compute_resource_id"=>"2", "content_facet_attributes"=>{"lifecycle_environment_id"=>"4", "content_view_id"=>"10", "content_source_id"=>"1", "kickstart_repository_id"=>"20"}, "openscap_proxy_id"=>"1", "puppetclass_ids"=>[""], "managed"=>"true", "progress_report_id"=>"[FILTERED]", "type"=>"Host::Managed", "interfaces_attributes"=>{"0"=>{"_destroy"=>"0", "type"=>"Nic::Managed", "mac"=>"", "identifier"=>"", "name"=>"tim-mozee", "domain_id"=>"1", "subnet_id"=>"1", "ip"=>"0.0.215.46", "ip6"=>"", "managed"=>"1", "primary"=>"1", "provision"=>"1", "execution"=>"1", "virtual"=>"0", "tag"=>"", "attached_to"=>"", "compute_attributes"=>{"type"=>"VirtualVmxnet3", "network"=>"dvportgroup-680"}}}, "compute_attributes"=>{"cpus"=>"1", "corespersocket"=>"1", "memory_mb"=>"4048", "firmware"=>"bios", "cluster"=>"Satellite_Engineering", "resource_pool"=>"Resources", "path"=>"/Datacenters/RH_Engineering/vm", "guest_id"=>"otherGuest", "hardware_version"=>"Default", "memoryHotAddEnabled"=>"0", "cpuHotAddEnabled"=>"0", "add_cdrom"=>"0", "start"=>"1", "annotation"=>"", "scsi_controllers"=>"{\"scsiControllers\":[{\"type\":\"VirtualLsiLogicController\",\"key\":1000}],\"volumes\":[{\"thin\":false,\"name\":\"Hard disk\",\"mode\":\"persistent\",\"controllerKey\":1000,\"size\":10485760,\"sizeGb\":10,\"storagePod\":\"TestDatastoreCluster\",\"datastore\":null},{\"thin\":false,\"name\":\"Hard disk\",\"mode\":\"persistent\",\"controllerKey\":1000,\"size\":5242880,\"sizeGb\":5,\"datastore\":null,\"storagePod\":\"iSCSI-Cluster\",\"eagerZero\":false}]}", "image_id"=>""}, "architecture_id"=>"1", "operatingsystem_id"=>"1", "provision_method"=>"build", "build"=>"1", "medium_id"=>"", "ptable_id"=>"97", "pxe_loader"=>"PXELinux BIOS", "disk"=>"", "root_pass"=>"[FILTERED]", "is_owned_by"=>"4-Users", "enabled"=>"1", "comment"=>"", "overwrite"=>"false"}, "media_selector"=>"synced_content"}

Please find the attachments.

Comment 23 Sanket Jagtap 2019-07-18 14:25:38 UTC
Created attachment 1591814 [details]
VM present in both clusters

Comment 26 Bryan Kearney 2019-10-22 19:47:34 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2019:3172

