Bug 1750758 - Disk backing for VMWare VMs is not clear to the user
Summary: Disk backing for VMWare VMs is not clear to the user
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Satellite
Classification: Red Hat
Component: Compute Resources - VMWare
Version: 6.6.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: Unspecified
Assignee: satellite6-bugs
QA Contact: Lukáš Hellebrandt
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-09-10 12:40 UTC by Jitendra Yejare
Modified: 2023-12-15 16:45 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-03-18 19:44:22 UTC
Target Upstream Version:
Embargoed:


Attachments
Before and After mismatch in Datastore/Cluster selection (67.59 KB, application/zip)
2019-09-10 12:44 UTC, Jitendra Yejare


Links
System ID | Priority | Status | Summary | Last Updated
Foreman Issue Tracker 29848 | Normal | New | Disks on VMWare VMs can't be provisioned to all possible variations offered by UI | 2020-09-02 16:22:50 UTC
Red Hat Bugzilla 1746175 | unspecified | CLOSED | Adding a 2nd disk type of storage_pod/datastore_cluster fails to create vm | 2021-02-22 00:41:40 UTC

Description Jitendra Yejare 2019-09-10 12:40:41 UTC
Description of problem:

A VMWare VM is provisioned, but its disks are assigned from a datastore other than the one selected during provisioning.
This also affects the iSCSI cluster storage pod: the disk that should come from the storage pod is instead allocated from a datastore.

Steps to Reproduce:
----------------

1. Attempt to provision a VM on VMWare.
2. Select 1 disk from DataStore.
3. Select another from Storage Pod - iSCSI cluster.
4. Provision the VM.
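
For reference, the same mixed disk selection can be expressed outside the UI. Below is a minimal, hypothetical Python sketch of reproducing the report through the Satellite API; the endpoint, credentials, and the volumes_attributes keys ("datastore", "storage_pod") are assumptions based on the VMWare compute resource form, not details confirmed by this bug.

# Hypothetical reproduction sketch via the Foreman/Satellite REST API.
# All names below (URL, credentials, field names) are assumptions.
import requests

SATELLITE = "https://satellite.example.com"   # hypothetical Satellite URL
AUTH = ("admin", "changeme")                  # hypothetical credentials

host_payload = {
    "host": {
        "name": "vmware-disk-test",
        "compute_resource_id": 1,             # the VMWare compute resource
        "compute_attributes": {
            "volumes_attributes": {
                # disk 1: explicitly placed on a single datastore
                "0": {"size_gb": 10, "datastore": "iscsi-datastore-1"},
                # disk 2: placed on the iSCSI cluster storage pod instead
                "1": {"size_gb": 10, "storage_pod": "iscsi-cluster-pod"},
            }
        },
    }
}

# After provisioning, the report says both disks end up on a datastore that
# was not selected, and the storage pod selection is ignored.
resp = requests.post(f"{SATELLITE}/api/hosts", json=host_payload,
                     auth=AUTH, verify=False)
resp.raise_for_status()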


Actual Behavior:
----------------
1. The VM is provisioned, but both disks are assigned from a datastore other than the one selected during provisioning.
2. This also means the iSCSI cluster storage pod disk is not placed on the storage pod.


Attaching before and after screenshots of the disks selected during provisioning.


Expected Behavior:
---------------------
1. The VM should be provisioned with its disks assigned from the datastore selected during provisioning.
2. The disk should be assigned from the iSCSI cluster storage pod if that was selected.

Note:
-----------
The disk mismatch is visible on the VMWare end as well, so it is not just a display issue in the Satellite UI.

Comment 3 Jitendra Yejare 2019-09-10 12:44:42 UTC
Created attachment 1613574 [details]
Before and After mismatch in Datastore/Cluster selection

Comment 5 Ondřej Ezr 2019-09-10 14:20:45 UTC
The flow for VMWare VMs is that in most cases you either care about which datastore the machine lands on, or you don't and you trust DRS placement (which is what the storagePod is for).
So placing one disk on a datastore and the other on a storagePod doesn't make much sense IMHO. Looking at the vCenter UI, it is almost impossible to set up there.

Therefore I believe it is a Satellite UI issue.
What is actually happening in this case is:

1) One storagePod is selected, so internally we ignore the datastores and act as if all the disks had selected the storagePod; the first storagePod found is used for every disk where no storagePod is selected (only a datastore is selected).
2) A storagePod is only a group of datastores, so it is used to recommend the best datastore to use. From the first step we have all the disks on the one storagePod; we ask for the recommended datastore and then the VM gets created on that storagePod.
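
A minimal Python sketch of the behaviour described in the two points above (the real logic lives in Foreman/fog-vsphere and is written in Ruby; the class, function, and field names here are illustrative assumptions only):

# Illustrative sketch, not the actual Foreman implementation: the first
# storagePod found is applied to every disk, overriding per-disk datastore
# choices, and DRS is then expected to recommend a concrete datastore.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Disk:
    size_gb: int
    datastore: Optional[str] = None     # explicit datastore selected in the UI
    storage_pod: Optional[str] = None   # storagePod selected in the UI


def effective_placement(disks: list[Disk]) -> list[Disk]:
    """Mimic the behaviour described in comment 5, not the real code."""
    # Step 1: if any disk selected a storagePod, that (first found) pod wins
    # for every disk, silently overriding explicit datastore selections.
    first_pod = next((d.storage_pod for d in disks if d.storage_pod), None)
    if first_pod is None:
        return disks
    for d in disks:
        d.storage_pod = first_pod
        d.datastore = None  # the per-disk datastore choice is ignored
    # Step 2: the storagePod is only a group of datastores; DRS would now be
    # asked for a recommended datastore and the VM created there.
    return disks


disks = [Disk(10, datastore="local-datastore-1"),
         Disk(10, storage_pod="iscsi-cluster-pod")]
print(effective_placement(disks))
# Both disks now point at "iscsi-cluster-pod"; the datastore the user picked
# for the first disk is gone, which matches the mismatch reported in this bug.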

This is really confusing and I believe this *BZ should be resolved accordingly* in the Satellite UI:

1) add an option to select one datastore/storagePod for the VM (the VM info file and all the disks),
2) keep the possibility of changing it on a per-disk basis, but disable datastore selection once a storagePod is selected.
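
As an illustration only, the per-disk rule from point 2 could look roughly like the following hypothetical helper (not existing Satellite code; field names are assumptions):

# Hypothetical form-state helper for the proposal in point 2: once any disk
# has a storagePod selected, datastore selection is disabled for all disks.
def datastore_selection_enabled(disks: list[dict]) -> bool:
    """Return False as soon as any disk has a storagePod selected."""
    return not any(d.get("storage_pod") for d in disks)


disks_form_state = [{"datastore": "local-1"}, {"storage_pod": "iscsi-pod"}]
assert datastore_selection_enabled(disks_form_state) is False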

Comment 6 Jitendra Yejare 2019-09-13 07:05:12 UTC
Just an update: this is not a regression; the same issue appears in Satellite 6.5 as well.

Comment 8 Ondřej Ezr 2020-05-15 17:46:10 UTC
Created Redmine issue https://projects.theforeman.org/issues/29848 from this bug.

Comment 9 Jitendra Yejare 2021-11-08 12:02:11 UTC
Resetting the QA Contact for `qe_test_coverage` flag decision / implementation if set to `+`.

Comment 10 Mike McCune 2022-01-28 22:33:49 UTC
Upon review of our valid but aging backlog, the Satellite Team has concluded that this Bugzilla does not meet the criteria for resolution in the near term and plans to close it in a month. This message may be a repeat of a previous update, and the bug is again being considered for closure. If you have any concerns about this, please contact your Red Hat Account team. Thank you.

Comment 11 Mike McCune 2022-03-18 19:44:22 UTC
Thank you for your interest in Red Hat Satellite. We have evaluated this request, and while we recognize that it is a valid request, we do not expect this to be implemented in the product in the foreseeable future. This is due to other priorities for the product, and not a reflection on the request itself. We are therefore closing this out as WONTFIX. If you have any concerns about this feel free to contact your Red Hat Account Team. Thank you.

