Description of problem:

When provisioning a new host, we need to specify a storage_pod or datastore to place the hard disk into:

  redhat.satellite.host:
    [...]
    compute_attributes:
      volumes_attributes:
        '0':
          size_gb: '42'
          storage_pod: STORAGE_POD_NAME

However, we should allow the provider (VMware in this case) to give us a storage_pod to use for our cluster/datacenter. If this is not feasible, then the collection should provide a facility to query for available storage_pods in the current compute_resource, similar to the https://satellite.server.com/api/compute_resources/:id/available_storage_pods API endpoint.

Example output of `hammer compute-resource storage-pods --id 1`:

  ------------|---------
  ID          | NAME
  ------------|---------
  group-p123  | DC-01-01
  group-p234  | DC-01-02
  ------------|---------

Version-Release number of selected component (if applicable):
1.3.0

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
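As a stopgap until the collection offers a dedicated lookup, the endpoint mentioned above can be queried from a playbook with the stock `ansible.builtin.uri` module. This is a sketch only: the Satellite hostname, the credential variables, the compute resource ID (1), and the assumption that the response carries a `results` array are all placeholders, not confirmed details of the API.

```yaml
---
# Sketch: query available storage pods for a compute resource directly via
# the Satellite API. Hostname, credentials and compute resource ID are
# placeholder assumptions.
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Query available storage pods for compute resource 1
      ansible.builtin.uri:
        url: "https://satellite.server.com/api/compute_resources/1/available_storage_pods"
        method: GET
        user: "{{ satellite_username }}"
        password: "{{ satellite_password }}"
        force_basic_auth: true
        validate_certs: true
        return_content: true
      register: storage_pods

    - name: Show pod names and IDs (assumes a 'results' array in the response)
      ansible.builtin.debug:
        var: storage_pods.json.results
</ansible
```

The registered result could then feed `storage_pod` in `volumes_attributes`, instead of hard-coding STORAGE_POD_NAME.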
upstream PR merged
Upon review of our valid but aging backlog, the Satellite Team has concluded that this Bugzilla does not meet the criteria for a resolution in the near term, and is planning to close it within a month. This message may be a repeat of a previous update, and the bug is again being considered for closure. If you have any concerns about this, please contact your Red Hat Account team. Thank you.
Hello Team, we have another scenario where passing "storage_pod" as a direct value changes the name to an ID. The playbook in question creates a "Compute Profile". This change in value affects the deployment of new servers. Kindly consider this scenario for further investigation.