Bug 2211394 - Unable to use Satellite\Foreman ansible modules to create compute_profiles when the Cluster is nested inside a folder under the datacenter of VMware vCenter
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Satellite
Classification: Red Hat
Component: Ansible Collection
Version: 6.13.0
Hardware: All
OS: All
Priority: high
Severity: high
Target Milestone: 6.14.0
Assignee: Evgeni Golov
QA Contact: Griffin Sullivan
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-05-31 12:03 UTC by Sayan Das
Modified: 2023-11-08 14:19 UTC
CC List: 4 users

Fixed In Version: ansible-collection-redhat-satellite-3.11.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-11-08 14:19:24 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
- Github: theforeman/foreman-ansible-modules pull 1615 (Draft: find nested vmware clusters) - last updated 2023-05-31 13:28:19 UTC
- Red Hat Issue Tracker: SAT-18028 - last updated 2023-06-01 12:57:56 UTC
- Red Hat Product Errata: RHSA-2023:6818 - last updated 2023-11-08 14:19:42 UTC

Description Sayan Das 2023-05-31 12:03:33 UTC
Description of problem:

The redhat.satellite.compute_profile module generally works well, but when the cluster in the VMware environment is a nested cluster, the compute_profile creation fails with "404 Not Found" or "500 Internal Server Error".

The part that fails is the use of the nested cluster name to search for networks, storage domains, or storage pods.


Version-Release number of selected component (if applicable):

Satellite 6.11/6.12/6.13


How reproducible:

Always
 - Customer tested on 6.11
 - Support tested on 6.13

Steps to Reproduce:
1. Configure a VMware vCenter to have a nested cluster.
2. Install Satellite 6.13 and manually create a VMware compute resource in the Satellite UI for the same vCenter from step 1.
3. Collect some information from the compute resource via hammer:

# hammer compute-resource clusters --id 2
-----------|-----------------|--------------------|-------|---------------------------
ID         | NAME            | DATACENTER         | HOSTS | CLUSTER PATH              
-----------|-----------------|--------------------|-------|---------------------------
domain-c20 | Cluster-HomeLab | Home_DC/DC-HomeLab | 2     | PRODUCTION/Cluster-HomeLab
-----------|-----------------|--------------------|-------|---------------------------

# hammer compute-resource networks --id 2
-----------|------------|--------------------|----------------|--------
ID         | NAME       | DATACENTER         | VIRTUAL SWITCH | VLAN ID
-----------|------------|--------------------|----------------|--------
network-14 | VM Network | Home_DC/DC-HomeLab |                |        
-----------|------------|--------------------|----------------|--------

# hammer compute-resource storage-domains --id 2
-------------|----------------
ID           | NAME           
-------------|----------------
datastore-18 | esx11-datastore
datastore-13 | esx12-datastore
-------------|----------------

# hammer compute-resource storage-pods --id 2
------------|------------|-------------------
ID          | NAME       | DATACENTER        
------------|------------|-------------------
group-p1002 | DS-Cluster | Home_DC/DC-HomeLab
------------|------------|-------------------

# hammer compute-resource folders --id 2
------------|---------|------------|------------|--------------------------------------------------|-----
ID          | NAME    | PARENT     | DATACENTER | PATH                                             | TYPE
------------|---------|------------|------------|--------------------------------------------------|-----
group-v5    | vm      | DC-HomeLab | DC-HomeLab | /Datacenters/Home_DC/DC-HomeLab/vm               | vm  
group-v2002 | SERVERS | vm         | DC-HomeLab | /Datacenters/Home_DC/DC-HomeLab/vm/SERVERS       | vm  
group-v2003 | Sayan   | SERVERS    | DC-HomeLab | /Datacenters/Home_DC/DC-HomeLab/vm/SERVERS/Sayan | vm  
group-v22   | vCLS    | vm         | DC-HomeLab | /Datacenters/Home_DC/DC-HomeLab/vm/vCLS          | vm  
group-v2001 | vCenter | vm         | DC-HomeLab | /Datacenters/Home_DC/DC-HomeLab/vm/vCenter       | vm  
------------|---------|------------|------------|--------------------------------------------------|-----

4. Use the default 3.9.0 version of ansible-collection-redhat-satellite:

--> Create an ansible.cfg

--> Write a playbook to create the compute profile

# cat compute_profile.yaml 
---
- name: create compute profile
  hosts: localhost
  gather_facts: false
  vars:
    compute_resource_name: 'My_vCenter'
    satellite_compute_profile: 'Test_CP1'
    kerberos_id: 'saydas'
    compute_network: 'VM Network'
    vmware_datacenter: 'Home_DC/DC-HomeLab'
    vmware_cluster_name: 'Cluster-HomeLab'
    vmware_cluster_full_path: 'PRODUCTION/Cluster-HomeLab'
    vmware_storage_pod: 'DS-Cluster'
    vmware_folderpath: '/Datacenters/Home_DC/DC-HomeLab/vm/SERVERS/Sayan'
  tasks:

    - name: Collect Cluster ID
      command: bash -c "hammer --csv --no-headers compute-resource clusters --name {{ compute_resource_name }} | cut -d, -f1"
      register: cluster_id

    - debug: var=cluster_id.stdout

    - name: Create compute profile for satellite
      redhat.satellite.compute_profile:
        name: "{{ satellite_compute_profile }}"
        compute_attributes:
        - compute_resource: "{{ compute_resource_name }}"
          vm_attrs:
            cluster: "{{ vmware_cluster_name }}"
            path: "{{ vmware_folderpath }}"
            cpus: 1
            sockets: 1
            memory_mb: 2048
            memoryHotAddEnabled: 1
            cpuHotAddEnabled: 1
            guest_id: rhel7_64Guest
            volumes_attributes:
              0:
                size_gb: 15
                storage_pod: "{{ vmware_storage_pod }}"
                thin: 1
                eager_zero: false
            interfaces_attributes:
              0:
                type: "VirtualVmxnet3"
                network: "{{ compute_network }}"
        username: "admin"
        password: "RedHat1!"
        server_url: "https://satellite613.lab.example.com"
        validate_certs: true
        state: present


--> Repeat the playbook, changing the cluster input, i.e. one of:

            cluster: "{{ vmware_cluster_name }}"
            cluster: "{{ cluster_id.stdout }}"
            cluster: "{{ vmware_cluster_full_path }}"

Actual results:

For --> cluster: "{{ vmware_cluster_name }}"

~~
fatal: [127.0.0.1]: FAILED! => changed=false 
  error:
    message: 'Internal Server Error: the server was unable to finish the request. This may be caused by unavailability of some required service, incorrect API call or a server-side bug. There may be more information in the server''s logs.'
  msg: 'Error while performing available_networks on compute_resources: 500 Server Error: Internal Server Error for url: https://satellite613.lab.example.com/api/compute_resources/2/available_clusters/Cluster-HomeLab/available_networks'
~~

~~
2023-05-31T16:53:34 [I|app|3ce5b385]   Rendered layout api/v2/layouts/index_layout.json.erb (Duration: 2.9ms | Allocations: 6915)
2023-05-31T16:53:34 [I|app|3ce5b385] Completed 200 OK in 6243ms (Views: 3.6ms | ActiveRecord: 0.3ms | Allocations: 53320)
2023-05-31T16:53:34 [I|app|73326310] Started GET "/api/compute_resources/2/available_clusters/Cluster-HomeLab/available_networks" for 192.168.125.4 at 2023-05-31 16:53:34 +0530
2023-05-31T16:53:34 [I|app|73326310] Processing by Api::V2::ComputeResourcesController#available_networks as JSON
2023-05-31T16:53:34 [I|app|73326310]   Parameters: {"apiv"=>"v2", "id"=>"2", "cluster_id"=>"Cluster-HomeLab"}
2023-05-31T16:53:40 [W|app|73326310] Action failed
2023-05-31T16:53:40 [I|app|73326310] Backtrace for 'Action failed' error (NoMethodError): undefined method `network' for nil:NilClass
 73326310 | /usr/share/gems/gems/fog-vsphere-3.6.0/lib/fog/vsphere/requests/compute/list_networks.rb:21:in `list_networks'
 73326310 | /usr/share/gems/gems/fog-vsphere-3.6.0/lib/fog/vsphere/models/compute/networks.rb:10:in `all'
 73326310 | /usr/share/foreman/app/models/compute_resources/foreman/model/vmware.rb:152:in `block in networks'
 73326310 | /usr/share/foreman/app/services/compute_resource_cache.rb:68:in `instance_eval'
 73326310 | /usr/share/foreman/app/services/compute_resource_cache.rb:68:in `get_uncached_value'
 73326310 | /usr/share/foreman/app/services/compute_resource_cache.rb:22:in `cache'
 73326310 | /usr/share/foreman/app/models/compute_resources/foreman/model/vmware.rb:151:in `networks'
 73326310 | /usr/share/foreman/app/models/compute_resources/foreman/model/vmware.rb:173:in `available_networks'
 73326310 | /usr/share/foreman/app/controllers/api/v2/compute_resources_controller.rb:144:in `available_networks'
 73326310 | /usr/share/gems/gems/actionpack-6.1.7/lib/action_controller/metal/basic_implicit_render.rb:6:in `send_action'
 73326310 | /usr/share/gems/gems/actionpack-6.1.7/lib/abstract_controller/base.rb:228:in `process_action'
 73326310 | /usr/share/gems/gems/actionpack-6.1.7/lib/action_controller/metal/rendering.rb:30:in `process_action'
~~


For --> cluster: "{{ cluster_id.stdout }}"

~~
fatal: [127.0.0.1]: FAILED! => changed=false 
  error:
    message: 'Internal Server Error: the server was unable to finish the request. This may be caused by unavailability of some required service, incorrect API call or a server-side bug. There may be more information in the server''s logs.'
  msg: 'Error while performing available_networks on compute_resources: 500 Server Error: Internal Server Error for url: https://satellite613.lab.example.com/api/compute_resources/2/available_clusters/Cluster-HomeLab/available_networks'
~~

~~
==> /var/log/messages <==
May 31 16:55:11 satellite613 platform-python[2701]: ansible-redhat.satellite.compute_profile Invoked with name=Test_CP1 compute_attributes=[{'compute_resource': 'My_vCenter', 'vm_attrs': {'cluster': 'domain-c20', 'path': '/Datacenters/Home_DC/DC-HomeLab/vm/SERVERS/Sayan', 'cpus': 1, 'sockets': 1, 'memory_mb': 2048, 'memoryHotAddEnabled': 1, 'cpuHotAddEnabled': 1, 'guest_id': 'rhel7_64Guest', 'volumes_attributes': {'0': {'size_gb': 15, 'storage_pod': 'DS-Cluster', 'thin': 1, 'eager_zero': False}}, 'interfaces_attributes': {'0': {'type': 'VirtualVmxnet3', 'network': 'VM Network'}}}}] username=admin password=NOT_LOGGING_PARAMETER server_url=https://satellite613.lab.example.com validate_certs=True state=present updated_name=None

==> /var/log/foreman/production.log <==
2023-05-31T16:55:12 [I|app|a6f3cfc3] Started GET "/api/compute_resources/2/available_clusters/Cluster-HomeLab/available_storage_pods" for 192.168.125.4 at 2023-05-31 16:55:12 +0530
2023-05-31T16:55:12 [I|app|a6f3cfc3] Processing by Api::V2::ComputeResourcesController#available_storage_pods as JSON
2023-05-31T16:55:12 [I|app|a6f3cfc3]   Parameters: {"apiv"=>"v2", "id"=>"2", "cluster_id"=>"Cluster-HomeLab"}
2023-05-31T16:55:12 [I|app|a6f3cfc3]   Rendered api/v2/compute_resources/available_storage_pods.rabl within api/v2/layouts/index_layout (Duration: 2.4ms | Allocations: 6631)
2023-05-31T16:55:12 [I|app|a6f3cfc3]   Rendered layout api/v2/layouts/index_layout.json.erb (Duration: 4.0ms | Allocations: 13233)
2023-05-31T16:55:12 [I|app|a6f3cfc3] Completed 200 OK in 8ms (Views: 4.3ms | ActiveRecord: 0.7ms | Allocations: 15656)
2023-05-31T16:55:12 [I|app|fe4f2309] Started GET "/api/compute_resources/2/available_clusters/Cluster-HomeLab/available_networks" for 192.168.125.4 at 2023-05-31 16:55:12 +0530
2023-05-31T16:55:12 [I|app|fe4f2309] Processing by Api::V2::ComputeResourcesController#available_networks as JSON
2023-05-31T16:55:12 [I|app|fe4f2309]   Parameters: {"apiv"=>"v2", "id"=>"2", "cluster_id"=>"Cluster-HomeLab"}
2023-05-31T16:55:18 [W|app|fe4f2309] Action failed
2023-05-31T16:55:18 [I|app|fe4f2309] Backtrace for 'Action failed' error (NoMethodError): undefined method `network' for nil:NilClass
 fe4f2309 | /usr/share/gems/gems/fog-vsphere-3.6.0/lib/fog/vsphere/requests/compute/list_networks.rb:21:in `list_networks'
 fe4f2309 | /usr/share/gems/gems/fog-vsphere-3.6.0/lib/fog/vsphere/models/compute/networks.rb:10:in `all'
 fe4f2309 | /usr/share/foreman/app/models/compute_resources/foreman/model/vmware.rb:152:in `block in networks'
 fe4f2309 | /usr/share/foreman/app/services/compute_resource_cache.rb:68:in `instance_eval'
 fe4f2309 | /usr/share/foreman/app/services/compute_resource_cache.rb:68:in `get_uncached_value'
 fe4f2309 | /usr/share/foreman/app/services/compute_resource_cache.rb:22:in `cache'
 fe4f2309 | /usr/share/foreman/app/models/compute_resources/foreman/model/vmware.rb:151:in `networks'
 fe4f2309 | /usr/share/foreman/app/models/compute_resources/foreman/model/vmware.rb:173:in `available_networks'
 fe4f2309 | /usr/share/foreman/app/controllers/api/v2/compute_resources_controller.rb:144:in `available_networks'
 fe4f2309 | /usr/share/gems/gems/actionpack-6.1.7/lib/action_controller/metal/basic_implicit_render.rb:6:in `send_action'
 fe4f2309 | /usr/share/gems/gems/actionpack-6.1.7/lib/abstract_controller/base.rb:228:in `process_action'
 fe4f2309 | /usr/share/gems/gems/actionpack-6.1.7/lib/action_controller/metal/rendering.rb:30:in `process_action'
 fe4f2309 | /usr/share/gems/gems/actionpack-6.1.7/lib/abstract_controller/callbacks.rb:42:in `block in process_action'
~~

Here, even though we used the ID of the cluster, the module still maps it back to the name, and the call fails.


For --> cluster: "{{ vmware_cluster_full_path }}"

~~
TASK [Create compute profile for satellite] *******************************************************************************************************************************************************************************************
fatal: [127.0.0.1]: FAILED! => changed=false 
  msg: Could not find clusters 'PRODUCTION/Cluster-HomeLab' on compute resource 'My_vCenter'.

~~

Here, Satellite cannot even find the cluster by its full path.


Expected results:

One of the options should work and create the compute_profile even if the cluster is nested.


Additional info:

NA

Comment 1 Sayan Das 2023-05-31 12:05:42 UTC
NOTE: 

With --> cluster: "{{ vmware_cluster_name }}"

and

https://github.com/theforeman/foreman-ansible-modules/pull/1615 applied,

the network-related error is now gone, but it fails to find the storage pod:

TASK [Create compute profile for satellite] *******************************************************************************************************************************************************************************************
fatal: [127.0.0.1]: FAILED! => changed=false 
  error: |-
    <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
    <html><head>
    <title>404 Not Found</title>
    </head><body>
    <h1>Not Found</h1>
    <p>The requested URL was not found on this server.</p>
    </body></html>
  msg: 'Error while performing available_storage_pods on compute_resources: 404 Client Error: Not Found for url: https://satellite613.lab.example.com/api/compute_resources/2/available_clusters/PRODUCTION%2FCluster-HomeLab/available_storage_pods'



Manual Curl test:

# curl -ku admin:RedHat1! "https://satellite613.lab.example.com/api/compute_resources/2/available_clusters/PRODUCTION%2FCluster-HomeLab/available_storage_pods"
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>404 Not Found</title>
</head><body>
<h1>Not Found</h1>
<p>The requested URL was not found on this server.</p>
</body></html>



# curl -ku admin:RedHat1! "https://satellite613.lab.example.com/api/compute_resources/2/available_clusters/PRODUCTION\Cluster-HomeLab/available_storage_pods"
{
  "total": 1,
  "subtotal": 1,
  "page": 1,
  "per_page": 20,
  "search": null,
  "sort": {
    "by": null,
    "order": null
  },
  "results": [{"name":"DS-Cluster","id":"group-p1002","capacity":115158810624,"freespace":65284341760,"datacenter":"Home_DC/DC-HomeLab"}]
}


I believe the same issue would happen for storage_domains as well.
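
If the analogous per-cluster route exists for storage domains (an assumption by analogy with available_storage_pods; it is not exercised in the logs above), the same single-encoded lookup should fail in the same way:

# curl -ku admin:RedHat1! "https://satellite613.lab.example.com/api/compute_resources/2/available_clusters/PRODUCTION%2FCluster-HomeLab/available_storage_domains"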

Comment 3 Sayan Das 2023-05-31 13:00:53 UTC
Information based on the investigation with the engineering team on my setup:

Via https://github.com/theforeman/foreman/pull/9383/files and https://github.com/theforeman/hammer-cli-foreman/pull/604/files, we got hammer working with nested clusters in BZ https://bugzilla.redhat.com/show_bug.cgi?id=1994945.

So, due to https://github.com/theforeman/foreman/pull/9383/files#diff-4c9e5d9690c2678a2aa9086871b8b0057f0dcbd191112127dfb2b7379020d0dfR298-R300,

PRODUCTION/Cluster-HomeLab is requested as PRODUCTION%252FCluster-HomeLab, and that double escaping works for the API.


But when FAM/SAM calls the same endpoint directly via the REST API,

PRODUCTION/Cluster-HomeLab is requested as PRODUCTION%2FCluster-HomeLab, and the call fails.
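
To make the escaping difference concrete, here are the two requests side by side, using the endpoints and credentials from this reproducer (the outcomes are the ones recorded in comment 1 and in the comment 5 logs; this is an illustration, not new data):

--> Single-encoded slash, as the Ansible module sent it before the fix; this returns the 404 page shown in comment 1:

# curl -ku admin:RedHat1! "https://satellite613.lab.example.com/api/compute_resources/2/available_clusters/PRODUCTION%2FCluster-HomeLab/available_storage_pods"

--> Double-encoded slash, as hammer sends it since the foreman PR 9383 change; Foreman decodes it back to PRODUCTION/Cluster-HomeLab and the lookup succeeds:

# curl -ku admin:RedHat1! "https://satellite613.lab.example.com/api/compute_resources/2/available_clusters/PRODUCTION%252FCluster-HomeLab/available_storage_pods"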

Comment 4 Sayan Das 2023-05-31 13:28:04 UTC
// UPDATE //

The new patch now handles the URL-escaping issue for the API:

https://github.com/theforeman/foreman-ansible-modules/pull/1615/files

But even though all the GET calls (to retrieve the necessary details) now work fine, the POST request to create the compute profile fails:

~~
TASK [Create compute profile for satellite] ************************************************************************************************
fatal: [127.0.0.1]: FAILED! => changed=false
  error:
    message: Fog::Vsphere::Compute::NotFound
  msg: 'Error while performing create on compute_attributes: 500 Server Error: Internal Server Error for url: https://satellite613.lab.example.com/api/compute_profiles/11/compute_resources/2/compute_attributes'

PLAY RECAP *********************************************************************************************************************************
127.0.0.1                  : ok=2    changed=1    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0    
~~


2023-05-31T18:44:11 [I|app|f7b40e0d] Started POST "/api/compute_profiles/11/compute_resources/2/compute_attributes" for 192.168.125.4 at 2023-05-31 18:44:11 +0530
2023-05-31T18:44:11 [I|app|f7b40e0d] Processing by Api::V2::ComputeAttributesController#create as JSON
2023-05-31T18:44:11 [I|app|f7b40e0d]   Parameters: {"compute_attribute"=>{"vm_attrs"=>{"cluster"=>"PRODUCTION%2FCluster-HomeLab", "path"=>"/Datacenters/Home_DC/DC-HomeLab/vm/SERVERS/Sayan", "cpus"=>1, "sockets"=>1, "memory_mb"=>2048, "memoryHotAddEnabled"=>1, "cpuHotAddEnabled"=>1, "guest_id"=>"rhel7_64Guest", "volumes_attributes"=>{"0"=>{"size_gb"=>15, "storage_pod"=>"group-p1002", "thin"=>1, "eager_zero"=>false}}, "interfaces_attributes"=>{"0"=>{"type"=>"VirtualVmxnet3", "network"=>"network-14"}}}}, "apiv"=>"v2", "compute_profile_id"=>"11", "compute_resource_id"=>"2"}
2023-05-31T18:44:17 [W|app|f7b40e0d] Action failed
2023-05-31T18:44:17 [I|app|f7b40e0d] Backtrace for 'Action failed' error (Fog::Vsphere::Compute::NotFound): Fog::Vsphere::Compute::NotFound
 f7b40e0d | /usr/share/gems/gems/fog-vsphere-3.6.0/lib/fog/vsphere/requests/compute/get_cluster.rb:7:in `get_cluster'
 f7b40e0d | /usr/share/gems/gems/fog-vsphere-3.6.0/lib/fog/vsphere/models/compute/clusters.rb:15:in `get'
 f7b40e0d | /usr/share/foreman/app/models/compute_resources/foreman/model/vmware.rb:87:in `block in cluster'
 f7b40e0d | /usr/share/foreman/app/services/compute_resource_cache.rb:68:in `instance_eval'
 f7b40e0d | /usr/share/foreman/app/services/compute_resource_cache.rb:68:in `get_uncached_value'
 f7b40e0d | /usr/share/foreman/app/services/compute_resource_cache.rb:22:in `cache'
 f7b40e0d | /usr/share/foreman/app/models/compute_resources/foreman/model/vmware.rb:86:in `cluster'
 f7b40e0d | /usr/share/foreman/app/models/compute_resources/foreman/model/vmware.rb:158:in `block in resource_pools'
 f7b40e0d | /usr/share/foreman/app/services/compute_resource_cache.rb:68:in `instance_eval'
 f7b40e0d | /usr/share/foreman/app/services/compute_resource_cache.rb:68:in `get_uncached_value'
 f7b40e0d | /usr/share/foreman/app/services/compute_resource_cache.rb:22:in `cache'
 f7b40e0d | /usr/share/foreman/app/models/compute_resources/foreman/model/vmware.rb:157:in `resource_pools'
 f7b40e0d | /usr/share/foreman/app/models/compute_resources/foreman/model/vmware.rb:680:in `normalize_vm_attrs'



Reason:

~~
{"vm_attrs"=>{"cluster"=>"PRODUCTION%2FCluster-HomeLab", "path"=>"/Datacenters/Home_DC/DC-HomeLab/vm/SERVERS/Sayan",
~~

We should not escape the / in the cluster value when it is used in the payload to create the compute profile.
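
In other words, the escaping is purely a URL-path concern for the GET lookups; the JSON body of the POST has to carry the raw cluster path. A rough curl equivalent of what the module should send (values taken from the failing request above, vm_attrs trimmed for brevity; this is an illustration, not the module's actual code):

# curl -ku admin:RedHat1! -X POST -H "Content-Type: application/json" \
    "https://satellite613.lab.example.com/api/compute_profiles/11/compute_resources/2/compute_attributes" \
    -d '{"compute_attribute":{"vm_attrs":{"cluster":"PRODUCTION/Cluster-HomeLab","path":"/Datacenters/Home_DC/DC-HomeLab/vm/SERVERS/Sayan","cpus":1,"memory_mb":2048}}}'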

Comment 5 Sayan Das 2023-05-31 15:54:45 UTC
// UPDATE //

With the latest changes introduced in the same PR, the compute profile creation is now successful:


2023-05-31T21:22:49 [I|app|3b9a92da]   Rendered api/v2/compute_resources/show.json.rabl (Duration: 6568.2ms | Allocations: 299555)
2023-05-31T21:22:49 [I|app|3b9a92da] Completed 200 OK in 6573ms (Views: 6565.6ms | ActiveRecord: 4.3ms | Allocations: 304192)
2023-05-31T21:22:49 [I|app|32a63f77] Started GET "/api/compute_resources/2/available_clusters" for 192.168.125.4 at 2023-05-31 21:22:49 +0530
2023-05-31T21:22:49 [I|app|32a63f77] Processing by Api::V2::ComputeResourcesController#available_clusters as JSON
2023-05-31T21:22:49 [I|app|32a63f77]   Parameters: {"apiv"=>"v2", "id"=>"2"}
2023-05-31T21:22:49 [I|app|32a63f77]   Rendered api/v2/compute_resources/available_clusters.rabl within api/v2/layouts/index_layout (Duration: 0.4ms | Allocations: 337)
2023-05-31T21:22:49 [I|app|32a63f77]   Rendered layout api/v2/layouts/index_layout.json.erb (Duration: 2.6ms | Allocations: 6899)
2023-05-31T21:22:49 [I|app|32a63f77] Completed 200 OK in 6ms (Views: 3.1ms | ActiveRecord: 0.4ms | Allocations: 9444)
2023-05-31T21:22:49 [I|app|2b0d547b] Started GET "/api/compute_resources/2/available_clusters/PRODUCTION%252FCluster-HomeLab/available_storage_pods" for 192.168.125.4 at 2023-05-31 21:22:49 +0530
2023-05-31T21:22:49 [I|app|2b0d547b] Processing by Api::V2::ComputeResourcesController#available_storage_pods as JSON
2023-05-31T21:22:49 [I|app|2b0d547b]   Parameters: {"apiv"=>"v2", "id"=>"2", "cluster_id"=>"PRODUCTION%2FCluster-HomeLab"}
2023-05-31T21:22:55 [I|app|2b0d547b] Loaded compute resource data for storage_pods-PRODUCTION/Cluster-HomeLab in 6.202948726 seconds
2023-05-31T21:22:55 [I|app|2b0d547b]   Rendered api/v2/compute_resources/available_storage_pods.rabl within api/v2/layouts/index_layout (Duration: 0.4ms | Allocations: 336)
2023-05-31T21:22:55 [I|app|2b0d547b]   Rendered layout api/v2/layouts/index_layout.json.erb (Duration: 2.6ms | Allocations: 6897)
2023-05-31T21:22:55 [I|app|2b0d547b] Completed 200 OK in 6210ms (Views: 3.4ms | ActiveRecord: 0.5ms | Allocations: 53302)
2023-05-31T21:22:55 [I|app|64b1bb94] Started GET "/api/compute_resources/2/available_clusters/PRODUCTION%252FCluster-HomeLab/available_networks" for 192.168.125.4 at 2023-05-31 21:22:55 +0530
2023-05-31T21:22:55 [I|app|64b1bb94] Processing by Api::V2::ComputeResourcesController#available_networks as JSON
2023-05-31T21:22:55 [I|app|64b1bb94]   Parameters: {"apiv"=>"v2", "id"=>"2", "cluster_id"=>"PRODUCTION%2FCluster-HomeLab"}
2023-05-31T21:23:02 [I|app|64b1bb94] Loaded compute resource data for networks-PRODUCTION/Cluster-HomeLab in 6.208789635 seconds
2023-05-31T21:23:02 [I|app|64b1bb94]   Rendered api/v2/compute_resources/available_networks.rabl within api/v2/layouts/index_layout (Duration: 0.4ms | Allocations: 337)
2023-05-31T21:23:02 [I|app|64b1bb94]   Rendered layout api/v2/layouts/index_layout.json.erb (Duration: 2.6ms | Allocations: 6899)
2023-05-31T21:23:02 [I|app|64b1bb94] Completed 200 OK in 6215ms (Views: 3.3ms | ActiveRecord: 0.4ms | Allocations: 72666)
2023-05-31T21:23:02 [I|app|ffc64332] Started POST "/api/compute_profiles/12/compute_resources/2/compute_attributes" for 192.168.125.4 at 2023-05-31 21:23:02 +0530
2023-05-31T21:23:02 [I|app|ffc64332] Processing by Api::V2::ComputeAttributesController#create as JSON
2023-05-31T21:23:02 [I|app|ffc64332]   Parameters: {"compute_attribute"=>{"vm_attrs"=>{"cluster"=>"PRODUCTION/Cluster-HomeLab", "path"=>"/Datacenters/Home_DC/DC-HomeLab/vm/SERVERS/Sayan", "cpus"=>1, "sockets"=>1, "memory_mb"=>2048, "memoryHotAddEnabled"=>1, "cpuHotAddEnabled"=>1, "guest_id"=>"rhel7_64Guest", "volumes_attributes"=>{"0"=>{"size_gb"=>15, "storage_pod"=>"group-p1002", "thin"=>1, "eager_zero"=>false}}, "interfaces_attributes"=>{"0"=>{"type"=>"VirtualVmxnet3", "network"=>"network-14"}}}}, "apiv"=>"v2", "compute_profile_id"=>"12", "compute_resource_id"=>"2"}
2023-05-31T21:23:08 [I|aud|ffc64332] ComputeAttribute (2) create event on compute_profile_id 12
2023-05-31T21:23:08 [I|aud|ffc64332] ComputeAttribute (2) create event on compute_resource_id 2
2023-05-31T21:23:08 [I|aud|ffc64332] ComputeAttribute (2) create event on name 1 CPUs and 2048 MB memory
2023-05-31T21:23:08 [I|aud|ffc64332] ComputeAttribute (2) create event on vm_attrs {"cluster"=>"PRODUCTION/Cluster-HomeLab", "path"=>"/Datacenters/Home_DC/DC-HomeLab/vm/SERVERS/Sayan", "cpus"=>1, "sockets"=>1, "memory_mb"=>2048, "memoryHotAddEnabled"=>1, "cpuHotAddEnabled"=>1, "guest_id"=>"rhel7_64Guest", "volumes_attributes"=>{"0"=>{"size_gb"=>15, "storage_pod"=>"group-p1002", "thin"=>1, "eager_zero"=>false}}, "interfaces_attributes"=>{"0"=>{"type"=>"VirtualVmxnet3", "network"=>"network-14"}}}
2023-05-31T21:23:08 [I|app|ffc64332]   Rendered api/v2/compute_attributes/create.json.rabl (Duration: 8.0ms | Allocations: 61562)
2023-05-31T21:23:08 [I|app|ffc64332] Completed 201 Created in 6218ms (Views: 9.9ms | ActiveRecord: 3.7ms | Allocations: 141139)



I confirm that https://github.com/theforeman/foreman-ansible-modules/pull/1615.patch fixes the issue.
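
For anyone who wants to test this before the fixed collection ships, a rough sketch of applying the PR patch to the installed collection (assuming the downstream package lives under /usr/share/ansible/collections/ansible_collections/redhat/satellite; since the upstream collection is theforeman.foreman, some hunks may need manual adjustment):

# curl -Lo 1615.patch https://github.com/theforeman/foreman-ansible-modules/pull/1615.patch
# patch -p1 -d /usr/share/ansible/collections/ansible_collections/redhat/satellite < 1615.patch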

Comment 6 Griffin Sullivan 2023-07-31 15:01:12 UTC
Verified in 6.14 snap 9

Satellite ansible collection can create a compute profile for a compute resource with a nested cluster in VMware

Steps to Reproduce:

1) Set up the nested cluster

2) Create a compute resource for the cluster from step 1

3) Run the playbook in the description of the BZ and edit values as necessary for your cluster


Results:

The playbook runs successfully for each variant of the nested cluster input.

Comment 9 errata-xmlrpc 2023-11-08 14:19:24 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Satellite 6.14 security and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:6818

