Bug 1635304 - [RFE] Nested hostgroups from Foreman are shown as flat in RHV
Summary: [RFE] Nested hostgroups from Foreman are shown as flat in RHV
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 4.2.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ovirt-4.3.0
Target Release: 4.3.0
Assignee: Moti Asayag
QA Contact: Petr Kubica
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-10-02 14:52 UTC by Maxim Burgerhout
Modified: 2019-05-16 07:42 UTC
CC List: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
This release allows users in Red Hat Virtualization Manager to view the full path of the host group in the host group drop-down list to facilitate host group configuration.
Clone Of:
Environment:
Last Closed: 2019-05-08 12:38:40 UTC
oVirt Team: Infra
Target Upstream Version:
Embargoed:
pkubica: testing_plan_complete-


Attachments
Screenshot of problem in RHV (152.03 KB, image/png)
2018-10-02 14:52 UTC, Maxim Burgerhout
no flags


Links
System                  ID              Private  Priority  Status  Summary  Last Updated
Red Hat Product Errata  RHEA-2019:1085  0        None      None    None     2019-05-08 12:38:53 UTC
oVirt gerrit            95280           0        None      None    None     2018-11-08 13:44:08 UTC

Description Maxim Burgerhout 2018-10-02 14:52:22 UTC
Created attachment 1489490 [details]
Screenshot of problem in RHV

Description of problem:
RHEV currently shows Foreman / Satellite nested hostgroups as a flat list. This makes it harder, for example, to distinguish between different nested hostgroups with similar names:

library/rhel7/rhev
production/rhel7/rhev

when displayed flat, which nested 'rhev' hostgroup do I pick?

Version-Release number of selected component (if applicable):
RHV 4.2.6
Sat6 6.3.3

How reproducible:
Use Foreman host provisioning through RHEV

Steps to Reproduce:
1. Set up RHV and Sat6
2. Set up the Foreman / Sat6 external provider
3. Try to pick the correct nested hostgroup for a new host

Actual results:
Nested hostgroups are shown flat

Expected results:
Nested hostgroups are shown nested

Additional info:
Was opened and closed for RHEV 3.5 in 2015 as #1223578

Comment 1 Maxim Burgerhout 2018-10-02 15:00:07 UTC
Also, it turns out only the first 20 hostgroups are shown. Pagination maybe?

Comment 2 Martin Perina 2018-10-22 13:12:05 UTC
RHV has always displayed the host group as flat (only the name of the host group), but we can switch to the title instead of the name, which contains the host group hierarchy. For example:

        name        | ancestry |                     title
--------------------+----------+----------------------------------------------
 testgroup          |          | testgroup
 sub_testgroup      | 1        | testgroup/sub_testgroup
 testgroup2         |          | testgroup2
 sub_testgroup2     | 3        | testgroup2/sub_testgroup2
 sub_sub_testgroup2 | 3/4      | testgroup2/sub_testgroup2/sub_sub_testgroup2

So changing this to RFE
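
For illustration, the proposed change amounts to labeling each host group in the drop-down by its full "title" path rather than its bare "name". A minimal sketch of that idea in Java, assuming a hypothetical HostGroup record rather than the actual ovirt-engine classes:

import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch only; not the real ovirt-engine code.
public class HostGroupLabels {

    // Minimal stand-in for a Foreman host group entry (name, ancestry, title).
    record HostGroup(String name, String ancestry, String title) {}

    // Prefer the full hierarchical title when present; fall back to the flat name.
    static String displayLabel(HostGroup hg) {
        return (hg.title() != null && !hg.title().isEmpty()) ? hg.title() : hg.name();
    }

    public static void main(String[] args) {
        List<HostGroup> groups = List.of(
                new HostGroup("rhev", "1/2", "library/rhel7/rhev"),
                new HostGroup("rhev", "3/4", "production/rhel7/rhev"));

        // With flat names both entries would read "rhev"; with titles they are distinct.
        System.out.println(groups.stream()
                .map(HostGroupLabels::displayLabel)
                .collect(Collectors.joining(", ")));
        // -> library/rhel7/rhev, production/rhel7/rhev
    }
}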

Comment 3 Moti Asayag 2018-11-01 08:20:18 UTC
(In reply to Maxim Burgerhout from comment #1)
> Also, it turns out only the first 20 hostgroups are shown. Pagination maybe?

Seems like it. The default API call to Foreman for hostgroups returns the pagination details as part of the response:

$ curl -s  -k -u admin:miFWnEsYMXD5iZPP http://node01:5000/api/v2/hostgroups | python -m json.tool
{
    "page": 1,
    "per_page": 20,
    "results": [
        {
          ...
        }
     ]
}

This seems to be common to other Foreman resources fetched by ovirt-engine (except for Katello, where pagination was implemented).

I'd suggest opening a dedicated bug for it, since it expands beyond host groups to any other Foreman resource.
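
For reference, the Katello-style approach mentioned above follows the pagination fields and fetches the resource page by page. A rough sketch of that loop, assuming the response also exposes a total count alongside page and per_page, with a stubbed fetchPage() standing in for the real HTTP call (hypothetical names, not an existing ovirt-engine or Foreman client API):

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of page-by-page fetching; fetchPage() and PageResult are
// placeholders, not real ovirt-engine or Foreman client classes.
public class PagedFetchSketch {

    record PageResult(int page, int perPage, int total, List<String> results) {}

    // Placeholder: a real implementation would GET /api/v2/hostgroups?page=N&per_page=M
    // and parse the JSON body shown above. Stubbed with 45 fake entries so it runs standalone.
    static PageResult fetchPage(int page, int perPage) {
        int total = 45;
        List<String> results = new ArrayList<>();
        for (int i = (page - 1) * perPage; i < Math.min(page * perPage, total); i++) {
            results.add("hostgroup-" + i);
        }
        return new PageResult(page, perPage, total, results);
    }

    // Keep requesting pages until every entry reported by 'total' has been collected.
    static List<String> fetchAll(int perPage) {
        List<String> all = new ArrayList<>();
        int page = 1;
        PageResult pr;
        do {
            pr = fetchPage(page++, perPage);
            all.addAll(pr.results());
        } while (all.size() < pr.total());
        return all;
    }

    public static void main(String[] args) {
        System.out.println(fetchAll(20).size()); // -> 45, i.e. spread over three pages
    }
}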

Note for the developer who will fix the limited response size:
https://github.com/oVirt/ovirt-engine/blob/master/backend/manager/modules/bll/src/main/java/org/ovirt/engine/core/bll/host/provider/foreman/ForemanHostProviderProxy.java#L44

Calls to Foreman are currently made without 'per_page=99999', which would simulate an unlimited page size (all entries in one call); therefore the queries/URLs should be changed to include it, e.g.:

http://node01:5000/api/v2/hostgroups?per_page=99999
and the response will reflect it:
{
    "page": 1,
    "per_page": 99999,
    "results": [
        {
        }
     ]
}
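
In code terms, the suggestion here is simply to append the per_page parameter when the query URL is built. A minimal sketch of that URL handling (a hypothetical helper, not the actual ForemanHostProviderProxy code linked above):

// Hypothetical helper; only illustrates the intended URL shape.
public class PerPageUrlSketch {

    private static final String ALL_ENTRIES = "per_page=99999";

    // Append per_page to a relative API path, preserving any existing query string.
    static String withUnlimitedPageSize(String path) {
        return path + (path.contains("?") ? "&" : "?") + ALL_ENTRIES;
    }

    public static void main(String[] args) {
        System.out.println(withUnlimitedPageSize("/api/v2/hostgroups"));
        // -> /api/v2/hostgroups?per_page=99999
        System.out.println(withUnlimitedPageSize("/api/v2/hostgroups?search=rhel7"));
        // -> /api/v2/hostgroups?search=rhel7&per_page=99999
    }
}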

Comment 4 Red Hat Bugzilla Rules Engine 2018-11-01 08:42:02 UTC
Target release should be placed once a package build is known to fix an issue. Since this bug is not modified, the target version has been reset. Please use the target milestone to plan a fix for an oVirt release.

Comment 6 Moti Asayag 2018-11-01 08:48:39 UTC
(In reply to Moti Asayag from comment #3)
> (In reply to Maxim Burgerhout from comment #1)
> > Also, it turns out only the first 20 hostgroups are shown. Pagination maybe?
> 
> Seems like it.
> [...]
> I'd suggest opening a dedicated bug for it, since it expands beyond host
> groups to any other Foreman resource.

Opened a new bug for pagination: Bug 1645007

Comment 8 Petr Kubica 2019-01-30 09:53:30 UTC
Verified in ovirt-engine-4.3.0.2-0.1.el7.noarch

Comment 10 errata-xmlrpc 2019-05-08 12:38:40 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:1085

