Bug 1838144 - Error while adding a host using the ansible-roles infra
Summary: Error while adding a host using the ansible-roles infra
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-ansible-roles
Version: 4.3.7
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ovirt-4.4.2
: ---
Assignee: Martin Necas
QA Contact: Jan Zmeskal
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-05-20 14:23 UTC by Luca
Modified: 2023-10-06 20:09 UTC
CC: 6 users

Fixed In Version: ovirt-ansible-infra-1.2.2-1
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-09-23 16:15:11 UTC
oVirt Team: Infra
Target Upstream Version:
Embargoed:
pbrilla: testing_plan_complete-




Links:
- GitHub oVirt ovirt-ansible-infra pull 75 (closed): fix host missing required networks (last updated 2020-12-04 11:39:02 UTC)
- Red Hat Product Errata RHBA-2020:3820 (last updated 2020-09-23 16:15:31 UTC)

Description Luca 2020-05-20 14:23:41 UTC
Description of problem:

We have a cluster with some logical networks defined as required.

When we run a playbook (using ovirt-ansible-roles) to add a new host that will configure those networks (we include all the variables that define the configuration of those networks on the host), we get this error:

["Host node2.ovirt2 does not comply with the cluster clrhvbg001 networks, the following networks are missing on host: 'Test1,Test2'"]


Version-Release number of selected component (if applicable):

- ovirt manager 4.3.9.4-1.el7
- rhev manager 4.3.7.2

How reproducible:


Steps to Reproduce:
1. In a data center, define some logical networks flagged as required and add a host that uses those logical networks.
2. In a vars file, define another host that will use the logical networks defined as required.
3. Run the playbook using the ansible roles.

Actual results:

["Host node2.ovirt2 does not comply with the cluster clrhvbg001 networks, the following networks are missing on host: 'Test1,Test2'"]

Expected results:

No error

Comment 1 Michael Burman 2020-05-20 16:48:47 UTC
This is an expected warning. It lets the admin know there are 'required' networks in the cluster which are not attached to the newly added host.
You can uncheck 'required' on these networks in the cluster, or attach the networks to the newly added host.
If a network is marked as required in a cluster, it is expected to be attached to all hosts in that cluster; if it is not, you will see these warnings.
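
For admins who want the non-blocking behaviour, the 'required' flag can be cleared per cluster. A minimal sketch using the ovirt_network Ansible module (the data center name is a placeholder; the network and cluster names are taken from the report):

```yaml
# Sketch only: clear the 'required' flag of a logical network in one
# cluster so that hosts without it no longer trigger the warning.
# 'mydc' is a placeholder data center name.
- name: Make Test1 optional in cluster clrhvbg001
  ovirt_network:
    auth: "{{ ovirt_auth }}"
    data_center: mydc
    name: Test1
    clusters:
      - name: clrhvbg001
        assigned: yes
        required: no
```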

Comment 2 Luca 2020-06-05 16:50:08 UTC
Yeah, I know that, but the ansible roles hit those errors before configuring the host_networks.

If I run the playbook without the logical_networks variables (the networks are already configured), we don't get any errors; all logical networks are assigned without any problems.

If I run the playbook with the logical_networks variables (already configured), we get the error before the host_network is configured.

Comment 3 Martin Necas 2020-06-16 14:31:49 UTC
I was not able to reproduce it, but I got different errors which I will look into.
Could you please provide the playbook variables?
I have probably missed something.

Comment 4 Martin Perina 2020-06-29 11:18:48 UTC
Feel free to reopen if reproduced and provide requested information

Comment 5 Luca 2020-07-02 11:52:47 UTC
Sorry for the late response.

This is a var file with the definition of the logical networks:

logical_networks:
  - name: "Test2"
    clusters:
      - name: "clrhvbg001"
        assigned: yes 
        required: yes
        display: no
        migration: no 
        gluster: no
    vlan_tag: 666
    vm_network: True
  

  - name: Test1
    clusters:
      - name: "clrhvbg001"
        assigned: yes
        required: yes
        display: no
        migration: no 
        gluster: no
    vlan_tag: 12
    vm_network: True


  - name: fcoe1
    clusters:
      - name: "clrhvbg001"
        assigned: yes
        required: no
        display: no
        migration: no
        gluster: no
    vm_network: False 

  - name: fcoe2
    clusters:
      - name: "clrhvbg001"
        assigned: yes
        required: no
        display: no
        migration: no
        gluster: no
    vm_network: False 

This is a var file with the configuration needed by the host; I assign each logical network to the needed interfaces (so no logical network marked as required is left without an interface):

host_networks:
  - name: "ovirthost.local"        
    save: yes
    bond:                           
        name: "bond1"               
        mode: "1"                   
        interfaces:                 
                - "ens3"            
                - "eth0"           
    networks:                       
       - name: "ovirtmgmt"         
         boot_protocol: "static"   
         address: "192.168.122.197"  
         netmask: "255.255.255.0"
         gateway: "192.168.122.1"
       
       - name: "Test1"
         boot_protocol: "none"

       - name: "Test2"
         boot_protocol: "none"

  - name: "ovirthost.local"
    save: yes
    interface: eth1
    networks:
      - name: "fcoe1"
        boot_protocol: none
        custom_properties:
          - name: fcoe
            value: enable=yes,dcb=no,auto_vlan=yes

  - name: "ovirthost.local"
    save: yes
    interface: eth2
    networks:
      - name: "fcoe2"
        boot_protocol: none
        custom_properties:
          - name: fcoe
            value: enable=yes,dcb=no,auto_vlan=yes

The problem is that, if I run a playbook using those variables when no host and no logical network is configured yet, I don't get any error. If I run it again and add another host (same logical networks, with every logical network marked as required assigned), I get the error reported above. To bypass the error, I have to omit the var file in which I define the logical networks.
I get the same error if I run the playbook again to add the first host.

If needed I'll provide more information.
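
The two-pass workaround described above (omitting the logical_networks var file when adding hosts) can be sketched as a playbook fragment; the variable names and layout here are hypothetical, not taken from the report:

```yaml
# Hypothetical workaround sketch: invoke the role without
# logical_networks on the host-adding pass, so the role does not
# re-validate required networks before host_networks is applied.
- name: Add a host without re-applying logical networks
  hosts: engine
  roles:
    - role: ovirt.infra
      vars:
        hosts: "{{ new_hosts }}"             # host definitions only
        host_networks: "{{ host_net_vars }}" # interface attachments
        # logical_networks intentionally omitted
```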

Comment 6 Luca 2020-07-02 11:55:15 UTC
(In reply to Luca from comment #5)

Also, I did not see the "add an attachment" link at the top of the page, sorry.

I'll upload more files there if needed.

Comment 7 RHEL Program Management 2020-07-08 21:10:11 UTC
The documentation text flag should only be set after 'doc text' field is provided. Please provide the documentation text and set the flag to '?' again.

Comment 10 Jan Zmeskal 2020-08-26 14:00:38 UTC
Verified with: ovirt-ansible-infra-1.2.2

Verification steps:
1. Have an engine with one host in your data center
2. Create a new logical network in your data center and set it as required in your cluster
3. Attach this network to one of the first host's interfaces
4. Prepare a playbook like this (adding a second host that will also have the newly created network attached to the same network interface):

- name: Set up new host with required network
  hosts: my_engine
  vars:
    rhv_cluster_name: my_cluster
    host_password: <censored>
    rhv_host_network_interface: "eno2"
    host_1_name: host_1
    host_2_name: host_2
    rhv_network_name: test_network_1

  roles:
    - role: ovirt.infra
      vars:
        data_center_name: my_data_center
        compatibility_version: "4.3"
        hosts:
          - name: "{{ host_1_name }}"
            state: present
            address: <censored>
            password: "{{ host_password }}"
            cluster: "{{ rhv_cluster_name }}"
          - name: "{{ host_2_name }}"
            state: present
            address: <censored>
            password: "{{ host_password }}"
            cluster: "{{ rhv_cluster_name }}"
        logical_networks:
          - name: "{{ rhv_network_name }}"
            clusters:
              - name: "{{ rhv_cluster_name }}"
                assigned: true
                required: true
            vm_network: true
        host_networks:
          - name: "{{ host_1_name }}"
            state: present
            check: true
            save: true
            interface: "{{ rhv_host_network_interface }}"
            networks:
              - name: "{{ rhv_network_name }}"
          - name: "{{ host_2_name }}"
            state: present
            check: true
            save: true
            interface: "{{ rhv_host_network_interface }}"
            networks:
              - name: "{{ rhv_network_name }}"

5. Run the playbook

Result: Playbook finished with no errors. The second host was added and newly created, required logical network was attached to its network interface.

Comment 14 errata-xmlrpc 2020-09-23 16:15:11 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (RHV Engine and Host Common Packages 4.4.z [ovirt-4.4.2]), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:3820

