Bug 1740283
Summary: | OC deployment with multiple subnets under default networks fails with: msg: 'ipaddr: unknown filter type: 172.120.3.0/24,172.117.3.0/24,172.118.3.0/24,172.119.3.0/24' | |
---|---|---|---
Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Alexander Chuzhoy <sasha>
Component: | Ceph-Ansible | Assignee: | Harald Jensås <hjensas>
Status: | CLOSED ERRATA | QA Contact: | Vasishta <vashastr>
Severity: | medium | Docs Contact: |
Priority: | high | |
Version: | 4.0 | CC: | aschoen, ceph-eng-bugs, ceph-qe-bugs, dsavinea, gcharot, gfidente, gmeno, hgurav, hjensas, johfulto, mburns, nthomas, pgrist, tserlin, vashastr
Target Milestone: | rc | |
Target Release: | 4.0 | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | ceph-ansible-4.0.2-1.el8cp | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2020-01-31 12:46:52 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | | |
Bug Blocks: | 1594251, 1601576 | |
Description
Alexander Chuzhoy
2019-08-12 14:54:31 UTC
Log: /var/lib/mistral/overcloud/ceph-ansible/ceph_ansible_command.log

```
2019-08-09 23:43:45,235 p=341466 u=root | TASK [ceph-facts : set_fact _monitor_address to monitor_address_block ipv4] ****
2019-08-09 23:43:45,235 p=341466 u=root | task path: /usr/share/ceph-ansible/roles/ceph-facts/tasks/set_monitor_address.yml:2
2019-08-09 23:43:45,235 p=341466 u=root | Friday 09 August 2019 23:43:45 +0000 (0:00:00.921) 0:01:17.061 *********
2019-08-09 23:43:45,487 p=341466 u=root | fatal: [overcloud-controller-0]: FAILED! => msg: 'ipaddr: unknown filter type: 172.120.3.0/24,172.117.3.0/24,172.118.3.0/24,172.119.3.0/24'
2019-08-09 23:43:45,543 p=341466 u=root | fatal: [overcloud-controller-1]: FAILED! => msg: 'ipaddr: unknown filter type: 172.120.3.0/24,172.117.3.0/24,172.118.3.0/24,172.119.3.0/24'
2019-08-09 23:43:45,753 p=341466 u=root | fatal: [overcloud-controller-2]: FAILED! => msg: 'ipaddr: unknown filter type: 172.120.3.0/24,172.117.3.0/24,172.118.3.0/24,172.119.3.0/24'
2019-08-09 23:43:45,852 p=341466 u=root | fatal: [overcloud-cephstorage1-0]: FAILED! => msg: 'ipaddr: unknown filter type: 172.120.3.0/24,172.117.3.0/24,172.118.3.0/24,172.119.3.0/24'
2019-08-09 23:43:45,911 p=341466 u=root | fatal: [overcloud-cephstorage1-1]: FAILED! => msg: 'ipaddr: unknown filter type: 172.120.3.0/24,172.117.3.0/24,172.118.3.0/24,172.119.3.0/24'
2019-08-09 23:43:45,971 p=341466 u=root | fatal: [overcloud-cephstorage2-0]: FAILED! => msg: 'ipaddr: unknown filter type: 172.120.3.0/24,172.117.3.0/24,172.118.3.0/24,172.119.3.0/24'
2019-08-09 23:43:46,030 p=341466 u=root | fatal: [overcloud-cephstorage2-1]: FAILED! => msg: 'ipaddr: unknown filter type: 172.120.3.0/24,172.117.3.0/24,172.118.3.0/24,172.119.3.0/24'
2019-08-09 23:43:46,089 p=341466 u=root | fatal: [overcloud-cephstorage3-0]: FAILED! => msg: 'ipaddr: unknown filter type: 172.120.3.0/24,172.117.3.0/24,172.118.3.0/24,172.119.3.0/24'
2019-08-09 23:43:46,157 p=341466 u=root | fatal: [overcloud-cephstorage3-1]: FAILED! => msg: 'ipaddr: unknown filter type: 172.120.3.0/24,172.117.3.0/24,172.118.3.0/24,172.119.3.0/24'
2019-08-09 23:43:46,158 p=341466 u=root | fatal: [overcloud-novacompute1-0]: FAILED! => msg: 'ipaddr: unknown filter type: 172.120.3.0/24,172.117.3.0/24,172.118.3.0/24,172.119.3.0/24'
2019-08-09 23:43:46,163 p=341466 u=root | fatal: [overcloud-novacompute1-1]: FAILED! => msg: 'ipaddr: unknown filter type: 172.120.3.0/24,172.117.3.0/24,172.118.3.0/24,172.119.3.0/24'
2019-08-09 23:43:46,269 p=341466 u=root | fatal: [overcloud-novacompute2-0]: FAILED! => msg: 'ipaddr: unknown filter type: 172.120.3.0/24,172.117.3.0/24,172.118.3.0/24,172.119.3.0/24'
2019-08-09 23:43:46,283 p=341466 u=root | fatal: [overcloud-novacompute2-1]: FAILED! => msg: 'ipaddr: unknown filter type: 172.120.3.0/24,172.117.3.0/24,172.118.3.0/24,172.119.3.0/24'
2019-08-09 23:43:46,348 p=341466 u=root | fatal: [overcloud-novacompute3-0]: FAILED! => msg: 'ipaddr: unknown filter type: 172.120.3.0/24,172.117.3.0/24,172.118.3.0/24,172.119.3.0/24'
2019-08-09 23:43:46,369 p=341466 u=root | fatal: [overcloud-novacompute3-1]: FAILED! => msg: 'ipaddr: unknown filter type: 172.120.3.0/24,172.117.3.0/24,172.118.3.0/24,172.119.3.0/24'
```
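The error message comes from Ansible's `ipaddr` filter itself: the filter takes a single query string, and a comma-joined set of CIDRs is not a recognized query, so it raises `unknown filter type`. A minimal standalone reproducer, outside of ceph-ansible (a hypothetical playbook; assumes the `netaddr` Python library that backs `ipaddr` is installed):

```yaml
# repro.yml -- hypothetical reproducer, not part of ceph-ansible.
# Run with: ansible-playbook repro.yml
- hosts: localhost
  gather_facts: false
  vars:
    addresses: ['172.120.3.147', '10.0.0.5']
  tasks:
    # A single CIDR is a valid ipaddr query: the filter returns the
    # addresses from the list that fall inside the subnet.
    - name: single subnet query succeeds
      debug:
        msg: "{{ addresses | ipaddr('172.120.3.0/24') }}"   # -> ['172.120.3.147']

    # A comma-joined string is treated as one unparseable query and fails
    # with the same error seen in the deployment log above.
    - name: comma-joined subnets fail
      debug:
        msg: "{{ addresses | ipaddr('172.120.3.0/24,172.117.3.0/24') }}"
```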
Code: /usr/share/ceph-ansible/roles/ceph-facts/tasks/set_monitor_address.yml

```yaml
- name: set_fact _monitor_address to monitor_address_block ipv4
  set_fact:
    _monitor_addresses: "{{ _monitor_addresses | default([]) + [{ 'name': item, 'addr': hostvars[item]['ansible_all_ipv4_addresses'] | ipaddr(hostvars[item]['monitor_address_block']) | first }] }}"
  with_items: "{{ groups.get(mon_group_name, []) }}"
  when:
    - "item not in _monitor_addresses | default([]) | selectattr('name', 'defined') | map(attribute='name') | list"
    - hostvars[item]['monitor_address_block'] is defined
    - hostvars[item]['monitor_address_block'] != 'subnet'
    - ip_version == 'ipv4'
```

When this task sets the `_monitor_addresses` fact, `ipaddr` expects `monitor_address_block` to be a single IP subnet in CIDR notation, not a list of CIDRs.

With the proposed ceph-ansible patch [1], the overcloud deployment using Mr. Chuzhoy's OSP director templates results in a successful Ceph deployment:

```
bash-4.4# ceph status
  cluster:
    id:     356c63c2-becf-11e9-b09c-525400abc84f
    health: HEALTH_WARN
            too few PGs per OSD (2 < min 30)

  services:
    mon: 3 daemons, quorum overcloud-controller-2,overcloud-controller-1,overcloud-controller-0 (age 13h)
    mgr: overcloud-controller-1(active, since 13h), standbys: overcloud-controller-0, overcloud-controller-2
    osd: 30 osds: 30 up (since 13h), 30 in (since 13h)

  data:
    pools:   5 pools, 80 pgs
    objects: 0 objects, 0 B
    usage:   30 GiB used, 270 GiB / 300 GiB avail
    pgs:     80 active+clean

bash-4.4# cat /etc/ceph/ceph.conf
# Please do not change this file directly since it is managed by Ansible and will be overwritten
[global]
cluster network = 172.120.4.0/24,172.117.4.0/24,172.118.4.0/24,172.119.4.0/24
fsid = 356c63c2-becf-11e9-b09c-525400abc84f
mon host = [v2:172.120.3.147:3300,v1:172.120.3.147:6789],[v2:172.120.3.97:3300,v1:172.120.3.97:6789],[v2:172.120.3.92:3300,v1:172.120.3.92:6789]
mon initial members = overcloud-controller-0,overcloud-controller-1,overcloud-controller-2
osd pool default crush rule = -1
osd_pool_default_pg_num = 16
osd_pool_default_pgp_num = 16
osd_pool_default_size = 1
public network = 172.120.3.0/24,172.117.3.0/24,172.118.3.0/24,172.119.3.0/24
rgw_keystone_accepted_admin_roles = ResellerAdmin
rgw_keystone_accepted_roles = Member, admin
rgw_keystone_admin_domain = default
rgw_keystone_admin_password = a0mBGm0M3MmVB0Uwa8TkNvv2N
rgw_keystone_admin_project = service
rgw_keystone_admin_user = swift
rgw_keystone_api_version = 3
rgw_keystone_implicit_tenants = true
rgw_keystone_revocation_interval = 0
rgw_keystone_url = http://172.120.1.171:5000
rgw_s3_auth_use_keystone = true
rgw_swift_account_in_url = true
rgw_swift_versioning_enabled = true
osd_memory_target = 4242538496
osd_memory_base = 2147483648
osd_memory_cache_min = 3195011072
```

[1] https://github.com/ceph/ceph-ansible/pull/4339

To verify this bug, please set multiple networks in the `public_network` parameter in the group_vars/all.yml file, like so:

```yaml
public_network: 172.120.3.0/24,172.117.3.0/24,172.118.3.0/24,172.119.3.0/24
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0312
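For reference, the shape of the fix is to stop handing the comma-joined string to `ipaddr` and instead test each subnet separately. A minimal sketch of one way to do that in the task above (illustrative only; the change actually merged via PR [1] may differ in structure and naming, and this sketch assumes `monitor_address_block` holds the same comma-separated string for all monitors, as in this deployment):

```yaml
# Illustrative sketch only -- not the exact patch from PR #4339.
# Split the comma-separated monitor_address_block and record, per monitor,
# the first interface address that falls inside any of the listed subnets.
- name: set_fact _monitor_address to monitor_address_block ipv4 (multi-subnet sketch)
  set_fact:
    _monitor_addresses: "{{ _monitor_addresses | default([]) + [{ 'name': item.0, 'addr': hostvars[item.0]['ansible_all_ipv4_addresses'] | ipaddr(item.1) | first }] }}"
  with_nested:
    - "{{ groups.get(mon_group_name, []) }}"
    - "{{ monitor_address_block.split(',') }}"
  when:
    # Skip hosts that already got an address from an earlier subnet.
    - item.0 not in _monitor_addresses | default([]) | map(attribute='name') | list
    # Only record a pair when the host actually has an address in this subnet,
    # so each ipaddr() call sees a single valid CIDR and never an empty match.
    - hostvars[item.0]['ansible_all_ipv4_addresses'] | ipaddr(item.1) | length > 0
    - ip_version == 'ipv4'
```

Each (host, subnet) pair is evaluated in turn, so every monitor is recorded exactly once, against the first listed subnet that contains one of its addresses.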