Bug 1963066 - filestore to bluestore migration fails due to undesired filter match
Summary: filestore to bluestore migration fails due to undesired filter match
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Ansible
Version: 4.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 4.2z2
Assignee: Guillaume Abrioux
QA Contact: Ameena Suhani S H
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-05-21 10:53 UTC by broskos
Modified: 2021-06-15 17:14 UTC
CC: 8 users

Fixed In Version: ceph-ansible-4.0.56-1.el8cp, ceph-ansible-4.0.56-1.el7cp
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-06-15 17:14:17 UTC
Embargoed:


Attachments


Links
Github ceph ceph-ansible pull 6551 (open): fs2bs: fix wrong filter when setting osd_ids (last updated 2021-05-25 14:07:35 UTC)
Red Hat Product Errata RHSA-2021:2445 (last updated 2021-06-15 17:14:35 UTC)

Description broskos 2021-05-21 10:53:03 UTC
Description of problem:
filestore to bluestore playbook fails due to undesired filter match.

When selecting OSDs to be purged, the returned list contains OSDs that belong to hosts other than the target host. Those OSDs are still up and running, so the purge block fails.

-----------------------------------------
        - name: set_fact osd_ids
          set_fact:
            osd_ids: "{{ osd_ids | default([]) + [item] }}"
          with_items:
            - "{{ ((osd_tree.stdout | default('{}') | from_json).nodes | selectattr('name', 'match', inventory_hostname) | map(attribute='children') | list) }}"

        - name: purge osd(s) from the cluster
          ceph_osd:
            ids: "{{ item }}"
            cluster: "{{ cluster }}"
            state: purge
          environment:
            CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
            CEPH_CONTAINER_BINARY: "{{ container_binary }}"
          run_once: true
          delegate_to: "{{ groups[mon_group_name][0] }}"
          with_items: "{{ osd_ids }}"
---------------------------------------
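
For reference, `osd_tree` above is presumably registered from a `ceph osd tree -f json` command earlier in the playbook. Below is a minimal sketch of the parsed structure the filter walks, rendered as YAML with invented hostnames and OSD ids (not taken from any real cluster):

-----------------------------------------
# Abridged, illustrative sketch of the parsed `osd_tree` data
# (hostnames and OSD ids are made up for this example).
# Host nodes carry their OSD ids in "children", which is what
# map(attribute='children') collects.
nodes:
  - { id: -3, name: computehci-1,  type: host, children: [0, 3] }
  - { id: -5, name: computehci-10, type: host, children: [1, 4] }
  - { id: 0,  name: osd.0, type: osd, status: up }
stray: []
-----------------------------------------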

The problem is in the "set_fact osd_ids" task, inside the with_items loop.
This snippet: selectattr('name', 'match', inventory_hostname)

returns matches for inventory hostnames other than the current target.  
Specifically, when limiting the playbook run to host "computehci-1", the match test above will return OSDs from:
computehci-1
computehci-10
computehci-11

(and so on if the cluster is large enough)

The desired behavior is to only return the list of OSDs on the target host "computehci-1".

We don't want a match filter here; we want an equalto filter. I changed the filter as follows:
selectattr('name', 'equalto', inventory_hostname)

and now the playbook runs as expected.
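
The difference is that the 'match' test performs a regular-expression match anchored at the start of the string, while 'equalto' requires exact equality. A minimal, standalone debug task that shows the two behaviours side by side (hostnames and OSD ids are invented for illustration, not taken from the playbook):

-----------------------------------------
- name: demo of 'match' vs 'equalto' with similar hostnames (illustrative data)
  vars:
    nodes:
      - { name: computehci-1,  type: host, children: [0, 3] }
      - { name: computehci-10, type: host, children: [1, 4] }
      - { name: computehci-11, type: host, children: [2, 5] }
  debug:
    msg:
      # 'match' is a regex test anchored at the start of the string,
      # so every "computehci-1*" hostname matches "computehci-1".
      with_match: "{{ nodes | selectattr('name', 'match', 'computehci-1') | map(attribute='children') | list }}"
      # 'equalto' requires exact equality, so only the target host matches.
      with_equalto: "{{ nodes | selectattr('name', 'equalto', 'computehci-1') | map(attribute='children') | list }}"
-----------------------------------------

Against this data, with_match collects the children lists of all three hosts, while with_equalto collects only those of computehci-1, which is the behaviour the migration playbook needs.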


How reproducible:
This will occur whenever one hostname is a prefix of another (for example "computehci-1" and "computehci-10"), as described above.

Comment 9 errata-xmlrpc 2021-06-15 17:14:17 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 4.2 Security and Bug Fix Update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2445

