Bug 1869837 - Unable to remove mds using shrink-mds.yml
Summary: Unable to remove mds using shrink-mds.yml
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Ansible
Version: 4.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: z2
Target Release: 4.1
Assignee: Guillaume Abrioux
QA Contact: Ameena Suhani S H
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-08-18 18:37 UTC by Shrivaibavi Raghaventhiran
Modified: 2020-09-30 17:27 UTC (History)
9 users

Fixed In Version: ceph-ansible-4.0.29-1.el8cp, ceph-ansible-4.0.29-1.el7cp
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-09-30 17:26:56 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github ceph ceph-ansible pull 5683 0 None closed [skip ci] shrink-mds: use mds_to_kill_hostname instead 2020-12-04 05:21:02 UTC
Red Hat Product Errata RHBA-2020:4144 0 None None None 2020-09-30 17:27:30 UTC

Description Shrivaibavi Raghaventhiran 2020-08-18 18:37:07 UTC
Description of problem:
-------------------------
Unable to remove an MDS with shrink-mds.yml when the inventory (hosts file) uses FQDNs.

Tried 3 different commands, but none of them worked:
# ansible-playbook -e ireallymeanit=yes infrastructure-playbooks/shrink-mds.yml -i hosts -vv -e mds_to_kill=dell-r640-013

# ansible-playbook -e ireallymeanit=yes infrastructure-playbooks/shrink-mds.yml -i hosts -vv -e mds_to_kill=dell-r640-013.dsal.lab.eng.rdu2.redhat.com

# ansible-playbook -e ireallymeanit=yes infrastructure-playbooks/shrink-mds.yml -i hosts -vv -e mds_to_kill=ceph-mds-dell-r640-013

Errors pasted at http://pastebin.test.redhat.com/894482


Version-Release number of selected component (if applicable):
[root@dell-r640-012 /]# ceph version
ceph version 14.2.8-89.el8cp (9ab115d618c72e7d9227441ec25ceb1487c76fb8) nautilus (stable)
[root@dell-r640-012 /]# ceph versions
{
    "mon": {
        "ceph version 14.2.8-89.el8cp (9ab115d618c72e7d9227441ec25ceb1487c76fb8) nautilus (stable)": 2
    },
    "mgr": {
        "ceph version 14.2.8-89.el8cp (9ab115d618c72e7d9227441ec25ceb1487c76fb8) nautilus (stable)": 1
    },
    "osd": {
        "ceph version 14.2.8-89.el8cp (9ab115d618c72e7d9227441ec25ceb1487c76fb8) nautilus (stable)": 2
    },
    "mds": {
        "ceph version 14.2.8-89.el8cp (9ab115d618c72e7d9227441ec25ceb1487c76fb8) nautilus (stable)": 1
    },
    "overall": {
        "ceph version 14.2.8-89.el8cp (9ab115d618c72e7d9227441ec25ceb1487c76fb8) nautilus (stable)": 6
    }
}

How reproducible:
Many times


Steps to Reproduce:
1. Use a hosts file that contains FQDNs
2. Run the shrink-mds.yml playbook to remove an MDS
3. Check whether the MDS was removed

Actual results:
The MDS was not removed.


Expected results:
The MDS should have been removed from the cluster.
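A plausible explanation (an assumption on my part, not confirmed in this report) is that the MDS daemon registers with the cluster under its short hostname, while the playbook matched against the inventory name, so an FQDN inventory entry never matches the daemon name. A minimal Python sketch of that mismatch, using the hostnames from this report:

```python
def short_hostname(inventory_name: str) -> str:
    """Strip the domain part, mimicking the short hostname an MDS registers with."""
    return inventory_name.split(".")[0]

# Name the daemon registers with the cluster (short hostname).
registered_mds = "dell-r640-013"

# A naive string match against the FQDN inventory name fails...
print(registered_mds == "dell-r640-013.dsal.lab.eng.rdu2.redhat.com")  # False

# ...while matching on the derived short hostname succeeds.
print(registered_mds == short_hostname("dell-r640-013.dsal.lab.eng.rdu2.redhat.com"))  # True
```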

Workaround:
------------
Instead of using FQDNs as hostnames in the hosts file, use short hostnames, e.g.:

#dell-r640-013.dsal.lab.eng.rdu2.redhat.com monitor_interface=em1
dell-r640-013 monitor_interface=em1

and run

ansible-playbook  -e ireallymeanit=yes infrastructure-playbooks/shrink-mds.yml -i hosts -vv -e mds_to_kill=dell-r640-013

The removal of the MDS should then succeed.
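The linked upstream fix ("shrink-mds: use mds_to_kill_hostname instead", ceph-ansible pull 5683) suggests the playbook now derives the daemon name from the target node's facts rather than from the raw inventory name. A hedged sketch of that pattern (task and variable wording assumed, not copied from the PR):

```yaml
# Sketch only: resolve the short hostname of the target node from its
# gathered facts, so mds_to_kill may be passed as a short name or an FQDN.
- name: set mds_to_kill_hostname
  set_fact:
    mds_to_kill_hostname: "{{ hostvars[mds_to_kill]['ansible_hostname'] }}"
```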

Additional info:

Comment 10 errata-xmlrpc 2020-09-30 17:26:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 4.1 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4144

