I was unable to reproduce this using the add label command on latest master code:

[ceph: root@vm-00 /]# ceph -v
ceph version 17.0.0-1275-g5e197a21 (5e197a21e61b6d7e4f41a330cd63bc787164937d) quincy (dev)
[ceph: root@vm-00 /]# ceph orch host add vm-02 --labels 'osd'
Added host 'vm-02'
[ceph: root@vm-00 /]# ceph orch host ls
HOST   ADDR   LABELS  STATUS
vm-00  vm-00
vm-01  vm-01
vm-02  vm-02  osd
[ceph: root@vm-00 /]# ceph orch host label add vm-02 'osd'
Added label osd to host vm-02
[ceph: root@vm-00 /]# ceph orch host ls
HOST   ADDR   LABELS  STATUS
vm-00  vm-00
vm-01  vm-01
vm-02  vm-02  osd

To check one thing: what happens when you try to remove a duplicate label like "osd"? Do they all disappear, or only one? I noticed that a command like

ceph orch host add vm-02 --labels 'mon osd osd mgr osd mon'

would cause this:

[ceph: root@vm-00 /]# ceph orch host ls
HOST   ADDR   LABELS                   STATUS
vm-00  vm-00
vm-01  vm-01
vm-02  vm-02  mon osd osd mgr osd mon

which LOOKS like duplicate labels, but in reality "mon osd osd mgr osd mon" is a single label (whitespace is allowed as part of a label). You can tell because removing "osd" doesn't work:

[ceph: root@vm-00 /]# ceph orch host label rm vm-02 osd
Removed label osd from host vm-02
[ceph: root@vm-00 /]# ceph orch host ls
HOST   ADDR   LABELS                   STATUS
vm-00  vm-00
vm-01  vm-01
vm-02  vm-02  mon osd osd mgr osd mon

It's possible you have the same situation going on with your nodes (and maybe the real issue is that labels should be clearly separated when printed so you can tell where one label ends and the next begins, or maybe we just shouldn't allow whitespace in labels at all). If you think you are definitely not hitting that and there really are duplicate labels, could you please provide the exact set of commands you used to get the duplicate labels on a host, as I've been unable to make it happen myself.
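For what it's worth, a guard in the Python mgr module could rule out both failure modes at once: rejecting whitespace inside a label, and making label addition idempotent. The class and method names below are mine for illustration, not cephadm's actual API; this is just a minimal sketch of the idea:

class HostLabels:
    """Illustrative container for a host's label list (not cephadm code)."""

    def __init__(self, labels=None):
        self.labels = list(labels or [])

    @staticmethod
    def validate(label):
        # Reject empty labels and labels containing whitespace, so that
        # 'mon osd osd mgr osd mon' cannot masquerade as six labels.
        if not label or any(ch.isspace() for ch in label):
            raise ValueError('invalid label: %r' % label)

    def add(self, label):
        self.validate(label)
        # Append only if not already present, so repeated
        # 'ceph orch host label add' calls stay idempotent.
        if label not in self.labels:
            self.labels.append(label)

    def rm(self, label):
        if label in self.labels:
            self.labels.remove(label)

h = HostLabels(['osd'])
h.add('osd')                 # no-op: 'osd' is already present
print(h.labels)              # ['osd']
try:
    h.add('mon osd osd')     # rejected: whitespace inside a label
except ValueError as e:
    print(e)                 # invalid label: 'mon osd osd'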
@Sunil Thank you for your detailed description! I was able to reproduce the issue with your example.

upstream tracker: https://tracker.ceph.com/issues/49626
upstream PR: https://github.com/ceph/ceph/pull/39857
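The PR diff isn't pasted here, but conceptually a fix for genuine duplicates has to deduplicate the stored label list while preserving order, so that ceph orch host ls output stays stable. As a sketch of that idea only (the helper name is mine, not taken from the PR):

def dedup_labels(labels):
    # Order-preserving de-duplication: dict keys keep insertion order
    # in Python 3.7+, so repeats are dropped without reshuffling the
    # labels that remain.
    return list(dict.fromkeys(labels))

print(dedup_labels(['mon', 'osd', 'osd', 'mgr', 'osd', 'mon']))
# -> ['mon', 'osd', 'mgr']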
The upstream PR has been merged. Now just waiting for the backport to Pacific and for the change to eventually reach the downstream image.
Sage backported PR 39857 to Pacific in https://github.com/ceph/ceph/pull/40135. This will be in the next weekly rebase I build downstream (March 22nd).
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294