Bug 1920979 - cephadm allows attaching duplicate labels to a cluster node
Summary: cephadm allows attaching duplicate labels to a cluster node
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 5.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 5.0
Assignee: Adam King
QA Contact: Sunil Kumar Nagaraju
Docs Contact: Karen Norteman
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-01-27 10:37 UTC by Sunil Kumar Nagaraju
Modified: 2021-08-30 08:28 UTC
CC List: 2 users

Fixed In Version: ceph-16.1.0-997.el8cp
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-08-30 08:27:54 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHCEPH-1224 0 None None None 2021-08-30 00:17:46 UTC
Red Hat Product Errata RHBA-2021:3294 0 None None None 2021-08-30 08:28:10 UTC

Comment 1 Adam King 2021-03-04 19:20:05 UTC
I was unable to reproduce this using the add label command on latest master code:

[ceph: root@vm-00 /]# ceph -v  
ceph version 17.0.0-1275-g5e197a21 (5e197a21e61b6d7e4f41a330cd63bc787164937d) quincy (dev)
[ceph: root@vm-00 /]# ceph orch host add vm-02 --labels 'osd'
Added host 'vm-02'
[ceph: root@vm-00 /]# ceph orch host ls
HOST   ADDR   LABELS  STATUS  
vm-00  vm-00                  
vm-01  vm-01                  
vm-02  vm-02  osd             
[ceph: root@vm-00 /]# ceph orch host label add vm-02 'osd'
Added label osd to host vm-02
[ceph: root@vm-00 /]# ceph orch host ls
HOST   ADDR   LABELS  STATUS  
vm-00  vm-00                  
vm-01  vm-01                  
vm-02  vm-02  osd             


To check one thing: what happens when you try to remove a duplicate label like "osd"? Do they all disappear, or only one? I noticed that a command like

ceph orch host add vm-02 --labels 'mon osd osd mgr osd mon'

would cause this

[ceph: root@vm-00 /]# ceph orch host ls
HOST   ADDR   LABELS                   STATUS  
vm-00  vm-00                                   
vm-01  vm-01                                   
vm-02  vm-02  mon osd osd mgr osd mon        


which LOOKS like duplicate labels, but "mon osd osd mgr osd mon" is actually a single label (whitespace is allowed as part of a label). You can tell because removing "osd" doesn't work:

[ceph: root@vm-00 /]# ceph orch host label rm vm-02 osd
Removed label osd from host vm-02
[ceph: root@vm-00 /]# ceph orch host ls
HOST   ADDR   LABELS                   STATUS  
vm-00  vm-00                                   
vm-01  vm-01                                   
vm-02  vm-02  mon osd osd mgr osd mon      



It's possible you have the same situation going on with your nodes. (Maybe the real issue is that labels should be clearly separated when printed, so you can tell where one label ends and the next begins, or maybe we just shouldn't allow whitespace in labels at all.)
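
For illustration, a minimal Python sketch (not the actual cephadm code) of why the removal above is a no-op when the whole quoted string was stored as one label:

host_labels = ['mon osd osd mgr osd mon']   # one label containing spaces, not six labels

def remove_label(labels, label):
    # removing 'osd' does nothing because no element equals 'osd' exactly
    return [l for l in labels if l != label]

print(remove_label(host_labels, 'osd'))     # ['mon osd osd mgr osd mon'] -- unchanged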

If you are sure that is not what's happening and there really are duplicate labels, could you please provide the exact set of commands you used to get the duplicate labels on a host? I've been unable to make it happen myself.

Comment 3 Adam King 2021-03-05 15:19:41 UTC
@Sunil Thank you for your detailed description! I was able to reproduce the issue with your example.

upstream tracker: https://tracker.ceph.com/issues/49626
upstream PR: https://github.com/ceph/ceph/pull/39857
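
For context, here is a minimal sketch of the kind of guard such a fix needs (an illustration only, not the code from the PR above): adding a label the host already has should be a no-op instead of appending a second copy.

def add_label(labels, label):
    # only append the label if it isn't already present on the host
    if label not in labels:
        labels.append(label)
    return labels

labels = ['osd']
add_label(labels, 'osd')   # duplicate request, ignored
add_label(labels, 'mon')
print(labels)              # ['osd', 'mon']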

Comment 4 Adam King 2021-03-12 15:00:15 UTC
Upstream PR was merged. Just waiting for backport to Pacific and for the change to eventually reach the downstream image.

Comment 5 Ken Dreyer (Red Hat) 2021-03-19 18:14:57 UTC
Sage backported PR 39857 to pacific in https://github.com/ceph/ceph/pull/40135. This will be in the next weekly rebase I build downstream (March 22nd).

Comment 10 errata-xmlrpc 2021-08-30 08:27:54 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294

