Bug 1346188 - Cannot import Ceph cluster with traditional monitor names (e.g. mon.a)
Summary: Cannot import Ceph cluster with traditional monitor names (e.g. mon.a)
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Storage Console
Classification: Red Hat Storage
Component: Ceph
Version: 2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 2
Assignee: Shubhendu Tripathi
QA Contact: Martin Kudlej
URL:
Whiteboard:
Depends On: 1346207 1362510
Blocks:
 
Reported: 2016-06-14 08:44 UTC by Daniel Horák
Modified: 2018-11-19 05:30 UTC
CC List: 1 user

Fixed In Version: rhscon-core-0.0.34-1.el7scon.x86_64 rhscon-ceph-0.0.33-1.el7scon.x86_64 rhscon-ui-0.0.47-1.el7scon.noarch
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-11-19 05:30:59 UTC
Embargoed:


Attachments
Import Cluster - summary page with error (deleted)
2016-06-14 08:44 UTC, Daniel Horák

Description Daniel Horák 2016-06-14 08:44:16 UTC
Description of problem:
  USM expects that the monitor names (the part after "mon.") are the same as the hostnames (without domain) - for example mon.jenkins-usm1-mon3 - but according to the Ceph documentation[1], a monitor name can be nearly anything; by default, monitors are named with single letters (mon.a, mon.b, mon.c, ...):
    ~~~~~~~~~~~~
    Traditionally, monitors have been named with single letters (a, b, c, ...), but you are free to define the id as you see fit.
    ~~~~~~~~~~~~

  As a result, it is currently not possible to import a cluster whose monitors use the traditional names mon.a, mon.b, ...

Version-Release number of selected component (if applicable):
  ceph-ansible-1.0.5-19.el7scon.noarch
  ceph-deploy-1.5.33-0.noarch
  ceph-installer-1.0.11-1.el7scon.noarch
  libcephfs1-0.94.7-0.el7.x86_64
  rhscon-ceph-0.0.20-1.el7scon.x86_64
  rhscon-core-0.0.21-1.el7scon.x86_64
  rhscon-ui-0.0.34-1.el7scon.noarch
  
  calamari-server-1.4.0-0.12.rc15.el7cp.x86_64
  ceph-10.2.1-11.el7cp.x86_64
  ceph-base-10.2.1-11.el7cp.x86_64
  ceph-common-10.2.1-11.el7cp.x86_64
  ceph-mds-10.2.1-11.el7cp.x86_64
  ceph-mon-10.2.1-11.el7cp.x86_64
  ceph-osd-10.2.1-11.el7cp.x86_64
  ceph-selinux-10.2.1-11.el7cp.x86_64
  libcephfs1-10.2.1-11.el7cp.x86_64
  python-cephfs-10.2.1-11.el7cp.x86_64
  rhscon-agent-0.0.9-1.el7scon.noarch

How reproducible:
  Probably 100%

Steps to Reproduce:
1. Prepare a Ceph cluster without using USM (with monitors named mon.a, mon.b and mon.c).
   NOTE: you have to set up the calamari user "admin" with password "admin" because of bug 1345983.
2. Prepare the USM server.
3. Try to import the Ceph cluster created in the first step into the USM server:
  * on the first page of the import cluster wizard ("Select Monitor Host"), choose the monitor where calamari is also running.
  * click to continue.

Actual results:
  It is not possible to import the cluster, because it reports:

    ~~~~~~~~~~~~
    One or more of the hosts in this cluster cannot be found. Please ensure that Ceph 2.0 or greater is installed on this host and that it is available on the network. Click Refresh to search for this host again.
    ~~~~~~~~~~~~

  and also

    ~~~~~~~~~~~~
    Hostname 	      Type 	Status
    a 	            mon 	Not Found
    rhosd2.smcsrv 	osd 	Available
    rhosd1.smcsrv 	osd 	Available 
    ~~~~~~~~~~~~

  (see the attachment)

Expected results:
  The cluster should be imported properly even when the monitors use custom names.

Additional info:
  
  Related part of /etc/ceph/ceph.conf:

  ~~~~~~~~~~~~
  [mon.a]
  mon addr = 172.18.173.21:6789
  host = rhclient0
  mon data = /tmp/cbt/ceph/mon.$id
  ~~~~~~~~~~~~

  And for reference, part of the Ceph configuration from a cluster created by USM:

  ~~~~~~~~~~~~
  [mon]
  [mon.jenkins-usm1-mon1]
  host = jenkins-usm1-mon1
  # we need to check if monitor_interface is defined in the inventory per host or if it's set in a group_vars file
  mon addr = 172.16.176.29
  ~~~~~~~~~~~~

  [1] http://docs.ceph.com/docs/jewel/rados/operations/add-or-rm-mons/
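
  For reference, one quick way to see which monitor IDs a running cluster actually uses (single letters vs. hostnames) is to dump the monitor map; a minimal sketch, run on one of the monitor/admin nodes (exact paths and output depend on the cluster):

  ~~~~~~~~~~~~
  # list the monitor IDs known to the cluster; on a cluster like the one
  # above this shows mons named "a", "b", "c" rather than the hostnames
  ceph mon dump

  # the same information from the per-monitor sections of the config file
  grep -A 3 '^\[mon\.' /etc/ceph/ceph.conf
  ~~~~~~~~~~~~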

Comment 2 Shubhendu Tripathi 2016-06-14 09:26:55 UTC
It's actually a bug in calamari, because the calamari API /api/v2/cluster/<fsid>/server returns incorrect FQDN values if custom mon names are defined while creating the cluster from the CLI.

For example, if the mon names are mon.a, mon.b, mon.c, ..., the calamari API returns the FQDN values as "a", "b", "c", ...

Raised BZ#1346207 and marked this bug as dependent on it.
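
For reference, the mismatch is visible directly in that calamari API response; a minimal sketch, assuming calamari runs on the selected monitor host, accepts basic authentication, and uses the admin/admin credentials from bug 1345983 (the host name is illustrative):

~~~~~~~~~~~~
# Illustrative only - adjust host and credentials for your setup.
CALAMARI="http://rhclient0"
FSID=$(ceph fsid)

# With traditional mon names, the monitor entries in this server listing
# come back with "a", "b", "c" instead of the hosts' real FQDNs.
curl -s -u admin:admin "${CALAMARI}/api/v2/cluster/${FSID}/server" | python -m json.tool
~~~~~~~~~~~~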

Comment 3 Martin Kudlej 2016-08-08 11:15:53 UTC
Tested with 
ceph-ansible-1.0.5-32.el7scon.noarch
ceph-installer-1.0.14-1.el7scon.noarch
rhscon-ceph-0.0.40-1.el7scon.x86_64
rhscon-core-0.0.41-1.el7scon.x86_64
rhscon-core-selinux-0.0.41-1.el7scon.noarch
rhscon-ui-0.0.52-1.el7scon.noarch
and it works.

