Bug 1722394 - [Doc RFE]: Document use of custom ceph cluster names for cephmetrics-ansible
Summary: [Doc RFE]: Document use of custom ceph cluster names for cephmetrics-ansible
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Documentation
Version: 3.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: 4.0
Assignee: John Brier
QA Contact: ceph-qe-bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-06-20 09:02 UTC by Matthias Muench
Modified: 2022-02-21 18:29 UTC (History)
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-06-26 15:04:51 UTC
Embargoed:



Description Matthias Muench 2019-06-20 09:02:06 UTC
Description of problem:
The documentation does not mention that setting the cluster name is a necessary step when installing the dashboard on a cluster with a custom cluster name. As a result, the installation fails because /etc/ceph/ceph.conf is missing.
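The failure follows from how Ceph resolves its configuration file: the path is derived from the cluster name, so tooling that assumes the default name "ceph" looks for a file that does not exist on a cluster deployed as ceph32c. A minimal sketch of the path derivation (cluster name taken from this report):

```shell
# Ceph builds its config path as /etc/ceph/<cluster>.conf.
# When the tooling assumes the default name "ceph" but the cluster
# was deployed as "ceph32c", the derived path does not exist.
cluster_name="ceph32c"
conf_path="/etc/ceph/${cluster_name}.conf"
echo "${conf_path}"        # path the tools should open
echo "/etc/ceph/ceph.conf" # path the playbook actually tried to open
```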

Version-Release number of selected component (if applicable):
cephmetrics-ansible-2.0.2-1.el7cp.x86_64
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html-single/container_guide/index#installing-the-red-hat-ceph-storage-dashboard-container

How reproducible:
Always, when using a custom cluster name.


Steps to Reproduce:
1. Install a Ceph cluster using a custom cluster name (here: ceph32c)
2. Add a dashboard section to /etc/ansible/hosts
3. cd /usr/share/cephmetrics-ansible; ansible-playbook -v playbook.yml

Actual results:
TASK [ceph-mgr : Enable mgr prometheus module] ********************************************************************
fatal: [ceph32c-osd1]: FAILED! => {"changed": true, "cmd": ["docker", "exec", "ceph-mgr-ceph32c-osd1", "ceph", "--cluster", "ceph", "mgr", "module", "enable", "prometheus"], "delta": "0:00:00.205653", "end": "2019-06-18 05:12:17.193327", "msg": "non-zero return code", "rc": 1, "start": "2019-06-18 05:12:16.987674", "stderr": "2019-06-18 05:12:17.178216 7ff1e2768700 -1 Errors while parsing config file!\n2019-06-18 05:12:17.178239 7ff1e2768700 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory\n2019-06-18 05:12:17.178240 7ff1e2768700 -1 parse_file: cannot open ~/.ceph/ceph.conf: (2) No such file or directory\n2019-06-18 05:12:17.178241 7ff1e2768700 -1 parse_file: cannot open ceph.conf: (2) No such file or directory\nError initializing cluster client: ObjectNotFound('error calling conf_read_file',)", "stderr_lines": ["2019-06-18 05:12:17.178216 7ff1e2768700 -1 Errors while parsing config file!", "2019-06-18 05:12:17.178239 7ff1e2768700 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory", "2019-06-18 05:12:17.178240 7ff1e2768700 -1 parse_file: cannot open ~/.ceph/ceph.conf: (2) No such file or directory", "2019-06-18 05:12:17.178241 7ff1e2768700 -1 parse_file: cannot open ceph.conf: (2) No such file or directory", "Error initializing cluster client: ObjectNotFound('error calling conf_read_file',)"], "stdout": "", "stdout_lines": []}
 [WARNING]: Could not create retry file '/usr/share/cephmetrics-ansible/playbook.retry'.         [Errno 13]
Permission denied: u'/usr/share/cephmetrics-ansible/playbook.retry'


PLAY RECAP ********************************************************************************************************
ceph32c-metrics            : ok=1    changed=0    unreachable=0    failed=0   
ceph32c-osd1               : ok=11   changed=1    unreachable=0    failed=1   
ceph32c-osd2               : ok=1    changed=0    unreachable=0    failed=0   
ceph32c-osd3               : ok=1    changed=0    unreachable=0    failed=0   
ceph32c-osd4               : ok=1    changed=0    unreachable=0    failed=0   
ceph32c-osd5               : ok=1    changed=0    unreachable=0    failed=0  


Expected results:
TASK [ceph-mgr : Enable mgr prometheus module] ********************************************************************
changed: [ceph32c-osd1] => {"changed": true, "cmd": ["docker", "exec", "ceph-mgr-ceph32c-osd1", "ceph", "--cluster", "ceph32c", "mgr", "module", "enable", "prometheus"], "delta": "0:00:00.644984", "end": "2019-06-18 05:40:45.196568", "rc": 0, "start": "2019-06-18 05:40:44.551584", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}

Additional info:
The needed change is to set the cluster name in the group_vars/all.yml configuration:
[ansible@ceph32c-metrics cephmetrics-ansible]$ sudo cp group_vars/all.yml.sample group_vars/all.yml
[ansible@ceph32c-metrics cephmetrics-ansible]$ sudo vi all.yml
[ansible@ceph32c-metrics cephmetrics-ansible]$ sudo vi group_vars/all.yml
[ansible@ceph32c-metrics cephmetrics-ansible]$ cat group_vars/all.yml
dummy:

cluster_name: ceph32c

# containerized: true

# Set the backend options, mgr+prometheus or cephmetrics+graphite
#backend:
#  metrics: mgr  # mgr, cephmetrics
#  storage: prometheus  # prometheus, graphite

# Turn on/off devel_mode
#devel_mode: true

# Set grafana admin user and password
# You need to change these in the web UI on an already deployed machine, first
# New deployments work fine
#grafana:
#  admin_user: admin
#  admin_password: admin
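With cluster_name set, rerunning the playbook should produce the expected result shown above. A hedged sketch of that rerun, assuming the default cephmetrics-ansible install path used throughout this report:

```shell
# Sanity-check that the override is in place, then rerun the playbook.
# Paths and the expected cluster name come from this report.
cd /usr/share/cephmetrics-ansible
grep '^cluster_name:' group_vars/all.yml   # expect: cluster_name: ceph32c
ansible-playbook -v playbook.yml
```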

Comment 4 John Brier 2019-06-26 15:04:51 UTC
I have updated the Release Notes so they explain that custom cluster names are not actually supported:

https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3.2/html/release_notes/bug-fixes#the_literal_ceph_volume_literal_utility_2

