The TCP port for the Ceph exporter is opened during the Ansible deployment of the Ceph Dashboard
Previously, the Ansible deployment scripts did not open the TCP port for the Ceph exporter on all the nodes in the storage cluster. TCP port 9283 had to be opened manually on every node for the metrics to be available to the Ceph Dashboard. With this release, the Ansible deployment scripts for the Ceph Dashboard open the TCP port.
Description of problem:
* Ceph metrics dashboard receiving no data after storage node reboot
Version-Release number of selected component (if applicable): RHCS version 3.1
How reproducible: always
Steps to Reproduce:
1. Install ceph dashboard
2. Reboot all the storage nodes one by one
3. Observe dashboard for more than one day
Actual results:
* After rebooting all the storage nodes one by one, the dashboard shows no values.
Expected results:
* After rebooting all the storage nodes one by one, the dashboard should show the values.
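To confirm the symptom is a firewall problem rather than a Prometheus scrape configuration problem, it helps to check whether the exporter ports are reachable from the Prometheus host after each reboot. A minimal sketch (the hostname `ssd1` is taken from the session below as an illustration; substitute your own nodes):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# After a node reboot, 9283 (ceph-mgr exporter) and 9100 (node_exporter)
# should both be reachable from the Prometheus host. If 9283 shows as
# closed while 9100 is open, the firewall rule for 9283 was not restored.
for port in (9283, 9100):
    print(port, "open" if port_open("ssd1", port) else "closed")
```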
Additional info:
[root@ssd1 cephmetrics-ansible]# grep -R 9283 *
roles/ceph-mgr/tasks/configure_firewall.yml: - 9283/tcp
roles/ceph-prometheus/templates/prometheus.yml: - targets: ['{{ host }}:9283']
[root@ssd1 cephmetrics-ansible]# grep -R 9100 *
roles/ceph-node-exporter/tasks/configure_firewall.yml: - 9100/tcp
roles/ceph-node-exporter/tests/test_node_exporter.py: socket_spec = "tcp://0.0.0.0:9100"
roles/ceph-prometheus/templates/prometheus.yml: - targets: ['{{ host }}:9100']
roles/ceph-prometheus/templates/prometheus.yml: - targets: ['{{ host }}:9100']
-------
**Need to add 9283/tcp to /usr/share/cephmetrics-ansible/roles/ceph-node-exporter/tasks/configure_firewall.yml**
# vi /usr/share/cephmetrics-ansible/roles/ceph-node-exporter/tasks/configure_firewall.yml
- name: Open ports for node_exporter
  firewalld:
    port: "{{ item }}"
    zone: "{{ firewalld_zone }}"
    state: enabled
    immediate: true
    permanent: true
  with_items:
    - 9100/tcp
  when: "'enabled' in firewalld_status.stdout"
In the configure_firewall.yml file, only port 9100/tcp is opened. Port 9283/tcp needs to be added as well.
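A sketch of the corrected task, with 9283/tcp added alongside 9100/tcp as the report requests (the rest of the task is unchanged):

```yaml
- name: Open ports for node_exporter
  firewalld:
    port: "{{ item }}"
    zone: "{{ firewalld_zone }}"
    state: enabled
    immediate: true
    permanent: true
  with_items:
    - 9100/tcp
    - 9283/tcp   # Ceph exporter port, so mgr metrics remain reachable after a node reboot
  when: "'enabled' in firewalld_status.stdout"
```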
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHSA-2019:2538