Bug 2103707 - [RFE] `cephadm_bootstrap` registers successful output to stderr instead of stdout
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 5.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 6.0
Assignee: Adam King
QA Contact: Vaibhav Mahajan
Docs Contact: Masauso Lungu
URL:
Whiteboard:
Depends On:
Blocks: 2126050
 
Reported: 2022-07-04 15:09 UTC by Vaibhav Mahajan
Modified: 2023-03-20 18:57 UTC
CC List: 7 users

Fixed In Version: ceph-17.2.3-1.el9cp
Doc Type: Bug Fix
Doc Text:
.`Cephadm` logging configurations are updated. Previously, `cephadm` scripts logged all output to `stderr`. As a result, `cephadm` bootstrap logs signifying a successful deployment were also sent to `stderr` instead of `stdout`. With this fix, the `cephadm` script has separate logging configurations for certain commands, and the configuration used for bootstrap logs only errors to `stderr`.
Clone Of:
Environment:
Last Closed: 2023-03-20 18:57:08 UTC
Embargoed:
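
The Doc Text above describes splitting the `cephadm` logging configuration so that only errors reach `stderr`. As an illustration of that kind of split (a minimal sketch only, not the actual `cephadm` code; the logger and handler names here are hypothetical), the following Python logging setup sends progress messages to `stdout` and reserves `stderr` for warnings and errors:

```
import logging
import sys

class MaxLevelFilter(logging.Filter):
    """Only pass records below the given level (keeps errors out of stdout)."""
    def __init__(self, max_level):
        super().__init__()
        self.max_level = max_level

    def filter(self, record):
        return record.levelno < self.max_level

# Hypothetical logger name; the real cephadm configuration differs.
logger = logging.getLogger('bootstrap-example')
logger.setLevel(logging.DEBUG)

stdout_handler = logging.StreamHandler(sys.stdout)
stdout_handler.setLevel(logging.INFO)
stdout_handler.addFilter(MaxLevelFilter(logging.WARNING))   # INFO and below only

stderr_handler = logging.StreamHandler(sys.stderr)
stderr_handler.setLevel(logging.WARNING)                    # warnings/errors only

logger.addHandler(stdout_handler)
logger.addHandler(stderr_handler)

logger.info('Bootstrap complete.')    # written to stdout
logger.error('Bootstrap failed.')     # written to stderr
```

With a configuration of this shape, the bootstrap progress and final summary would land in the registered `stdout`, while `stderr` stays reserved for actual failures.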




Links
Red Hat Issue Tracker RHCEPH-4669 (last updated 2022-07-04 15:15:48 UTC)
Red Hat Product Errata RHBA-2023:1360 (last updated 2023-03-20 18:57:48 UTC)

Description Vaibhav Mahajan 2022-07-04 15:09:04 UTC
Description of problem:
The `cephadm_bootstrap` module registers the details of the cephadm cluster to `stderr` instead of `stdout` on successful deployment.
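
For context, a minimal sketch of why the registered result mirrors whatever stream `cephadm` writes to. This is not the actual `cephadm_bootstrap.py`, just an illustration built on the standard `AnsibleModule` API: `run_command()` captures `stdout` and `stderr` separately, and the module passes them straight into the result, so when `cephadm` logs everything to `stderr`, the registered `stdout` stays empty even on success:

```
#!/usr/bin/python
# Illustrative only -- not the real cephadm_bootstrap module.
from ansible.module_utils.basic import AnsibleModule


def main():
    module = AnsibleModule(
        argument_spec=dict(
            mon_ip=dict(type='str', required=True),
        ),
    )

    cmd = ['cephadm', 'bootstrap', '--mon-ip', module.params['mon_ip']]
    # run_command() returns the two streams separately; whatever cephadm
    # prints to stderr ends up in `err`, even when rc == 0.
    rc, out, err = module.run_command(cmd)

    result = dict(changed=True, cmd=cmd, rc=rc, stdout=out, stderr=err)
    if rc != 0:
        module.fail_json(msg='cephadm bootstrap failed', **result)
    module.exit_json(**result)


if __name__ == '__main__':
    main()
```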

How reproducible: Always

Steps to Reproduce:

[root@rhceph5x-5node-0 cephadm-ansible]# rpm -qa | grep ansible
cephadm-ansible-1.7.0-1.el8cp.noarch
ansible-2.9.27-1.el8ae.noarch

Use the playbook below:

```
- name: Test 'cephadm_bootstrap' module
  hosts: localhost
  gather_facts: false
  become: true
  any_errors_fatal: true
  tasks:
    - name: Bootstrap with monitoring option set to `true`
      cephadm_bootstrap:
        mon_ip: 10.70.46.96 
        dashboard: false
        firewalld: false
        monitoring: true
      register: mon_node_details

    - name: DEBUG. mon_node_details
      debug: 
        msg: "{{ mon_node_details }}"

    - name: DEBUG. mon_node_details.stderr
      debug: 
        msg: "{{ mon_node_details.stderr }}"
```

Actual results:

```
**************************************************
2022-07-04 20:00:36,570 p=2409 u=root n=ansible | 1 plays in t01-monitoring.yaml
2022-07-04 20:00:36,572 p=2409 u=root n=ansible | PLAY [Test-01 'cephadm_bootstrap' module] ****************************************************************************************************************************************************
2022-07-04 20:00:36,577 p=2409 u=root n=ansible | META: ran handlers
2022-07-04 20:00:36,580 p=2409 u=root n=ansible | TASK [Bootstrap with monitoring option set to `true`] ****************************************************************************************************************************************
2022-07-04 20:00:36,580 p=2409 u=root n=ansible | Monday 04 July 2022  20:00:36 +0530 (0:00:00.021)       0:00:00.021 *********** 
2022-07-04 20:00:36,708 p=2415 u=root n=ansible | Using module file /usr/share/cephadm-ansible/library/cephadm_bootstrap.py
2022-07-04 20:00:36,709 p=2415 u=root n=ansible | Pipelining is enabled.
2022-07-04 20:00:36,709 p=2415 u=root n=ansible | <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
2022-07-04 20:00:36,709 p=2415 u=root n=ansible | <127.0.0.1> EXEC /bin/sh -c '/usr/libexec/platform-python && sleep 0'
2022-07-04 20:04:50,835 p=2409 u=root n=ansible | changed: [localhost] => changed=true 
  cmd:
  - cephadm
  - bootstrap
  - --mon-ip
  - 10.70.46.140
  - --skip-dashboard
  - --skip-firewalld
  delta: '0:04:13.942473'
  diff:
    after: ''
    before: ''
  end: '2022-07-04 20:04:50.796459'
  invocation:
    module_args:
      allow_fqdn_hostname: false
      allow_overwrite: false
      cluster_network: null
      dashboard: false
      dashboard_password: null
      dashboard_user: null
      docker: false
      firewalld: false
      fsid: null
      image: null
      mon_ip: 10.70.46.140
      monitoring: true
      pull: true
      registry_json: null
      registry_password: null
      registry_url: null
      registry_username: null
      ssh_config: null
      ssh_user: null
  rc: 0
  start: '2022-07-04 20:00:36.853986'
  stderr: |-
    Verifying podman|docker is present...
    Verifying lvm2 is present...
    Verifying time synchronization is in place...
    Unit chronyd.service is enabled and running
    Repeating the final host check...
    podman (/usr/bin/podman) version 4.0.2 is present
    systemctl is present
    lvcreate is present
    Unit chronyd.service is enabled and running
    Host looks OK
    Cluster fsid: df4e03bc-fba5-11ec-ac83-0050568aa418
    Verifying IP 10.70.46.140 port 3300 ...
    Verifying IP 10.70.46.140 port 6789 ...
    Mon IP `10.70.46.140` is in CIDR network `10.70.44.0/22`
    Mon IP `10.70.46.140` is in CIDR network `10.70.44.0/22`
    Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
    Pulling container image registry.redhat.io/rhceph/rhceph-5-rhel8:latest...
    Ceph version: ceph version 16.2.7-126.el8cp (fe0af61d104d48cb9d116cde6e593b5fc8c197e4) pacific (stable)
    Extracting ceph user uid/gid from container image...
    Creating initial keys...
    Creating initial monmap...
    Creating mon...
    Waiting for mon to start...
    Waiting for mon...
    mon is available
    Assimilating anything we can from ceph.conf...
    Generating new minimal ceph.conf...
    Restarting the monitor...
    Setting mon public_network to 10.70.44.0/22
    Wrote config to /etc/ceph/ceph.conf
    Wrote keyring to /etc/ceph/ceph.client.admin.keyring
    Creating mgr...
    Verifying port 9283 ...
    firewalld ready
    Enabling firewalld port 9283/tcp in current zone...
    Waiting for mgr to start...
    Waiting for mgr...
    mgr not available, waiting (1/15)...
    mgr not available, waiting (2/15)...
    mgr is available
    Enabling cephadm module...
    Waiting for the mgr to restart...
    Waiting for mgr epoch 5...
    mgr epoch 5 is available
    Setting orchestrator backend to cephadm...
    Generating ssh key...
    Wrote public SSH key to /etc/ceph/ceph.pub
    Adding key to root@localhost authorized_keys...
    Adding host rhceph5x-5node-0...
    Deploying mon service with default placement...
    Deploying mgr service with default placement...
    Deploying crash service with default placement...
    Deploying prometheus service with default placement...
    Deploying grafana service with default placement...
    Deploying node-exporter service with default placement...
    Deploying alertmanager service with default placement...
    Enabling client.admin keyring and conf on hosts with "admin" label
    Enabling autotune for osd_memory_target
    You can access the Ceph CLI as following in case of multi-cluster or non-default config:
  
            sudo /usr/sbin/cephadm shell --fsid df4e03bc-fba5-11ec-ac83-0050568aa418 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
  
    Or, if you are only running a single cluster on this host:
  
            sudo /usr/sbin/cephadm shell
  
    Please consider enabling telemetry to help improve Ceph:
  
            ceph telemetry on
  
    For more information see:
  
            https://docs.ceph.com/en/pacific/mgr/telemetry/
  
    Bootstrap complete.
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>
2022-07-04 20:04:50,853 p=2409 u=root n=ansible | TASK [DEBUG. mon_node_details] ***************************************************************************************************************************************************************
2022-07-04 20:04:50,853 p=2409 u=root n=ansible | Monday 04 July 2022  20:04:50 +0530 (0:04:14.272)       0:04:14.294 *********** 
2022-07-04 20:04:51,032 p=2409 u=root n=ansible | ok: [localhost] => 
  msg: ''
2022-07-04 20:04:51,033 p=2409 u=root n=ansible | META: ran handlers
2022-07-04 20:04:51,033 p=2409 u=root n=ansible | META: ran handlers
2022-07-04 20:04:51,035 p=2409 u=root n=ansible | PLAY RECAP ***********************************************************************************************************************************************************************************

```

`mon_node_details.stderr` contains the output rather than `mon_node_details.stdout`, even though the task completes successfully.

Expected results:
`mon_node_details.stdout` contains the bootstrap details on successful deployment.

Comment 1 RHEL Program Management 2022-07-04 15:09:12 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 19 errata-xmlrpc 2023-03-20 18:57:08 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 6.0 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:1360

