Bug 1295806 - too many warnings coming from ceph-deploy
Summary: too many warnings coming from ceph-deploy
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Installer
Version: 1.3.1
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: medium
Target Milestone: rc
Target Release: 1.3.4
Assignee: Christina Meno
QA Contact: ceph-qe-bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-01-05 14:03 UTC by Ben England
Modified: 2018-02-20 20:50 UTC (History)
8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-02-20 20:50:29 UTC
Embargoed:


Attachments

Description Ben England 2016-01-05 14:03:19 UTC
Description of problem:

ceph-deploy emits messages at severity WARNING that are really informational. This makes it hard to spot when a real warning or error occurs. I suggest reviewing all messages and assigning each one the correct log level.

Version-Release number of selected component (if applicable):

RHCS 1.3.1

How reproducible:

every time

Steps to Reproduce:
1. ceph-deploy disk activate
2.
3.

Actual results:

see bottom

Expected results:

Most of these messages should not be warnings, and by default only warnings and errors should be output.  If a user explicitly requests INFO messages with --loglevel=info or something like that, then fine.  People don't live to read logs; they assume your software works and they want to get on with their job.
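The behavior requested above could be sketched with Python's standard logging module. This is a hypothetical illustration (the flag name and logger are assumptions, not ceph-deploy's actual code): only warnings and errors print by default, and --loglevel=info opts in to the verbose output.

```python
# Hypothetical sketch: gate INFO-level messages behind a --loglevel flag
# so that only WARNING and above are shown unless the user asks for more.
import argparse
import logging


def build_logger(argv):
    parser = argparse.ArgumentParser()
    parser.add_argument("--loglevel", default="WARNING",
                        help="minimum severity to print (e.g. info, warning)")
    args = parser.parse_args(argv)
    logger = logging.getLogger("deploy-sketch")
    # Map the flag string ("info", "warning", ...) to a logging level.
    logger.setLevel(getattr(logging, args.loglevel.upper()))
    return logger


if __name__ == "__main__":
    log = build_logger(["--loglevel", "warning"])
    log.info("Running command: /usr/bin/mount ...")  # suppressed by default
    log.warning("disk already active")               # still emitted
```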

Additional info:

example:

[hp60ds4][WARNIN] INFO:ceph-disk:Running command: /sbin/blkid -p -s TYPE -ovalue -- /dev/sdc1
[hp60ds4][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[hp60ds4][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[hp60ds4][WARNIN] DEBUG:ceph-disk:Mounting /dev/sdc1 on /var/lib/ceph/tmp/mnt.ehTHYa with options noatime,inode64
[hp60ds4][WARNIN] INFO:ceph-disk:Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdc1 /var/lib/ceph/tmp/mnt.ehTHYa
[hp60ds4][WARNIN] DEBUG:ceph-disk:Cluster uuid is 05ec8c06-7c7d-473a-b49a-fb8bdd631035
[hp60ds4][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[hp60ds4][WARNIN] DEBUG:ceph-disk:Cluster name is ceph
[hp60ds4][WARNIN] DEBUG:ceph-disk:OSD uuid is b3512099-b2ae-47b5-a3da-ac80777c29a2
[hp60ds4][WARNIN] DEBUG:ceph-disk:OSD id is 7
[hp60ds4][WARNIN] DEBUG:ceph-disk:Marking with init system sysvinit
[hp60ds4][WARNIN] DEBUG:ceph-disk:ceph osd.7 data dir is ready at /var/lib/ceph/tmp/mnt.ehTHYa
[hp60ds4][WARNIN] DEBUG:ceph-disk:Moving mount to final location...
[hp60ds4][WARNIN] INFO:ceph-disk:Running command: /bin/mount -o noatime,inode64 -- /dev/sdc1 /var/lib/ceph/osd/ceph-7
[hp60ds4][WARNIN] INFO:ceph-disk:Running command: /bin/umount -l -- /var/lib/ceph/tmp/mnt.ehTHYa
[hp60ds4][WARNIN] DEBUG:ceph-disk:Starting ceph osd.7...
[hp60ds4][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/service ceph --cluster ceph start osd.7

Comment 2 Alfredo Deza 2016-01-05 17:13:32 UTC
I don't see this as a bug. ceph-deploy reads stderr output from remote nodes and logs it as WARNING, and stdout as DEBUG.

There is really no good way to determine what would be true error output on remote execution.

We could lower the level, but then true warning messages from remote nodes would be misread as INFO rather than as warnings.

So far the high verbosity in ceph-deploy has been very helpful, as it makes it easier for developers to understand what is going wrong and where.

I can only suggest to raise the log level to ERROR if the current level is not sufficient.
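The mapping described in this comment can be sketched as follows. This is an assumed illustration of the behavior, not ceph-deploy's actual internals (the function and logger names are hypothetical): every remote stderr line is re-logged at WARNING and every stdout line at DEBUG, regardless of the remote tool's own severity, which is why ceph-disk's INFO/DEBUG lines appear under [WARNIN].

```python
# Sketch (assumed names): relay remote output, mapping stream -> level.
# stderr lines become WARNING and stdout lines become DEBUG, so a remote
# tool that logs INFO to stderr shows up as a warning on the local side.
import logging


def relay_remote_output(logger, host, stdout_lines, stderr_lines):
    for line in stdout_lines:
        logger.debug("[%s][DEBUG ] %s", host, line)
    for line in stderr_lines:
        # ceph-disk writes its INFO/DEBUG messages to stderr, so they
        # land here and are tagged WARNIN even when nothing is wrong.
        logger.warning("[%s][WARNIN] %s", host, line)
```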

Comment 3 Ben England 2016-01-26 14:26:50 UTC
I don't think stderr output can be interpreted as level WARNING.  The only way to know whether subprocess messages are reporting errors is to look at the process exit status: if it is zero, there was no error; if it is non-zero, then at least some of the output on stderr or stdout was reporting an error.  You could redirect the output from a subprocess to a file and decide what to do with it later.  Just my opinion.
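The exit-status approach suggested here could look like the following. This is a hypothetical helper (not part of ceph-deploy), and it accepts the trade-off raised earlier in the thread: output is buffered until the command finishes, so nothing is logged in real time.

```python
# Sketch of the suggested approach: capture all subprocess output first,
# then choose the log level afterwards from the exit status.
import logging
import subprocess


def run_and_log(logger, cmd):
    result = subprocess.run(cmd, capture_output=True, text=True)
    # Non-zero exit status means something actually failed; only then
    # do the captured lines deserve ERROR severity.
    level = logging.ERROR if result.returncode != 0 else logging.INFO
    for line in (result.stdout + result.stderr).splitlines():
        logger.log(level, "%s", line)
    return result.returncode
```

The cost of this design is the loss of live progress output on long-running commands, which is the objection raised in the next comment.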

Comment 4 Alfredo Deza 2016-01-27 15:23:52 UTC
Since ceph-deploy does "real time" logging, there is no way for it to know about the error condition in advance. I don't have a good solution here, although I understand how cumbersome it is to look at the (probably long) output generated.

