Bug 1490957 - [RADOS] - osd-check log file is being created in /var/run/ceph
Summary: [RADOS] - osd-check log file is being created in /var/run/ceph
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: Ceph-Disk
Version: 3.0
Hardware: Unspecified
OS: Unspecified
Target Milestone: rc
Target Release: 3.2
Assignee: Kefu Chai
QA Contact: Vasishta
Depends On:
Reported: 2017-09-12 15:07 UTC by Vasishta
Modified: 2019-01-03 19:01 UTC
CC List: 15 users

Fixed In Version: RHEL: ceph-12.2.8-3.el7cp Ubuntu: ceph_12.2.8-3redhat1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Last Closed: 2019-01-03 19:01:20 UTC
Target Upstream Version:

Attachments
File contains all.yml and ansible-playbook log (682.74 KB, text/plain)
2017-09-12 17:09 UTC, Vasishta

External Trackers:
Ceph Project Bug Tracker 24041 (last updated 2018-05-08 04:32:59 UTC)
Red Hat Product Errata RHBA-2019:0020 (last updated 2019-01-03 19:01:49 UTC)

Description Vasishta 2017-09-12 15:07:01 UTC
Description of problem:
osd-check.log is being created in /var/run/ceph of all osd nodes

Version-Release number of selected component (if applicable):
ceph version 12.2.0-2.el7cp (3137b4f525c5dcc2a34fef5b0f6bcf4477312db9) luminous (rc)

How reproducible:
Always (2)

Steps to Reproduce:
1. Configure ceph-ansible to get a cluster up, run playbook
2. Observe /var/run/ceph on all osd nodes

Actual results:
<cluster_name>-osd-check.log is created in the /var/run/ceph directory on all OSD nodes.

Expected results:
Log files should not be created in the /var/run/ceph directory; it holds runtime state such as admin sockets, while logs belong under /var/log/ceph.

Additional info:
1) I tried dmcrypt + collocated journal OSDs
2) # ls -l /var/run/ceph
total 4
srwxr-xr-x. 1 ceph ceph    0 Sep 12 14:47 12_luminous-osd.1.asok
srwxr-xr-x. 1 ceph ceph    0 Sep 12 14:47 12_luminous-osd.3.asok
srwxr-xr-x. 1 ceph ceph    0 Sep 12 14:48 12_luminous-osd.6.asok
-rw-r--r--. 1 ceph ceph 2124 Sep 12 14:47 12_luminous-osd-check.log

3) # cat /var/run/ceph/12_luminous-osd-check.log 
2017-09-12 14:46:33.206605 7f9d32989d00  0 set uid:gid to 167:167 (ceph:ceph)
2017-09-12 14:46:33.206623 7f9d32989d00  0 ceph version 12.2.0-2.el7cp (3137b4f525c5dcc2a34fef5b0f6bcf4477312db9) luminous (rc), process (unknown), pid 12479

Comment 2 seb 2017-09-12 15:55:29 UTC
Can you share your ceph.conf?
This is probably a client config that puts the log in /var/run/ceph
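For reference, a [client] override like the following in ceph.conf would be one way to redirect client logs into /var/run/ceph. This is a hypothetical example of what seb is asking about, not something taken from the reporter's config; Ceph expands the $cluster and $name metavariables at runtime.

```
# Hypothetical ceph.conf fragment; would send client logs to /var/run/ceph.
[client]
log file = /var/run/ceph/$cluster-$name.log
```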

Comment 3 Vasishta 2017-09-12 17:09:17 UTC
Created attachment 1324979 [details]
File contains all.yml and ansible-playbook log

$ cat /etc/ceph/12_luminous.conf 
# Please do not change this file directly since it is managed by Ansible and will be overwritten

fsid = 34652d30-4cf9-432c-b7df-da63395422eb
max open files = 131072

mon initial members = magna097
mon host = <appropriate_mon_address>

public network = <appropriate_osd_address>
cluster network = <appropriate_osd_address>


Comment 5 seb 2017-09-12 19:21:45 UTC
Hmm, I don't see how this is a ceph-ansible bug.
There is no [client] section, so I think you're getting the default.

Looks like a ceph-disk thing to me: https://github.com/ceph/ceph/blob/master/src/ceph-disk/ceph_disk/main.py#L1532-L1554
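The linked ceph-disk code runs a ceph-osd capability probe with an explicit --log-file under the run directory, which matches the <cluster>-osd-check.log file observed above. Below is a minimal sketch of how that command line is built, paraphrased from the Luminous-era src/ceph-disk/ceph_disk/main.py; the helper name and exact argument list are an approximation, not a verbatim copy of the source.

```python
# Sketch of ceph-disk's journal-capability probe (paraphrased, not verbatim).
# ceph-osd expands $run_dir to /var/run/ceph and $cluster to the cluster
# name, producing e.g. /var/run/ceph/12_luminous-osd-check.log.

def osd_check_command(cluster):
    """Build the ceph-osd probe command ceph-disk runs during activation."""
    return [
        'ceph-osd',
        '--check-allows-journal',
        '-i', '0',
        '--log-file', '$run_dir/$cluster-osd-check.log',  # source of the stray log
        '--cluster', cluster,
    ]

cmd = osd_check_command('12_luminous')
print(cmd[cmd.index('--log-file') + 1])  # → $run_dir/$cluster-osd-check.log
```

Since /var/run/ceph is also where the admin sockets live, the probe's log lands next to the *.asok files, which is exactly the listing shown in the bug description.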

Comment 6 seb 2017-09-15 13:17:12 UTC
As I said, I think the issue is in ceph-disk. If you know how we can fix this in ceph-ansible, let me know. For now this is not a ceph-ansible bug, so either change the component, or re-open here if you think it's necessary.


Comment 7 Kefu Chai 2017-10-18 10:04:54 UTC
upstream PR: https://github.com/ceph/ceph/pull/18375

Comment 8 Kefu Chai 2017-10-18 10:58:31 UTC
upstream PR merged

Comment 11 Ken Dreyer (Red Hat) 2018-05-23 16:44:14 UTC
Will be upstream in v12.2.6. Priority: Low -> re-targeting to RHCS 3.2.

Comment 13 Vasishta 2018-11-19 12:07:28 UTC
Working fine with 12.2.8-36.
Moving to VERIFIED state.

Vasishta Shastry
QE, Ceph

Comment 15 errata-xmlrpc 2019-01-03 19:01:20 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

