Bug 1490957

Summary: [RADOS] - osd-check log file is being created in /var/run/ceph
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Vasishta <vashastr>
Component: Ceph-Disk
Assignee: Kefu Chai <kchai>
Status: CLOSED ERRATA
QA Contact: Vasishta <vashastr>
Severity: low
Priority: low
Docs Contact:
Version: 3.0
CC: adeza, anharris, aschoen, ceph-eng-bugs, dzafman, flucifre, gmeno, hnallurv, kchai, kdreyer, nthomas, sankarshan, seb, tserlin, vashastr
Target Milestone: rc
Keywords: Reopened
Target Release: 3.2
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: RHEL: ceph-12.2.8-3.el7cp Ubuntu: ceph_12.2.8-3redhat1
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2019-01-03 19:01:20 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments: File contains all.yml and ansible-playbook log (flags: none)

Description Vasishta 2017-09-12 15:07:01 UTC
Description of problem:
osd-check.log is being created in /var/run/ceph on all OSD nodes

Version-Release number of selected component (if applicable):
ceph version 12.2.0-2.el7cp (3137b4f525c5dcc2a34fef5b0f6bcf4477312db9) luminous (rc)


How reproducible:
Always (reproduced on 2 attempts)

Steps to Reproduce:
1. Configure ceph-ansible and run the playbook to bring up a cluster (see the example after these steps)
2. Check /var/run/ceph on all OSD nodes
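
A minimal sketch of the reproducer, assuming the standard ceph-ansible workflow where site.yml.sample is copied to site.yml and the inventory file is named "hosts" (both names are assumptions, not taken from the attached log):

$ ansible-playbook -i hosts site.yml     # run from the admin/ansible node
$ ls -l /var/run/ceph                    # run on each OSD node afterwards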


Actual results:
<cluster_name>-osd-check.log is created in the /var/run/ceph directory on all OSD nodes.

Expected results:
Log files should not be created in the /var/run/ceph directory.

Additional info:
1) The OSDs were deployed with dmcrypt and collocated journals.
2) # ls -l /var/run/ceph
total 4
srwxr-xr-x. 1 ceph ceph    0 Sep 12 14:47 12_luminous-osd.1.asok
srwxr-xr-x. 1 ceph ceph    0 Sep 12 14:47 12_luminous-osd.3.asok
srwxr-xr-x. 1 ceph ceph    0 Sep 12 14:48 12_luminous-osd.6.asok
-rw-r--r--. 1 ceph ceph 2124 Sep 12 14:47 12_luminous-osd-check.log

3) # cat /var/run/ceph/12_luminous-osd-check.log 
2017-09-12 14:46:33.206605 7f9d32989d00  0 set uid:gid to 167:167 (ceph:ceph)
2017-09-12 14:46:33.206623 7f9d32989d00  0 ceph version 12.2.0-2.el7cp (3137b4f525c5dcc2a34fef5b0f6bcf4477312db9) luminous (rc), process (unknown), pid 12479
-----------

Comment 2 seb 2017-09-12 15:55:29 UTC
Can you share your ceph.conf?
This is probably a client-side configuration that puts the log in /var/run/ceph (see the example below).
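
For reference, a client-side override along these lines in ceph.conf would send client logs into /var/run/ceph; this is a hypothetical illustration of the suspected cause, not taken from the actual configuration:

[client]
log file = /var/run/ceph/$cluster-$name.log

(The shipped default for "log file" is /var/log/ceph/$cluster-$name.log, which is why a log turning up under /var/run/ceph suggests an override somewhere.)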

Comment 3 Vasishta 2017-09-12 17:09:17 UTC
Created attachment 1324979 [details]
File contains all.yml and ansible-playbook log

$ cat /etc/ceph/12_luminous.conf 
# Please do not change this file directly since it is managed by Ansible and will be overwritten

[global]
fsid = 34652d30-4cf9-432c-b7df-da63395422eb
max open files = 131072



mon initial members = magna097
mon host = <appropriate_mon_address>

public network = <appropriate_osd_address>
cluster network = <appropriate_osd_address>

Regards,
Vasishta

Comment 5 seb 2017-09-12 19:21:45 UTC
Hmm, I don't see how this is a ceph-ansible bug.
There is no [client] section, so I think you're getting the defaults.

Looks like a ceph-disk thing to me: https://github.com/ceph/ceph/blob/master/src/ceph-disk/ceph_disk/main.py#L1532-L1554
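
That code path runs ceph-osd in check mode and, from the look of it, passes a --log-file under the run directory, which is what would leave <cluster>-osd-check.log behind. A simplified sketch of the invocation, using the cluster name from this report (the exact flags at the linked lines may differ):

# roughly what ceph-disk runs while probing whether the OSD backend
# allows/wants/needs a journal
ceph-osd --check-allows-journal -i 0 \
    --log-file /var/run/ceph/12_luminous-osd-check.log \
    --cluster 12_luminous

A short-lived check run like this would also explain the "process (unknown)" banner and the uid:gid lines seen in the captured log file above.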

Comment 6 seb 2017-09-15 13:17:12 UTC
As I said, I think the issue is in ceph-disk; if you know how we can fix this in ceph-ansible, let me know. For now, this is not a ceph-ansible bug, so either change the component or reopen it here if you think that is necessary.

Thanks!

Comment 7 Kefu Chai 2017-10-18 10:04:54 UTC
upstream PR: https://github.com/ceph/ceph/pull/18375

Comment 8 Kefu Chai 2017-10-18 10:58:31 UTC
upstream PR merged

Comment 11 Ken Dreyer (Red Hat) 2018-05-23 16:44:14 UTC
This will be upstream in v12.2.6. Since the priority is Low, re-targeting to RHCS 3.2.

Comment 13 Vasishta 2018-11-19 12:07:28 UTC
Working fine with 12.2.8-36.
Moving to VERIFIED state.

Regards,
Vasishta Shastry
QE, Ceph

Comment 15 errata-xmlrpc 2019-01-03 19:01:20 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0020