Description of problem:
osd-check.log is being created in /var/run/ceph on all OSD nodes.

Version-Release number of selected component (if applicable):
ceph version 12.2.0-2.el7cp (3137b4f525c5dcc2a34fef5b0f6bcf4477312db9) luminous (rc)

How reproducible:
Always (2)

Steps to Reproduce:
1. Configure ceph-ansible to bring up a cluster and run the playbook
2. Observe /var/run/ceph on all OSD nodes

Actual results:
<cluster_name>-osd-check.log is created in the /var/run/ceph directory on all OSD nodes.

Expected results:
Log files should not be written to the /var/run/ceph directory.

Additional info:
1) The OSDs were configured with dmcrypt + collocated journals.

2) # ls -l /var/run/ceph
total 4
srwxr-xr-x. 1 ceph ceph    0 Sep 12 14:47 12_luminous-osd.1.asok
srwxr-xr-x. 1 ceph ceph    0 Sep 12 14:47 12_luminous-osd.3.asok
srwxr-xr-x. 1 ceph ceph    0 Sep 12 14:48 12_luminous-osd.6.asok
-rw-r--r--. 1 ceph ceph 2124 Sep 12 14:47 12_luminous-osd-check.log

3) # cat /var/run/ceph/12_luminous-osd-check.log
2017-09-12 14:46:33.206605 7f9d32989d00  0 set uid:gid to 167:167 (ceph:ceph)
2017-09-12 14:46:33.206623 7f9d32989d00  0 ceph version 12.2.0-2.el7cp (3137b4f525c5dcc2a34fef5b0f6bcf4477312db9) luminous (rc), process (unknown), pid 12479
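A quick way to spot-check this on every OSD node at once is an ansible ad-hoc command against the ceph-ansible inventory; a minimal sketch, assuming the default "osds" inventory group name:

# Illustrative only: list the runtime directory on every OSD host;
# any *-osd-check.log entry is the file this report is about.
ansible osds -m command -a 'ls -l /var/run/ceph'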
Can you share your ceph.conf? This is probably a client config option that puts the log in /var/run/ceph.
Created attachment 1324979 [details]
File contains all.yml and the ansible-playbook log

$ cat /etc/ceph/12_luminous.conf
# Please do not change this file directly since it is managed by Ansible and will be overwritten
[global]
fsid = 34652d30-4cf9-432c-b7df-da63395422eb
max open files = 131072
mon initial members = magna097
mon host = <appropriate_mon_address>
public network = <appropriate_osd_address>
cluster network = <appropriate_osd_address>

Regards,
Vasishta
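As a side note, one way to see which log path a given Ceph name resolves with this conf (compiled defaults included) is ceph-conf; a sketch, using client.admin purely as an example name:

# Print the resolved log_file value for a client against this conf
ceph-conf -c /etc/ceph/12_luminous.conf --name client.admin --show-config-value log_file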
Hmm, I don't see how this is a ceph-ansible bug. There is no [client] section, so I think you're getting the defaults. It looks like a ceph-disk thing to me: https://github.com/ceph/ceph/blob/master/src/ceph-disk/ceph_disk/main.py#L1532-L1554
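For context, the contents of the file in the report look like plain ceph-osd startup output, which is consistent with ceph-disk running one of its journal-capability checks with --log-file pointed at the run directory. A rough illustration of such an invocation (the exact command is built in the linked code, so treat this as a guess rather than the real thing):

# Illustration only: a journal-capability probe that would leave this log behind
ceph-osd --check-allows-journal -i 0 \
    --cluster 12_luminous \
    --log-file /var/run/ceph/12_luminous-osd-check.log \
    --setuser ceph --setgroup ceph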
As I said, I think the issue is in ceph-disk. If you know how we can fix this in ceph-ansible, let me know. For now this is not a ceph-ansible bug, so either change the component or re-open here if you think it's necessary. Thanks!
upstream PR: https://github.com/ceph/ceph/pull/18375
upstream PR merged
Will be upstream in v12.2.6. Priority: Low -> re-targeting to RHCS 3.2.
Working fine with 12.2.8-36. Moving to VERIFIED state.

Regards,
Vasishta Shastry
QE, Ceph
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0020