Bug 1490957
| Field | Value | Field | Value |
|---|---|---|---|
| Summary: | [RADOS] - osd-check log file is being created in /var/run/ceph | | |
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Vasishta <vashastr> |
| Component: | Ceph-Disk | Assignee: | Kefu Chai <kchai> |
| Status: | CLOSED ERRATA | QA Contact: | Vasishta <vashastr> |
| Severity: | low | Priority: | low |
| Version: | 3.0 | CC: | adeza, anharris, aschoen, ceph-eng-bugs, dzafman, flucifre, gmeno, hnallurv, kchai, kdreyer, nthomas, sankarshan, seb, tserlin, vashastr |
| Target Milestone: | rc | Keywords: | Reopened |
| Target Release: | 3.2 | Type: | Bug |
| Hardware: | Unspecified | OS: | Unspecified |
| Fixed In Version: | RHEL: ceph-12.2.8-3.el7cp; Ubuntu: ceph_12.2.8-3redhat1 | Last Closed: | 2019-01-03 19:01:20 UTC |
Description
Vasishta 2017-09-12 15:07:01 UTC

Created attachment 1324979
File contains all.yml and ansible-playbook log.

Can you share your ceph.conf? This is probably a client config that puts the log in /var/run/ceph.
```
$ cat /etc/ceph/12_luminous.conf
# Please do not change this file directly since it is managed by Ansible and will be overwritten
[global]
fsid = 34652d30-4cf9-432c-b7df-da63395422eb
max open files = 131072
mon initial members = magna097
mon host = <appropriate_mon_address>
public network = <appropriate_osd_address>
cluster network = <appropriate_osd_address>
```
Regards,
Vasishta
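For reference, a client log location can also be pinned explicitly in ceph.conf rather than relying on the compiled-in default the comment above suspects. A minimal sketch, assuming the standard /var/log/ceph layout (this [client] section is illustrative and is not part of the attached cluster config):

```
[client]
# Ceph expands the $cluster, $name and $pid metavariables at runtime.
# An explicit setting here controls where client commands write their
# log, as long as the command does not override it on its own command line.
log file = /var/log/ceph/$cluster-$name.$pid.log
```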
Hum, I don't see how it's a ceph-ansible bug. There is no client section, so I think you're getting the default. Looks like a ceph-disk thing to me: https://github.com/ceph/ceph/blob/master/src/ceph-disk/ceph_disk/main.py#L1532-L1554

As I said, I think the issue is in ceph-disk. If you know how we can fix this in ceph-ansible, let me know. For now, this is not a ceph-ansible bug, so either change the component or re-open here if you think it's necessary. Thanks!

upstream PR: https://github.com/ceph/ceph/pull/18375

upstream PR merged. Will be upstream in v12.2.6.

Priority: Low -> re-targeting to RHCS 3.2.

Working fine with 12.2.8-36. Moving to VERIFIED state.

Regards,
Vasishta Shastry
QE, Ceph

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0020
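For context on the ceph-disk code path referenced above: ceph-disk shells out to the `ceph` CLI for its osd check, and the log destination for that one-shot command is whatever log file option is in effect for the client at that moment. A minimal sketch of the pattern in Python (illustrative only; the function name, the chosen subcommand, and the `--log-file` override are assumptions, not the actual main.py code nor the exact change made in the upstream PR):

```python
import subprocess

def osd_check(cluster, osd_id):
    """Run a one-shot `ceph` CLI query the way ceph-disk-style helpers do.

    Ceph config options can be passed on the command line; pinning the
    client log file explicitly decides where (or whether) the transient
    check command writes its log, instead of leaving an osd-check log
    behind in /var/run/ceph.
    """
    return subprocess.check_output([
        'ceph',
        '--cluster', cluster,
        '--log-file', '/dev/null',  # assumed override; the real fix may differ
        'osd', 'metadata', str(osd_id),
        '--format', 'json',
    ])
```

Called as, e.g., `osd_check('ceph', 0)`, this returns the OSD's metadata as JSON while keeping the transient client log out of /var/run/ceph.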