Bug 1563280
| Summary: | Expose /var/run/ceph on baremetal for other tools (like collectd) to be able to query | ||
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Giulio Fidente <gfidente> |
| Component: | Ceph-Ansible | Assignee: | Sébastien Han <shan> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | Vasishta <vashastr> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | ||
| Version: | 3.0 | CC: | adeza, aschoen, ceph-eng-bugs, gfidente, gmeno, hnallurv, nthomas, sankarshan, tchandra, vpoliset |
| Target Milestone: | rc | ||
| Target Release: | 3.1 | ||
| Hardware: | x86_64 | ||
| OS: | Linux | ||
| Whiteboard: | |||
| Fixed In Version: | ceph-ansible-3.1.0-0.1.beta8.el7cp | Doc Type: | If docs needed, set a value |
| Doc Text: | Story Points: | --- | |
| Clone Of: | Environment: | ||
| Last Closed: | 2019-08-27 05:18:17 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
| Bug Depends On: | |||
| Bug Blocks: | 1558015, 1578730 | ||
Description
Giulio Fidente
2018-04-03 14:06:44 UTC
Comment 3
leseb (Sébastien Han)

If you need this now, you can always use 'ceph_osd_docker_extra_env' and set:

```yaml
ceph_osd_docker_extra_env: "-v /var/run/ceph:/var/run/ceph"
```

Comment
Giulio Fidente

(In reply to leseb from comment #3)
> If you need this now, you can always use 'ceph_osd_docker_extra_env' and set:
>
> ceph_osd_docker_extra_env: "-v /var/run/ceph:/var/run/ceph"

Thanks Seb! Maybe worth documenting how to use the workaround from Director; people can achieve the above by creating/using a custom environment file with the following contents:

```yaml
parameter_defaults:
  CephAnsibleExtraConfig:
    ceph_osd_docker_extra_env: "-v /var/run/ceph:/var/run/ceph"
```

Comment

The fix is in v3.1.0beta8 and above.

Comment 6
Harish NV Rao

@Giulio, will you or Yogev be testing the fix for this? Please let me know.

Comment
Giulio Fidente

(In reply to Harish NV Rao from comment #6)
> @Giulio, will you or Yogev be testing the fix for this? Please let me know.

We could, but verification is relatively simple without OSP. Basically you'd want to deploy Ceph in containers and then check whether the baremetal node hosting the containers has the Ceph sockets in /var/run/ceph. For example, colocate some OSDs and/or MONs or MGRs on a single node and then attach the list of files you see in /var/run/ceph on the baremetal node. Do you think you could test this?

Comment

Verified with versions ansible-2.4.5.0-1.el7ae.noarch and ceph-ansible-3.1.0-0.1.rc9.el7cp.noarch.

Deployed Ceph in containers, collocating some daemons.

Output on the baremetal node:

```
[ubuntu@magna028 ~]$ ll /var/run/ceph/
total 0
srwxr-xr-x. 1 167 167 0 Jul 4 09:03 ceph-mds.magna028.asok
srwxr-xr-x. 1 167 167 0 Jul 4 09:00 ceph-osd.3.asok
srwxr-xr-x. 1 167 167 0 Jul 3 09:03 ceph-osd.4.asok
srwxr-xr-x. 1 167 167 0 Jul 4 09:00 ceph-osd.5.asok
srwxr-xr-x. 1 167 167 0 Jul 3 09:03 ceph-osd.6.asok
srwxr-xr-x. 1 167 167 0 Jul 4 09:00 ceph-osd.7.asok
srwxr-xr-x. 1 167 167 0 Jul 3 09:03 ceph-osd.8.asok
```

Output inside the OSD container:

```
[ubuntu@magna028 ~]$ sudo docker exec ceph-osd-magna028-sdc ls /var/run/ceph/
ceph-mds.magna028.asok
ceph-osd.3.asok
ceph-osd.4.asok
ceph-osd.5.asok
ceph-osd.6.asok
ceph-osd.7.asok
ceph-osd.8.asok
```

The Ceph sockets are available on the baremetal node hosting the containers; hence moving to verified state.
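For context on why exposing /var/run/ceph matters: the .asok files above are Ceph admin sockets, which tools like collectd query for daemon statistics. Below is a minimal sketch of such a query following Ceph's admin-socket wire protocol (a NUL-terminated command, answered by a 4-byte big-endian length followed by a JSON payload). The `fake_daemon` server, the socket path, and the sample counter are stand-ins invented for illustration so the sketch runs without a live cluster; against a real deployment you would point `query_admin_socket` at a socket under /var/run/ceph.

```python
# Hedged sketch: how a collectd-style tool could query a Ceph admin socket
# exposed on the baremetal host. The fake server below only stands in for a
# real Ceph daemon so the example is self-contained.
import json
import os
import socket
import struct
import tempfile
import threading

def query_admin_socket(path, command):
    """Send one command to a Ceph-style admin socket and return parsed JSON."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(path)
        s.sendall(command.encode() + b"\0")   # commands are NUL-terminated
        (length,) = struct.unpack(">I", s.recv(4))  # reply: u32 length, big-endian
        payload = b""
        while len(payload) < length:
            chunk = s.recv(length - len(payload))
            if not chunk:
                break
            payload += chunk
    return json.loads(payload)

def fake_daemon(path, response):
    """Minimal stand-in for a daemon's admin socket (illustration only)."""
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(path)
    srv.listen(1)

    def serve():
        conn, _ = srv.accept()
        buf = b""
        while not buf.endswith(b"\0"):       # read the NUL-terminated command
            buf += conn.recv(1)
        body = json.dumps(response).encode()
        conn.sendall(struct.pack(">I", len(body)) + body)
        conn.close()
        srv.close()

    threading.Thread(target=serve, daemon=True).start()

# Hypothetical socket path mirroring the names in the verification output;
# the perf counter value is made up for the stand-in server.
sock_path = os.path.join(tempfile.mkdtemp(), "ceph-osd.3.asok")
fake_daemon(sock_path, {"osd": {"op_w": 42}})
stats = query_admin_socket(sock_path, "perf dump")
print(stats["osd"]["op_w"])  # → 42
```

On a real node the same call shape would be, e.g., `query_admin_socket("/var/run/ceph/ceph-osd.3.asok", "perf dump")`, which is why bind-mounting /var/run/ceph out of the container is required for host-side monitoring tools.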