Description of problem:
"ceph" command-line operations (such as "ceph health") print a warning to the console because the RBD client admin socket is not present.

Version-Release number of selected component (if applicable):
ceph-installer-1.0.5-1.el7.noarch
ceph-ansible-1.0.5-3.el7.noarch

Steps to Reproduce:
1. Set up a cluster with ceph-installer.
2. SSH to a cluster node (mon or OSD).
3. Run "sudo ceph health"

Actual results:
2016-04-19 15:20:05.128327 7fc0e4033700 -1 asok(0x7fc0dc001680) AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: failed to bind the UNIX domain socket to '/var/run/ceph/rbd-clients/ceph-client.admin.20455.140466301441968.asok': (2) No such file or directory
HEALTH_OK

Expected results:
HEALTH_OK (with no warnings)

Additional info:
This appears to be a regression from about a month ago, but I don't see any obviously related commits in ceph-ansible that would have caused it.
FWIW, I tried this with an old build of Ceph (10.0.4-2.el7cp) and the warning still shows up. That indicates the problem is likely in the installer (ceph-ansible).
Commenting out the "admin socket" line in the "[client]" section of ceph.conf makes `ceph health` print HEALTH_OK without any warnings.
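For reference, a minimal sketch of the relevant ceph.conf fragment with the workaround applied. The socket path shown is an assumption reconstructed from the error message above (using Ceph's $cluster/$type/$id metavariables); the exact value written out by ceph-ansible may differ:

```ini
[client]
# Workaround: comment out the admin socket setting so CLI tools
# (ceph, rados, ...) do not try to bind a socket under a
# directory that does not exist on mon/OSD nodes.
# admin socket = /var/run/ceph/rbd-clients/$cluster-$type.$id.$pid.$cctid.asok
```

With the line commented out, client-side tools fall back to not creating an admin socket, which silences the bind_and_listen warning.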
A fix for this is in the following PR: https://github.com/ceph/ceph-ansible/pull/721
This is not seen in any of our recent tests. Moving to verified.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2016:1754