Created attachment 1159611 [details]
command output

Description of problem:
=======================
ceph-ansible is not installing the mds package on an Ubuntu cluster and fails as below:

TASK: [ceph-mds | start and add that the metadata service to the init sequence (systemd after hammer)] ***
<magna090> REMOTE_MODULE service name= state=started
failed: [magna090] => {"changed": false, "failed": true}
msg: Error when trying to enable ceph-mds@magna090: rc=1
Failed to execute operation: No such file or directory

Version-Release number of selected component (if applicable):
==============================================================
10.2.1-2redhat1xenial

How reproducible:
=================
Always

Steps to Reproduce:
===================
1. Perform the prerequisites on all Ubuntu nodes.
2. Create the inventory file:

[ubuntu@magna051 ceph-ansible]$ sudo cat /etc/ansible/hostsU
[mons]
magna063

[osds]
magna084
magna085
magna090

[mdss]
magna090

3. Run the command below:

ansible-playbook site.yml -vv -i /etc/ansible/hostsU --extra-vars '{"ceph_stable": true, "ceph_origin": "distro", "ceph_stable_rh_storage": true, "monitor_interface": "eth0", "journal_collocation": true, "devices": ["/dev/sdb", "/dev/sdc", "/dev/sdd"], "journal_size": 100, "public_network": "xxxx/xx", "fetch_directory": "~/ubuntu-key"}' -u ubuntu

Actual results:
===============
TASK: [ceph-mds | enable systemd unit file for mds instance (for or after infernalis)] ***
<magna090> REMOTE_MODULE command systemctl enable
ok: [magna090] => {"changed": false, "cmd": ["systemctl", "enable", "ceph-mds@magna090"], "delta": "0:00:00.004860", "end": "2016-05-19 18:58:32.651459", "failed": false, "failed_when_result": false, "rc": 1, "start": "2016-05-19 18:58:32.646599", "stderr": "Failed to execute operation: No such file or directory", "stdout": "", "warnings": []}

TASK: [ceph-mds | start and add that the metadata service to the init sequence (upstart)] ***
skipping: [magna090]

TASK: [ceph-mds | start and add that the metadata service to the init sequence (systemd before infernalis)] ***
skipping: [magna090]

TASK: [ceph-mds | start and add that the metadata service to the init sequence (systemd after hammer)] ***
<magna090> REMOTE_MODULE service name= state=started
failed: [magna090] => {"changed": false, "failed": true}
msg: Error when trying to enable ceph-mds@magna090: rc=1
Failed to execute operation: No such file or directory

FATAL: all hosts have already failed -- aborting

PLAY RECAP ********************************************************************
           to retry, use: --limit @/home/ubuntu/site.retry

magna063 : ok=85   changed=0   unreachable=0   failed=0
magna084 : ok=158  changed=1   unreachable=0   failed=0
magna085 : ok=158  changed=1   unreachable=0   failed=0
magna090 : ok=224  changed=4   unreachable=0   failed=1

Expected results:
=================
ceph-ansible should install the ceph-mds package and start the MDS service.

Additional info:
================
The ceph-mds package was not installed on the node, so systemd had no ceph-mds@ unit file to enable.
I've made a PR upstream to fix this: https://github.com/ceph/ceph-ansible/pull/797
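For context, the failure mode is that the role tries to enable the ceph-mds@<hostname> systemd unit on a host where the package shipping that unit file was never installed. The fix amounts to having the ceph-mds role install the MDS package on Debian-family hosts first. A minimal sketch of that kind of task (the task name and conditional below are illustrative, not necessarily what the PR actually adds):

```yaml
# Hypothetical sketch: ensure the ceph-mds package (which ships the
# ceph-mds@.service unit template) is present on Debian/Ubuntu hosts
# before the role runs "systemctl enable ceph-mds@{{ ansible_hostname }}".
- name: install ceph-mds on debian-family hosts
  apt:
    name: ceph-mds
    state: present
  when: ansible_os_family == 'Debian'
```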
Verified with ceph-ansible-1.0.5-31.el7scon.noarch -- working as expected, hence moving to VERIFIED.

ubuntu@magna100:~$ dpkg --list | grep ceph
ii  ceph-base       10.2.2-23redhat1xenial  amd64  common ceph daemon libraries and management tools
ii  ceph-common     10.2.2-23redhat1xenial  amd64  common utilities to mount and interact with a ceph storage cluster
ii  ceph-fs-common  10.2.2-23redhat1xenial  amd64  common utilities to mount and interact with a ceph file system
ii  ceph-fuse       10.2.2-23redhat1xenial  amd64  FUSE-based client for the Ceph distributed file system
ii  ceph-mds        10.2.2-23redhat1xenial  amd64  metadata server for the ceph distributed file system
ii  libcephfs1      10.2.2-23redhat1xenial  amd64  Ceph distributed file system client library
ii  python-cephfs   10.2.2-23redhat1xenial  amd64  Python libraries for the Ceph libcephfs library
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2016:1754