Bug 1337674 - [ceph-ansible]: UBUNTU : ceph-ansible fails to install mds packages
Summary: [ceph-ansible]: UBUNTU : ceph-ansible fails to install mds packages
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Storage Console
Classification: Red Hat
Component: ceph-ansible
Version: 2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: 2
Assignee: Andrew Schoen
QA Contact: Rachana Patel
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-05-19 19:10 UTC by Rachana Patel
Modified: 2016-08-23 19:51 UTC
CC List: 7 users

Fixed In Version: ceph-ansible-1.0.5-15.el7scon
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-08-23 19:51:23 UTC
Target Upstream Version:


Attachments


Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2016:1754 normal SHIPPED_LIVE New packages: Red Hat Storage Console 2.0 2017-04-18 19:09:06 UTC

Description Rachana Patel 2016-05-19 19:10:58 UTC
Created attachment 1159611 [details]
command output

Description of problem:
=======================
ceph-ansible does not install the ceph-mds package on Ubuntu clusters, and the play fails as follows:

TASK: [ceph-mds | start and add that the metadata service to the init sequence (systemd after hammer)] *** 
<magna090> REMOTE_MODULE service name= state=started
failed: [magna090] => {"changed": false, "failed": true}
msg: Error when trying to enable ceph-mds@magna090: rc=1 Failed to execute operation: No such file or directory



Version-Release number of selected component (if applicable):
==============================================================
10.2.1-2redhat1xenial



How reproducible:
=================
always


Steps to Reproduce:
===================
1. Perform the prerequisite steps on all Ubuntu nodes.
2. Create the inventory file:
[ubuntu@magna051 ceph-ansible]$ sudo cat /etc/ansible/hostsU
[mons]
magna063

[osds]
magna084
magna085
magna090

[mdss]
magna090


3. Run the following command:
ansible-playbook site.yml -vv -i  /etc/ansible/hostsU  --extra-vars '{"ceph_stable": true, "ceph_origin": "distro", "ceph_stable_rh_storage": true, "monitor_interface": "eth0", "journal_collocation": true, "devices": ["/dev/sdb", "/dev/sdc", "/dev/sdd"], "journal_size": 100, "public_network": "xxxx/xx", "fetch_directory": "~/ubuntu-key"}' -u ubuntu
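For readability, the same extra-vars can equivalently be kept in a `group_vars/all` file (a sketch of the settings shown above; the file name and layout follow the stock ceph-ansible convention, and the masked `public_network` value is left as given in this report):

```yaml
# group_vars/all -- equivalent of the --extra-vars JSON above (sketch)
ceph_stable: true
ceph_origin: distro
ceph_stable_rh_storage: true
monitor_interface: eth0
journal_collocation: true
devices:
  - /dev/sdb
  - /dev/sdc
  - /dev/sdd
journal_size: 100
public_network: xxxx/xx      # masked in this report; use the real CIDR
fetch_directory: ~/ubuntu-key
```

With that file in place, the run reduces to `ansible-playbook site.yml -vv -i /etc/ansible/hostsU -u ubuntu`.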


Actual results:
==============
TASK: [ceph-mds | enable systemd unit file for mds instance (for or after infernalis)] *** 
<magna090> REMOTE_MODULE command systemctl enable 
ok: [magna090] => {"changed": false, "cmd": ["systemctl", "enable", "ceph-mds@magna090"], "delta": "0:00:00.004860", "end": "2016-05-19 18:58:32.651459", "failed": false, "failed_when_result": false, "rc": 1, "start": "2016-05-19 18:58:32.646599", "stderr": "Failed to execute operation: No such file or directory", "stdout": "", "warnings": []}

TASK: [ceph-mds | start and add that the metadata service to the init sequence (upstart)] *** 
skipping: [magna090]

TASK: [ceph-mds | start and add that the metadata service to the init sequence (systemd before infernalis)] *** 
skipping: [magna090]

TASK: [ceph-mds | start and add that the metadata service to the init sequence (systemd after hammer)] *** 
<magna090> REMOTE_MODULE service name= state=started
failed: [magna090] => {"changed": false, "failed": true}
msg: Error when trying to enable ceph-mds@magna090: rc=1 Failed to execute operation: No such file or directory


FATAL: all hosts have already failed -- aborting

PLAY RECAP ******************************************************************** 
           to retry, use: --limit @/home/ubuntu/site.retry

magna063                   : ok=85   changed=0    unreachable=0    failed=0   
magna084                   : ok=158  changed=1    unreachable=0    failed=0   
magna085                   : ok=158  changed=1    unreachable=0    failed=0   
magna090                   : ok=224  changed=4    unreachable=0    failed=1    




Expected results:
=================
ceph-ansible should install the ceph-mds package and start the MDS service.


Additional info:
==================
The ceph-mds package was not installed on the node, so the ceph-mds@.service systemd unit did not exist when the role tried to enable it.

Comment 2 Andrew Schoen 2016-05-19 19:14:50 UTC
I've made a PR upstream to fix this: https://github.com/ceph/ceph-ansible/pull/797
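The failure pattern is consistent with the ceph-mds role's Debian package list omitting the ceph-mds package, so the ceph-mds@.service unit never lands on the node. A hypothetical sketch of the kind of task such a fix adds (task name and condition are illustrative, not a quote of the actual PR):

```yaml
# Hypothetical sketch -- see the upstream PR above for the actual change
- name: install ceph mds for debian
  apt:
    name: ceph-mds
    state: present
  when: ansible_os_family == 'Debian'
```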

Comment 6 Rachana Patel 2016-07-28 20:33:52 UTC
Verified with:

ceph-ansible-1.0.5-31.el7scon.noarch

Working as expected, hence moving to VERIFIED.

ubuntu@magna100:~$ dpkg --list | grep ceph
ii  ceph-base                            10.2.2-23redhat1xenial          amd64        common ceph daemon libraries and management tools
ii  ceph-common                          10.2.2-23redhat1xenial          amd64        common utilities to mount and interact with a ceph storage cluster
ii  ceph-fs-common                       10.2.2-23redhat1xenial          amd64        common utilities to mount and interact with a ceph file system
ii  ceph-fuse                            10.2.2-23redhat1xenial          amd64        FUSE-based client for the Ceph distributed file system
ii  ceph-mds                             10.2.2-23redhat1xenial          amd64        metadata server for the ceph distributed file system
ii  libcephfs1                           10.2.2-23redhat1xenial          amd64        Ceph distributed file system client library
ii  python-cephfs                        10.2.2-23redhat1xenial          amd64        Python libraries for the Ceph libcephfs library

Comment 8 errata-xmlrpc 2016-08-23 19:51:23 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2016:1754

