Bug 1322991 - [ceph-ansible] : cluster creation is failing as it is changing owner to user 'ceph' for '/etc/ceph' and 'ceph' user doesn't exist
Summary: [ceph-ansible] : cluster creation is failing as it is changing owner to user '...
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: Ceph-Installer
Version: 2.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: 2.0
Assignee: Alfredo Deza
QA Contact: ceph-qe-bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-03-31 21:21 UTC by Rachana Patel
Modified: 2017-12-13 00:23 UTC (History)
7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-05-06 15:34:52 UTC
Target Upstream Version:


Attachments (Terms of Use)
output of command (31.38 KB, text/plain)
2016-03-31 21:21 UTC, Rachana Patel


Links
GitHub ceph/ceph-ansible issue 641: "msg: chown failed: failed to look up user ceph" (closed; last updated 2020-02-06 06:58:42 UTC)

Description Rachana Patel 2016-03-31 21:21:16 UTC
Created attachment 1142373
output of command

Description of problem:
=======================
Cluster creation is failing at the task below:

TASK: [ceph.ceph-common | create ceph conf directory] ************************* 
<magna048> REMOTE_MODULE file mode=0755 group=ceph state=directory path=/etc/ceph owner=ceph
failed: [magna048] => {"failed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/ceph", "secontext": "system_u:object_r:etc_t:s0", "size": 4096, "state": "directory", "uid": 0}
msg: chown failed: failed to look up user ceph

FATAL: all hosts have already failed -- aborting
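
The error means the 'ceph' system account does not exist on the target node, so Ansible's chown of /etc/ceph cannot resolve the owner. A minimal pre-flight check, as a sketch (the `check_user` helper is hypothetical, not part of ceph-ansible; `getent` is from glibc):

```shell
# Hypothetical pre-flight check, not part of ceph-ansible: verify that
# the account the playbook chowns /etc/ceph to actually exists on a node.
check_user() {
    # getent queries the passwd database; exits 0 if the user exists
    getent passwd "$1" >/dev/null 2>&1
}

if check_user ceph; then
    echo "ceph user present; chown to 'ceph' can succeed"
else
    echo "ceph user missing; the 'create ceph conf directory' task will fail"
fi
```

Running this on each node before the playbook would surface the missing account up front instead of mid-play.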



Version-Release number of selected component (if applicable):
=============================================================
ceph-ansible-1.0.3-1.el7.noarch



How reproducible:
================
always


Steps to Reproduce:
===================
1. Install ceph-ansible-1.0.3-1.el7.noarch on the installer node.
2. Enable passwordless SSH from the installer to all nodes in the cluster.
3. Create the hosts file.
4. Disable SELinux on all nodes and create the Ceph repo file.
5. Run the following from the installer node:

 ansible-playbook site.yml -vv /etc/ansible/hosts  --extra-vars '{"ceph_stable": true, "ceph_origin": "distro", "ceph_stable_rh_storage": true,"monitor_interface": <interface>, "journal_collocation": true, "devices": ["/dev/sdb"], "journal_size": 100, "public_network": "<subnet mask>"}' -u root


Actual results:
===============
TASK: [ceph.ceph-common | create ceph conf directory] ************************* 
<magna048> REMOTE_MODULE file mode=0755 group=ceph state=directory path=/etc/ceph owner=ceph
failed: [magna048] => {"failed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/ceph", "secontext": "system_u:object_r:etc_t:s0", "size": 4096, "state": "directory", "uid": 0}
msg: chown failed: failed to look up user ceph

FATAL: all hosts have already failed -- aborting

PLAY RECAP ******************************************************************** 
           to retry, use: --limit @/root/site.retry

magna048                   : ok=38   changed=1    unreachable=0    failed=1   


Expected results:
==================
It should install Ceph on the cluster.


Additional info:
================
The complete output of the command is attached to this bug.

Comment 2 Christina Meno 2016-05-04 21:31:02 UTC
Alfredo,

I see that in this BZ we are trying to set up RHCS 1.3.

After reviewing
https://github.com/ceph/ceph-ansible/issues/641

it appears the upstream conclusion was "cannot reproduce".

Would you please tell me if I'm missing something?

Comment 3 Ken Dreyer (Red Hat) 2016-05-06 15:34:52 UTC
From the attached log:

  Package ceph.x86_64 1:0.94.5-9.el7cp will be installed

We don't officially support using ceph-ansible with RHCS 1.3. Please use the RHCS 2.0 builds.
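
For context, the Hammer-based 0.94.x packages shipped with RHCS 1.3 run Ceph as root and do not create a 'ceph' system user; that user only appeared with the later 9.x (Infernalis) packaging that RHCS 2.0 builds on. A sketch of a version gate making that explicit (the `needs_ceph_user` helper and the 9.0.0 cutoff are my assumptions for illustration; `sort -V` is GNU coreutils):

```shell
# Hypothetical helper: returns 0 (true) if the given Ceph version is from
# the 9.x-or-later packaging that creates the 'ceph' system user.
needs_ceph_user() {
    # sort -V orders version strings; if 9.0.0 sorts first (or equal),
    # the given version is >= 9.0.0
    [ "$(printf '%s\n' "$1" 9.0.0 | sort -V | head -n1)" = "9.0.0" ]
}

if needs_ceph_user "0.94.5"; then
    echo "0.94.5: packaging creates the ceph user"
else
    echo "0.94.5: no ceph user created; chown to 'ceph' fails as in this bug"
fi
```

This matches the attached log: the installed build is 1:0.94.5-9.el7cp, i.e. pre-Infernalis, so the playbook's chown to 'ceph' cannot succeed.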

