Bug 1540578 - Client creation fails for containerized deployment via ansible
Summary: Client creation fails for containerized deployment via ansible
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Ansible
Version: 3.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: z1
Target Release: 3.0
Assignee: Guillaume Abrioux
QA Contact: Valerii Shevchenko
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-01-31 12:54 UTC by Valerii Shevchenko
Modified: 2022-02-21 18:19 UTC (History)
CC List: 10 users

Fixed In Version: RHEL: ceph-ansible-3.0.27-1.el7cp Ubuntu: ceph-ansible_3.0.27-2redhat1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-03-08 15:52:40 UTC
Embargoed:




Links:
  Github ceph/ceph-ansible pull 2397 (last updated 2018-02-16 08:10:39 UTC)
  Red Hat Product Errata RHBA-2018:0474 - normal, SHIPPED_LIVE, Red Hat Ceph Storage 3.0 bug fix update (last updated 2018-03-08 20:51:53 UTC)

Description Valerii Shevchenko 2018-01-31 12:54:02 UTC
Description of problem:
Client deployment fails during containerized deployment via ceph-ansible for 3.x

Version-Release number of selected component:
Red Hat Ceph 3.0 for Red Hat Enterprise Linux 7

How reproducible:
Reproducible with the automation script or with a manual deployment using the ansible docker playbook on RHEL, following this guide: https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html/container_guide/deploying-red-hat-ceph-storage-in-containers

Configuration files:
all.yaml:
ceph_conf_overrides:
  global:
    mon_max_pg_per_osd: 1024
    osd_default_pool_size: 2
    osd_pool_default_pg_num: 64
    osd_pool_default_pgp_num: 64
  mon:
    mon_allow_pool_delete: true
ceph_docker_image: rhceph/rhceph-3-rhel7
ceph_docker_registry: registry.access.redhat.com
ceph_origin: distro
ceph_repository: rhcs
ceph_stable: true
ceph_stable_release: luminous
ceph_stable_rh_storage: true
ceph_test: true
containerized_deployment: true
copy_admin_key: true
journal_size: 1024
osd_auto_discovery: false
osd_scenario: collocated
public_network: 172.16.0.0/12

hosts file:
[mons]
ceph-jenkins-build-run417-node5-osd monitor_interface=eth0
ceph-jenkins-build-run417-node4-osd monitor_interface=eth0
ceph-jenkins-build-run417-node6-osd monitor_interface=eth0
[mgrs]
ceph-jenkins-build-run417-node5-osd monitor_interface=eth0
ceph-jenkins-build-run417-node4-osd monitor_interface=eth0
ceph-jenkins-build-run417-node6-osd monitor_interface=eth0
[osds]
ceph-jenkins-build-run417-node5-osd monitor_interface=eth0  devices='["/dev/vdb", "/dev/vdc", "/dev/vdd", "/dev/vde"]'
ceph-jenkins-build-run417-node4-osd monitor_interface=eth0  devices='["/dev/vdb", "/dev/vdc", "/dev/vdd", "/dev/vde"]'
ceph-jenkins-build-run417-node6-osd monitor_interface=eth0  devices='["/dev/vdb", "/dev/vdc", "/dev/vdd", "/dev/vde"]'
[mdss]
ceph-jenkins-build-run417-node8-mds monitor_interface=eth0
ceph-jenkins-build-run417-node7-mds monitor_interface=eth0
[rgws]
ceph-jenkins-build-run417-node10-rgw radosgw_interface=eth0
ceph-jenkins-build-run417-node9-rgw radosgw_interface=eth0
[clients]
ceph-jenkins-build-run417-node3-client client_interface=eth0
ceph-jenkins-build-run417-node2-client client_interface=eth0


Steps to Reproduce:
1. Create an environment that satisfies the hosts file (RHEL)
2. Install ceph-ansible on the installer node
3. Update the ansible configuration with the provided hosts file and all.yaml content
4. Run the docker playbook (a sketch of the commands is shown below the Jenkins alternative)

or

try https://ceph-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/view/RHCS%203.x/job/wip-vshevche-ceph-containerized-ansible-sanity-3.x/ Jenkins job
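
For illustration, a minimal sketch of steps 2-4 on the installer node; the paths and playbook name are assumptions based on a stock ceph-ansible 3.x install, not taken from the job above (the log shows a checkout under /home/cephuser/ceph-ansible):

# Hypothetical commands; adjust paths to the actual checkout
sudo yum install -y ceph-ansible                      # step 2
cd /usr/share/ceph-ansible
# step 3: place the hosts file and all.yaml content shown above
cp /path/to/hosts /etc/ansible/hosts
cp /path/to/all.yaml group_vars/all.yml
# step 4: run the docker playbook
cp site-docker.yml.sample site-docker.yml
ansible-playbook site-docker.yml -i /etc/ansible/hosts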

Actual results:
TASK [ceph-client : copy ceph admin keyring] ***********************************
task path: /home/cephuser/ceph-ansible/roles/ceph-client/tasks/pre_requisite.yml:23

fatal: [ceph-jenkins-build-run417-node3-client]: FAILED! => {"changed": false, "checksum": "7fe2f93acdc57574c0ba8a79033dea6877b8a852", "gid": 0, "group": "root", "mode": "0644", "msg": "chown failed: failed to look up user ceph", "owner": "root", "path": "/etc/ceph/ceph.client.admin.keyring", "secontext": "system_u:object_r:etc_t:s0", "size": 159, "state": "file", "uid": 0}

fatal: [ceph-jenkins-build-run417-node2-client]: FAILED! => {"changed": false, "checksum": "7fe2f93acdc57574c0ba8a79033dea6877b8a852", "gid": 0, "group": "root", "mode": "0644", "msg": "chown failed: failed to look up user ceph", "owner": "root", "path": "/etc/ceph/ceph.client.admin.keyring", "secontext": "system_u:object_r:etc_t:s0", "size": 159, "state": "file", "uid": 0}

[...]

PLAY RECAP *********************************************************************
ceph-jenkins-build-run417-node10-rgw : ok=58   changed=10   unreachable=0    failed=0   
ceph-jenkins-build-run417-node2-client : ok=43   changed=8    unreachable=0    failed=1   
ceph-jenkins-build-run417-node3-client : ok=45   changed=8    unreachable=0    failed=1   
ceph-jenkins-build-run417-node4-osd : ok=211  changed=26   unreachable=0    failed=0   
ceph-jenkins-build-run417-node5-osd : ok=216  changed=28   unreachable=0    failed=0   
ceph-jenkins-build-run417-node6-osd : ok=217  changed=27   unreachable=0    failed=0   
ceph-jenkins-build-run417-node7-mds : ok=54   changed=11   unreachable=0    failed=0   
ceph-jenkins-build-run417-node8-mds : ok=56   changed=11   unreachable=0    failed=0   
ceph-jenkins-build-run417-node9-rgw : ok=56   changed=10   unreachable=0    failed=0   

Full log: https://ceph-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/view/RHCS%203.x/job/wip-vshevche-ceph-containerized-ansible-sanity-3.x/1/consoleFull
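
The error message shows the copy task failing while trying to chown /etc/ceph/ceph.client.admin.keyring to the 'ceph' user, which likely does not exist on the client nodes because no ceph packages are installed there in a containerized deployment. A quick way to confirm (host name reused from the inventory above; this diagnostic is illustrative, not from the job log):

# If this prints nothing, owner=ceph in the copy task cannot be resolved
ssh ceph-jenkins-build-run417-node3-client 'getent passwd ceph'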

Expected results:

Client nodes are deployed and configured without failures. No failures in the play recap.

Additional info:
Client deployment behavior is not clearly specified in the containerized deployment manual and is assumed to be the same as for the non-containerized 3.x deployment.
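
For illustration only, a minimal sketch of the kind of task change in roles/ceph-client/tasks/pre_requisite.yml that would avoid the user lookup on containerized clients; this is an assumption about the shape of the fix (variable names are illustrative), not the actual content of the linked ceph-ansible pull 2397:

# Hypothetical: keep root ownership when deploying containerized clients,
# since the 'ceph' system user is only created by the ceph packages.
- name: copy ceph admin keyring
  copy:
    src: "{{ fetch_directory }}/{{ fsid }}/etc/ceph/{{ cluster }}.client.admin.keyring"
    dest: /etc/ceph/
    owner: "{{ 'root' if containerized_deployment else 'ceph' }}"
    group: "{{ 'root' if containerized_deployment else 'ceph' }}"
    mode: "0600"
  when: copy_admin_key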

Comment 9 Valerii Shevchenko 2018-03-02 14:20:21 UTC
PLAY RECAP *********************************************************************
ceph-vshevchenk-run611-node10-client : ok=57   changed=9    unreachable=0    failed=0
ceph-vshevchenk-run611-node2-osdmonmgr : ok=218  changed=26   unreachable=0    failed=0
ceph-vshevchenk-run611-node3-osdrgw : ok=148  changed=16   unreachable=0    failed=0
ceph-vshevchenk-run611-node4-osdmds : ok=148  changed=17   unreachable=0    failed=0
ceph-vshevchenk-run611-node5-monmgr : ok=136  changed=18   unreachable=0    failed=0
ceph-vshevchenk-run611-node6-monmgr : ok=143  changed=19   unreachable=0    failed=0
ceph-vshevchenk-run611-node7-mds : ok=64   changed=11   unreachable=0    failed=0
ceph-vshevchenk-run611-node8-rgw : ok=68   changed=10   unreachable=0    failed=0
ceph-vshevchenk-run611-node9-osd : ok=84   changed=12   unreachable=0    failed=0

Comment 12 errata-xmlrpc 2018-03-08 15:52:40 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0474

