Description of problem:
Collocated mds and osd containers lead to deployment failure.

Version-Release number of selected component (if applicable):
ceph-ansible 3.0.25
ceph_docker_image_tag: 2.5-4
also reproducible for: ceph_docker_image_tag: 2.5-3

How reproducible:
100%

Steps to Reproduce:
1. Create a configuration for containerized deployment with mds and osd containers on the same host
2. Start the deployment

Workaround: deploy osd and mds on different nodes (an illustrative inventory sketch is included below)

Actual results:
PLAY RECAP *********************************************************************
ceph-yshap-run903-node2-osdmon : ok=128  changed=17  unreachable=0  failed=0
ceph-yshap-run903-node3-osdrgw : ok=133  changed=14  unreachable=0  failed=0
ceph-yshap-run903-node4-osd    : ok=69   changed=10  unreachable=0  failed=0
ceph-yshap-run903-node5-monmds : ok=113  changed=14  unreachable=0  failed=0
ceph-yshap-run903-node6-monrgw : ok=116  changed=13  unreachable=0  failed=0
ceph-yshap-run903-node7-osdmds : ok=102  changed=11  unreachable=0  failed=1

Expected results:
Successful deployment

Additional info:
[root@ceph-yshap-run903-node1-installer ceph-ansible]# cat hosts
[mons]
ceph-yshap-run903-node2-osdmon monitor_interface=eth0
ceph-yshap-run903-node5-monmds monitor_interface=eth0
ceph-yshap-run903-node6-monrgw monitor_interface=eth0

[osds]
ceph-yshap-run903-node3-osdrgw monitor_interface=eth0 devices='["/dev/vdb", "/dev/vdc", "/dev/vdd"]'
ceph-yshap-run903-node2-osdmon monitor_interface=eth0 devices='["/dev/vdb", "/dev/vdc", "/dev/vdd"]'
ceph-yshap-run903-node4-osd monitor_interface=eth0 devices='["/dev/vdb", "/dev/vdc", "/dev/vdd"]'
ceph-yshap-run903-node7-osdmds monitor_interface=eth0 devices='["/dev/vdb", "/dev/vdc", "/dev/vdd"]'

[mdss]
ceph-yshap-run903-node5-monmds monitor_interface=eth0
ceph-yshap-run903-node7-osdmds monitor_interface=eth0

[rgws]
ceph-yshap-run903-node3-osdrgw radosgw_interface=eth0
ceph-yshap-run903-node6-monrgw radosgw_interface=eth0

[root@ceph-yshap-run903-node1-installer ceph-ansible]# cat group_vars/all.yml
ceph_conf_overrides:
  global:
    mon_max_pg_per_osd: 1024
    osd_default_pool_size: 2
    osd_pool_default_pg_num: 64
    osd_pool_default_pgp_num: 64
  mon:
    mon_allow_pool_delete: true
ceph_docker_image: rhceph
ceph_docker_image_tag: 2.5-4
ceph_docker_registry: brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888
ceph_origin: distro
ceph_repository: rhcs
ceph_stable: true
ceph_stable_release: jewel
ceph_stable_rh_storage: true
ceph_test: true
containerized_deployment: true
copy_admin_key: true
dedicated_devices:
  - /dev/vde
  - /dev/vde
  - /dev/vde
fetch_directory: ~/fetch
journal_size: 1024
osd_auto_discovery: false
osd_scenario: non-collocated
public_network: 172.16.0.0/12
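For reference, a minimal sketch of the workaround inventory, assuming the mds role is simply moved to hosts that carry no OSDs. Host names here are hypothetical and the per-host variables are abbreviated from the inventory above; this is an illustration of the layout, not the exact inventory we tested.

# Workaround sketch: no host appears in both [osds] and [mdss]
[mons]
nodeA-mon monitor_interface=eth0

[osds]
nodeB-osd monitor_interface=eth0 devices='["/dev/vdb", "/dev/vdc", "/dev/vdd"]'
nodeC-osd monitor_interface=eth0 devices='["/dev/vdb", "/dev/vdc", "/dev/vdd"]'

[mdss]
nodeD-mds monitor_interface=eth0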
Can you attach the Ansible log?
Created attachment 1430742 [details] Ansible log
The log has been added. Also, I would like to note that the bug is reproducible only with a dedicated journal.
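To make the "dedicated journal" condition concrete, a minimal group_vars sketch of the two journal layouts in ceph-ansible 3.0. The non-collocated block is taken from the configuration in the description (there the devices are set per host in the inventory; they are shown here only for compactness). The collocated block is the assumed counterpart where the journal lives on the data device, i.e. the case where the bug was not observed.

# Dedicated journal on /dev/vde (osd_scenario: non-collocated) - reproduces the failure
osd_scenario: non-collocated
devices:
  - /dev/vdb
  - /dev/vdc
  - /dev/vdd
dedicated_devices:
  - /dev/vde
  - /dev/vde
  - /dev/vde

# Journal collocated with the data device (osd_scenario: collocated) - bug not observed
osd_scenario: collocated
devices:
  - /dev/vdb
  - /dev/vdc
  - /dev/vdd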
Updating the QA Contact to Hemant. Hemant will reroute this to the appropriate QE Associate. Regards, Giri