Bug 1573286 - Collocated MDS and OSD containers on the same host lead to deployment failure
Summary: Collocated MDS and OSD containers on the same host lead to deployment failure
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Ansible
Version: 2.5
Hardware: Unspecified
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: 2.*
Assignee: Sébastien Han
QA Contact: Vasishta
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-04-30 18:17 UTC by Yevhenii Shapovalov
Modified: 2019-08-23 06:18 UTC
CC List: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-08-23 06:18:15 UTC
Embargoed:


Attachments
Ansible log (2.61 MB, text/plain)
2018-05-03 14:40 UTC, Yevhenii Shapovalov

Description Yevhenii Shapovalov 2018-04-30 18:17:13 UTC
Description of problem:
Collocated MDS and OSD containers on the same host lead to deployment failure.

Version-Release number of selected component (if applicable):
ceph-ansible 3.0.25
ceph_docker_image_tag: 2.5-4
Also reproducible with:
ceph_docker_image_tag: 2.5-3

How reproducible:
100%

Steps to Reproduce:
1. Create a configuration for a containerized deployment with MDS and OSD containers on the same host
2. Start the deployment (see the playbook invocation sketch below)
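
For reference, a minimal sketch of how such a containerized deployment is typically started with ceph-ansible 3.0, assuming the stock site-docker playbook and the inventory file shown under "Additional info" below:

# run from the ceph-ansible checkout on the installer node;
# the playbook and inventory file names here are assumptions
cp site-docker.yml.sample site-docker.yml
ansible-playbook -i hosts site-docker.yml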

Workaround:
Deploy the OSD and MDS roles on different nodes (see the inventory sketch below).
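
Purely as an illustration of the workaround (hostnames are placeholders, not from this cluster), the point is to keep hosts in the [mdss] group out of the [osds] group:

[osds]
osd-node monitor_interface=eth0 devices='["/dev/vdb", "/dev/vdc"]'
[mdss]
mds-node monitor_interface=eth0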

Actual results:
PLAY RECAP *********************************************************************
ceph-yshap-run903-node2-osdmon : ok=128  changed=17   unreachable=0    failed=0   
ceph-yshap-run903-node3-osdrgw : ok=133  changed=14   unreachable=0    failed=0   
ceph-yshap-run903-node4-osd : ok=69   changed=10   unreachable=0    failed=0   
ceph-yshap-run903-node5-monmds : ok=113  changed=14   unreachable=0    failed=0   
ceph-yshap-run903-node6-monrgw : ok=116  changed=13   unreachable=0    failed=0   
ceph-yshap-run903-node7-osdmds : ok=102  changed=11   unreachable=0    failed=1   

Expected results:
successful deployment

Additional info:
[root@ceph-yshap-run903-node1-installer ceph-ansible]# cat hosts 
[mons]
ceph-yshap-run903-node2-osdmon monitor_interface=eth0
ceph-yshap-run903-node5-monmds monitor_interface=eth0
ceph-yshap-run903-node6-monrgw monitor_interface=eth0
[osds]
ceph-yshap-run903-node3-osdrgw monitor_interface=eth0  devices='["/dev/vdb", "/dev/vdc", "/dev/vdd"]'
ceph-yshap-run903-node2-osdmon monitor_interface=eth0  devices='["/dev/vdb", "/dev/vdc", "/dev/vdd"]'
ceph-yshap-run903-node4-osd monitor_interface=eth0  devices='["/dev/vdb", "/dev/vdc", "/dev/vdd"]'
ceph-yshap-run903-node7-osdmds monitor_interface=eth0  devices='["/dev/vdb", "/dev/vdc", "/dev/vdd"]'
[mdss]
ceph-yshap-run903-node5-monmds monitor_interface=eth0
ceph-yshap-run903-node7-osdmds monitor_interface=eth0
[rgws]
ceph-yshap-run903-node3-osdrgw radosgw_interface=eth0
ceph-yshap-run903-node6-monrgw radosgw_interface=eth0

[root@ceph-yshap-run903-node1-installer ceph-ansible]# cat group_vars/all.yml
ceph_conf_overrides:
  global:
    mon_max_pg_per_osd: 1024
    osd_default_pool_size: 2
    osd_pool_default_pg_num: 64
    osd_pool_default_pgp_num: 64
  mon:
    mon_allow_pool_delete: true
ceph_docker_image: rhceph
ceph_docker_image_tag: 2.5-4
ceph_docker_registry: brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888
ceph_origin: distro
ceph_repository: rhcs
ceph_stable: true
ceph_stable_release: jewel
ceph_stable_rh_storage: true
ceph_test: true
containerized_deployment: true
copy_admin_key: true
dedicated_devices:
- /dev/vde
- /dev/vde
- /dev/vde
fetch_directory: ~/fetch
journal_size: 1024
osd_auto_discovery: false
osd_scenario: non-collocated
public_network: 172.16.0.0/12
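
Note on the OSD layout above: with osd_scenario: non-collocated, ceph-ansible pairs each entry of devices with the entry at the same index in dedicated_devices for the journal, so the intended mapping on every OSD host reads roughly as follows (an illustrative reading of the configuration, not taken from the logs):

/dev/vdb -> journal on /dev/vde
/dev/vdc -> journal on /dev/vde
/dev/vdd -> journal on /dev/vde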

Comment 6 Sébastien Han 2018-05-03 12:47:52 UTC
log?

Comment 7 Yevhenii Shapovalov 2018-05-03 14:40:45 UTC
Created attachment 1430742 [details]
Ansible log

Comment 8 Yevhenii Shapovalov 2018-05-03 14:43:34 UTC
The log has been added. Also, I would like to note that the bug is reproducible only with a dedicated journal.
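
For contrast, a collocated-journal variant of the OSD settings, which per this comment does not hit the failure, would look roughly like the sketch below (key names as used by ceph-ansible 3.0; treat this as an assumption, not a verified configuration):

osd_scenario: collocated
devices:
  - /dev/vdb
  - /dev/vdc
  - /dev/vdd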

Comment 9 Giridhar Ramaraju 2019-08-05 13:06:30 UTC
Updating the QA Contact to Hemant. Hemant will be rerouting them to the appropriate QE Associate.

Regards,
Giri
