Bug 1651395 - [RHCS 2.5z3 - osds failed to come up / cephdisk + collocated]
Summary: [RHCS 2.5z3 - osds failed to come up / cephdisk + collocated]
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Ansible
Version: 2.5
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: z4
Target Release: 2.5
Assignee: Sébastien Han
QA Contact: Vasu Kulkarni
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-11-19 22:25 UTC by Vasu Kulkarni
Modified: 2018-11-27 20:22 UTC
CC List: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-11-27 20:22:49 UTC
Embargoed:



Description Vasu Kulkarni 2018-11-19 22:25:34 UTC
Description of problem:

1) Set up a cluster using the 2.5z3 build

Versions:
ceph-ansible-3.0.47-1.el7cp
ceph-10.2.10-43.el7cp

The playbook runs successfully, but the OSDs don't come up. It should have failed and complained at this point; instead it continues and reports success.
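
To confirm the OSD state after such a run, a minimal check from a monitor node looks roughly like this (sketch only; assumes the client.admin keyring is available on that node):

# Run on a mon node (or any node with the client.admin keyring).
ceph -s         # overall health; HEALTH_ERR with "no osds" in this run
ceph osd stat   # expect "<N> osds: <N> up, <N> in"; here it reports 0
ceph osd tree   # per-host / per-OSD view; empty when no OSDs ever registered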

Group Vars:
ceph_conf_overrides:
  global:
    mon_max_pg_per_osd: 1024
    osd_default_pool_size: 2
    osd_pool_default_pg_num: 64
    osd_pool_default_pgp_num: 64
  mon:
    mon_allow_pool_delete: true
ceph_docker_image: rhceph
ceph_docker_image_tag: ceph-2-rhel-7-containers-candidate-69955-20181031213505
ceph_docker_registry: brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888
ceph_origin: distro
ceph_repository: rhcs
ceph_stable: true
ceph_stable_release: jewel
ceph_stable_rh_storage: true
ceph_test: true
cephfs_pools:
- name: cephfs_data
  pgs: '8'
- name: cephfs_metadata
  pgs: '8'
copy_admin_key: true
fetch_directory: ~/fetch
journal_size: 1024
osd_auto_discovery: false
osd_scenario: collocated
public_network: 172.16.0.0/12
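
For reference, these group vars are consumed by the standard ceph-ansible site playbook; the invocation would look roughly like the following (sketch; the install path and inventory file name are assumptions, not taken from the failing job):

# On the Ansible admin node, with ceph-ansible-3.0.47 installed from the RPM:
cd /usr/share/ceph-ansible
cp site.yml.sample site.yml
ansible-playbook -i hosts site.yml   # "hosts" inventory lists the mon, osd and mds nodes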


     health HEALTH_ERR
            256 pgs are stuck inactive for more than 300 seconds
            256 pgs stuck inactive
            256 pgs stuck unclean
            no osds
     monmap e1: 3 mons at {ceph-jenkins-build-1542654237369-node3-mon=172.16.115.83:6789/0,ceph-jenkins-build-1542654237369-node7-mon=172.16.115.99:6789/0,ceph-jenkins-build-1542654237369-node8-mon=172.16.115.66:6789/0}
            election epoch 6, quorum 0,1,2 ceph-jenkins-build-1542654237369-node8-mon,ceph-jenkins-build-1542654237369-node3-mon,ceph-jenkins-build-1542654237369-node7-mon
      fsmap e2: 0/0/1 up
     osdmap e5: 0 osds: 0 up, 0 in
            flags sortbitwise,require_jewel_osds,recovery_deletes
      pgmap v6: 256 pgs, 4 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                 256 creating
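
Since the osdmap shows 0 OSDs, the next step is to inspect the OSD nodes directly; a typical ceph-disk / collocated check would be (sketch; commands only, output not captured here):

# On each OSD node:
ceph-disk list                            # were data/journal partitions prepared at all?
lsblk                                     # raw partition layout on the OSD devices
systemctl list-units 'ceph-osd@*' --all   # were any ceph-osd units created, and did they fail?
journalctl -u 'ceph-osd@*' --no-pager     # activation errors, if any units exist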

Full logs:
http://magna002.ceph.redhat.com/cephci-jenkins/cephci-run-1542654237369/ceph_ansible_0.log

