Bug 1361548

Summary: Missing OSD number for some EC pools
Product: [Red Hat Storage] Red Hat Storage Console
Reporter: Martin Kudlej <mkudlej>
Component: UI
Assignee: Darshan <dnarayan>
Status: CLOSED ERRATA
QA Contact: Martin Kudlej <mkudlej>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: 2
CC: nthomas, sankarshan, vsarmila
Target Milestone: ---
Target Release: 2
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: rhscon-ceph-0.0.39-1.el7scon.x86_64.rpm
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-08-23 19:58:27 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1353450
Attachments:
missing number of OSDs for some pools (flags: none)

Description Martin Kudlej 2016-07-29 11:54:58 UTC
Description of problem:
As you can see in the attached screenshot, some pools have no OSD count in the list. It is true that there are not enough OSDs for the 8+4 EC pool, but there are enough OSDs for the 6+3 EC pool.

Version-Release number of selected component (if applicable):
ceph-ansible-1.0.5-31.el7scon.noarch
ceph-installer-1.0.14-1.el7scon.noarch
rhscon-ceph-0.0.38-1.el7scon.x86_64
rhscon-core-0.0.38-1.el7scon.x86_64
rhscon-core-selinux-0.0.38-1.el7scon.noarch
rhscon-ui-0.0.51-1.el7scon.noarch

How reproducible:
most probably 100%

Steps to Reproduce:
1. Create a cluster.
2. Create all types of EC pools.

Actual results:
The OSD count is missing for the 6+3 and 8+4 pools.

Expected results:
All pool types show their OSD count.
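The arithmetic behind the report can be sketched as follows. This is a minimal illustration, not console code, assuming that a k+m erasure-coded pool needs at least k + m OSDs when the CRUSH rule places each chunk on a distinct OSD (the default failure-domain behaviour):

```python
def min_osds_for_ec_pool(k: int, m: int) -> int:
    """An erasure-coded pool splits each object into k data chunks
    plus m coding chunks; when each chunk must land on a distinct
    OSD, at least k + m OSDs are required."""
    return k + m

cluster_osds = 10  # the test cluster below has osd.0 .. osd.9

for k, m in [(6, 3), (8, 4)]:
    needed = min_osds_for_ec_pool(k, m)
    status = "OK" if cluster_osds >= needed else "insufficient"
    print(f"{k}+{m}: needs {needed} OSDs, have {cluster_osds} -> {status}")
```

With the 10-OSD cluster shown below, 6+3 (9 OSDs needed) fits but 8+4 (12 needed) does not, which is why only the 8+4 pool legitimately lacks capacity.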

$ ceph -c /etc/ceph/cl1.conf osd crush tree
[
    {
        "id": -1,
        "name": "default",
        "type": "root",
        "type_id": 10,
        "items": [
            {
                "id": -2,
                "name": "mkudlej-usm1-node1",
                "type": "host",
                "type_id": 1,
                "items": [
                    {
                        "id": 0,
                        "name": "osd.0",
                        "type": "osd",
                        "type_id": 0,
                        "crush_weight": 0.009995,
                        "depth": 2
                    }
                ]
            },
            {
                "id": -3,
                "name": "mkudlej-usm1-node2",
                "type": "host",
                "type_id": 1,
                "items": [
                    {
                        "id": 1,
                        "name": "osd.1",
                        "type": "osd",
                        "type_id": 0,
                        "crush_weight": 0.009995,
                        "depth": 2
                    }
                ]
            },
            {
                "id": -4,
                "name": "mkudlej-usm1-node3",
                "type": "host",
                "type_id": 1,
                "items": [
                    {
                        "id": 2,
                        "name": "osd.2",
                        "type": "osd",
                        "type_id": 0,
                        "crush_weight": 0.009995,
                        "depth": 2
                    }
                ]
            },
            {
                "id": -5,
                "name": "mkudlej-usm1-node4",
                "type": "host",
                "type_id": 1,
                "items": [
                    {
                        "id": 3,
                        "name": "osd.3",
                        "type": "osd",
                        "type_id": 0,
                        "crush_weight": 0.009995,
                        "depth": 2
                    }
                ]
            },
            {
                "id": -6,
                "name": "mkudlej-usm2-node1",
                "type": "host",
                "type_id": 1,
                "items": [
                    {
                        "id": 4,
                        "name": "osd.4",
                        "type": "osd",
                        "type_id": 0,
                        "crush_weight": 0.009995,
                        "depth": 2
                    }
                ]
            },
            {
                "id": -7,
                "name": "mkudlej-usm2-node2",
                "type": "host",
                "type_id": 1,
                "items": [
                    {
                        "id": 5,
                        "name": "osd.5",
                        "type": "osd",
                        "type_id": 0,
                        "crush_weight": 0.009995,
                        "depth": 2
                    }
                ]
            },
            {
                "id": -8,
                "name": "mkudlej-usm2-node3",
                "type": "host",
                "type_id": 1,
                "items": [
                    {
                        "id": 6,
                        "name": "osd.6",
                        "type": "osd",
                        "type_id": 0,
                        "crush_weight": 0.009995,
                        "depth": 2
                    }
                ]
            },
            {
                "id": -9,
                "name": "mkudlej-usm2-node4",
                "type": "host",
                "type_id": 1,
                "items": [
                    {
                        "id": 7,
                        "name": "osd.7",
                        "type": "osd",
                        "type_id": 0,
                        "crush_weight": 0.009995,
                        "depth": 2
                    }
                ]
            },
            {
                "id": -10,
                "name": "mkudlej-usm2-node5",
                "type": "host",
                "type_id": 1,
                "items": [
                    {
                        "id": 8,
                        "name": "osd.8",
                        "type": "osd",
                        "type_id": 0,
                        "crush_weight": 0.009995,
                        "depth": 2
                    }
                ]
            },
            {
                "id": -11,
                "name": "mkudlej-usm2-node6",
                "type": "host",
                "type_id": 1,
                "items": [
                    {
                        "id": 9,
                        "name": "osd.9",
                        "type": "osd",
                        "type_id": 0,
                        "crush_weight": 0.009995,
                        "depth": 2
                    }
                ]
            }
        ]
    }
]
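The OSD total the console should display can be derived from output like the above. A minimal sketch, assuming the JSON shape shown (nested `items` lists with leaf nodes of `"type": "osd"`); the embedded sample is a trimmed-down stand-in for the real dump:

```python
import json

def count_osds(nodes):
    """Recursively count leaf entries of type 'osd' in a
    `ceph osd crush tree` JSON dump."""
    total = 0
    for node in nodes:
        if node.get("type") == "osd":
            total += 1
        total += count_osds(node.get("items", []))
    return total

# Trimmed sample mimicking the structure of the dump above.
sample = json.loads("""
[{"id": -1, "name": "default", "type": "root",
  "items": [{"id": -2, "name": "host1", "type": "host",
             "items": [{"id": 0, "name": "osd.0", "type": "osd"}]}]}]
""")

print(count_osds(sample))  # -> 1; the full dump above yields 10
```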

Comment 3 Martin Kudlej 2016-07-29 11:58:02 UTC
Created attachment 1185477 [details]
server logs

Comment 5 Martin Kudlej 2016-08-04 13:27:55 UTC
Created attachment 1187512 [details]
missing number of OSDs for some pools

Comment 6 Martin Kudlej 2016-08-04 13:29:24 UTC
Tested with
ceph-ansible-1.0.5-31.el7scon.noarch
ceph-installer-1.0.14-1.el7scon.noarch
rhscon-ceph-0.0.39-1.el7scon.x86_64
rhscon-core-0.0.39-1.el7scon.x86_64
rhscon-core-selinux-0.0.39-1.el7scon.noarch
rhscon-ui-0.0.51-1.el7scon.noarch
and it works.

Comment 8 errata-xmlrpc 2016-08-23 19:58:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2016:1754