Bug 1326128

Summary: osd names being updated incorrectly after crush map update
Product: Red Hat Ceph Storage
Reporter: Christina Meno <gmeno>
Component: Calamari
Assignee: Christina Meno <gmeno>
Calamari sub component: Back-end
QA Contact: ceph-qe-bugs <ceph-qe-bugs>
Status: CLOSED WORKSFORME
Severity: medium
Priority: unspecified
CC: ceph-eng-bugs, gmeno, hnallurv, kdreyer
Version: 2.0
Target Milestone: rc
Target Release: 2.0
Hardware: Unspecified
OS: Unspecified
Doc Type: Bug Fix
Type: Bug
Last Closed: 2016-05-26 15:25:23 UTC

Description Christina Meno 2016-04-11 22:04:58 UTC
Description of problem:

One more observation: once this crush map/rule issue happens, the API endpoint /api/v2/cluster/{fsid}/server lists the OSD node names as "general". I suspect something is confused between the configuration and the crush bucket/map.

Below is a sample of the output:


--------------------------------
[
    {
        "fqdn": "dhcp47-98.lab.eng.blr.redhat.com", 
        "hostname": "dhcp47-98.lab.eng.blr.redhat.com", 
        "services": [
            {
                "fsid": "b95dbe5d-b880-4cd7-bcaf-d97a4f82b185", 
                "type": "mon", 
                "id": "c", 
                "running": true
            }
        ], 
        "frontend_addr": "10.70.47.98", 
        "backend_addr": null, 
        "frontend_iface": null, 
        "backend_iface": null, 
        "managed": true, 
        "last_contact": "2016-04-04T05:43:31.851396+00:00", 
        "boot_time": "2016-03-29T18:22:11+00:00", 
        "ceph_version": "0.94.5-9.el7cp"
    }, 
    {
        "fqdn": "general", 
        "hostname": "general", 
        "services": [
            {
                "fsid": "b95dbe5d-b880-4cd7-bcaf-d97a4f82b185", 
                "type": "osd", 
                "id": "2", 
                "running": true
            }, 
            {
                "fsid": "b95dbe5d-b880-4cd7-bcaf-d97a4f82b185", 
                "type": "osd", 
                "id": "1", 
                "running": true
            }, 
            {
                "fsid": "b95dbe5d-b880-4cd7-bcaf-d97a4f82b185", 
                "type": "osd", 
                "id": "0", 
                "running": true
            }
        ], 
        "frontend_addr": "10.70.47.95", 
        "backend_addr": "10.70.47.95", 
        "frontend_iface": null, 
        "backend_iface": null, 
        "managed": false, 
        "last_contact": null, 
        "boot_time": null, 
        "ceph_version": null
    }
]
--------------------------------
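The mislabeled host is easy to spot programmatically in a response like the one above: all OSD services end up grouped under a pseudo-host whose "fqdn" is the crush bucket name rather than a real hostname. A minimal sketch (the sample payload is abbreviated to the relevant fields):

```python
import json

# Abbreviated form of the /api/v2/cluster/{fsid}/server response shown above.
servers = json.loads("""
[
  {"fqdn": "dhcp47-98.lab.eng.blr.redhat.com",
   "services": [{"type": "mon", "id": "c", "running": true}]},
  {"fqdn": "general",
   "services": [{"type": "osd", "id": "2", "running": true},
                {"type": "osd", "id": "1", "running": true},
                {"type": "osd", "id": "0", "running": true}]}
]
""")

# Flag any server that hosts OSDs but whose fqdn is not a dotted hostname;
# after the bad crush update this matches the bogus "general" entry.
suspect = [s["fqdn"] for s in servers
           if "." not in s["fqdn"]
           and any(svc["type"] == "osd" for svc in s["services"])]
print(suspect)  # -> ['general']
```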


Comment 2 Christina Meno 2016-04-27 15:58:34 UTC
Steps to reproduce:

POST request to create a crush node

POST http://10.70.46.139:8002/api/v2/cluster/deedcb4c-a67a-4997-93a6-92149ad2622a/crush_node

{"bucket_type": "root", "name": "general", "items": [{"id": 0, "weight": 0.0, "pos": 0}]}

Result: /api/v2/cluster/{fsid}/server lists the OSD node name as "general".
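The reproduction step above can be sketched in Python. The endpoint URL, fsid, and payload are taken verbatim from this comment; the request is only constructed here, since actually sending it requires a live Calamari API (and whatever authentication that deployment uses):

```python
import json
import urllib.request

BASE = "http://10.70.46.139:8002/api/v2/cluster"
FSID = "deedcb4c-a67a-4997-93a6-92149ad2622a"

# Crush node creation payload from the report: a new "root" bucket named
# "general" containing osd.0 at weight 0.
payload = {
    "bucket_type": "root",
    "name": "general",
    "items": [{"id": 0, "weight": 0.0, "pos": 0}],
}

req = urllib.request.Request(
    url="%s/%s/crush_node" % (BASE, FSID),
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # only works against a live Calamari API
print(req.get_method(), req.full_url)
```

After this POST succeeded, GET /api/v2/cluster/{fsid}/server showed the OSD host under the bucket name "general", as described above.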

Comment 3 Christina Meno 2016-05-19 17:14:07 UTC
I can no longer reproduce this. I suspect that it was fixed by a recent commit.
I'll test using another setup and close this if that remains the case.
Either way I should have a fix by the 20th of May.

Comment 4 Ken Dreyer (Red Hat) 2016-05-24 21:54:41 UTC
Gregory, should we close this?