Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1520004

Summary: per host CephAnsibleDisksConfig are ignored
Product: Red Hat OpenStack
Reporter: Gonéri Le Bouder <goneri>
Component: openstack-tripleo-heat-templates
Assignee: Giulio Fidente <gfidente>
Status: CLOSED ERRATA
QA Contact: Yogev Rabl <yrabl>
Severity: high
Docs Contact:
Priority: high
Version: 12.0 (Pike)
CC: arkady_kanevsky, athomas, bloch, ebarrera, emacchi, gael_rehault, gfidente, johfulto, jthomas, lbopf, markmc, mburns, mcornea, mflusche, morazi, nalmond, pablo.iranzo, rhel-osp-director-maint, smerrow, yrabl
Target Milestone: z1
Keywords: Triaged, ZStream
Target Release: 12.0 (Pike)
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: openstack-tripleo-heat-templates-7.0.3-20.el7ost, openstack-tripleo-common-7.6.3-9.el7ost
Doc Type: Known Issue
Doc Text: It is only possible to deploy Ceph storage servers if their disk devices are homogeneous.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-01-30 21:24:32 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1394885, 1516324

Description Gonéri Le Bouder 2017-12-01 21:59:57 UTC
Description of problem:

I have the following template to map my disks:

resource_registry:
  OS::TripleO::CephStorageExtraConfigPre: ./overcloud/puppet/extraconfig/pre_deploy/per_node.yaml

parameter_defaults:
  NodeDataLookup: >
    {
      "4C4C4544-0047-3610-8031-C8C04F4A4B32": {
        "CephAnsibleDisksConfig": {
          "dedicated_devices": [
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47635362-lun-0",
(...)
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47635342-lun-0"
          ],
          "devices": [                                                                                                
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fc0-lun-0",
(...)
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fd3-lun-0"
          ],
          "osd_scenario": "non-collocated"
        }
      },
      "4C4C4544-0047-3610-8053-C8C04F484B32": {
        "CephAnsibleDisksConfig": {
          "dedicated_devices": [
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47633ee2-lun-0",
(...)
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47636322-lun-0"
          ],
          "devices": [
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5c0-lun-0",
(...)
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5d3-lun-0"
          ],
          "osd_scenario": "non-collocated"
        }
      },
      "4C4C4544-0047-3610-8054-C8C04F484B32": {
        "CephAnsibleDisksConfig": {
          "dedicated_devices": [
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476349f2-lun-0",
(...)
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476360e2-lun-0"
          ],
          "devices": [
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdc0-lun-0",
(...)
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdd1-lun-0"
          ],
          "osd_scenario": "non-collocated"
        }
      }
    }
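Because NodeDataLookup is passed as a JSON string inside a YAML scalar, a malformed payload (for example, a trailing comma) fails silently at best. A minimal pre-deployment sanity check, using a hypothetical sample payload rather than the real device lists above:

```shell
# Verify the NodeDataLookup payload parses as strict JSON before deploying;
# trailing commas and unquoted keys are common mistakes that break it.
# The payload below is a hypothetical minimal sample.
python3 -c 'import json, sys; json.load(sys.stdin); print("valid JSON")' <<'EOF'
{"4C4C4544-0047-3610-8031-C8C04F4A4B32":
  {"CephAnsibleDisksConfig":
    {"devices": ["/dev/vdb"], "osd_scenario": "collocated"}}}
EOF
```

Substituting the real NodeDataLookup body for the heredoc contents gives a quick pass/fail check before running the overcloud deploy.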



And the following extra template:

resource_registry:
    OS::TripleO::NodeUserData: ./first-boot.yaml

parameter_defaults:
  NovaEnableRbdBackend: true

  CephConfigOverrides:
    journal_size: 10000
    journal_collocation: false
    raw_multi_journal: true


If I log in to the first Ceph node, I can confirm that the data is propagated properly with:

cat /etc/puppet/hieradata/4C4C4544-0047-3610-8053-C8C04F484B32.json |jq .

However, when I check /var/log/mistral/ceph-install-workflow.log, ceph-ansible only knows about /dev/vda and ignores my CephAnsibleDisksConfig key.
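The hieradata check above can be scripted; this sketch uses a hypothetical sample file (on a real node the file lives at /etc/puppet/hieradata/<system-uuid>.json) and Python's stdlib in place of jq:

```shell
# Mirror the hieradata check on a hypothetical sample file; the payload
# shape follows the per-node JSON described in the comment above.
cat > /tmp/hieradata-sample.json <<'EOF'
{"CephAnsibleDisksConfig": {"devices": ["/dev/vdb", "/dev/vdc"],
                            "osd_scenario": "collocated"}}
EOF
# Print the OSD device list that ceph-ansible should pick up for this node.
python3 -c '
import json
with open("/tmp/hieradata-sample.json") as f:
    data = json.load(f)
print("\n".join(data["CephAnsibleDisksConfig"]["devices"]))
'
```

Comparing this output against the device list in ceph-install-workflow.log shows whether the per-host override actually reached ceph-ansible.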


Version-Release number of selected component (if applicable):

Puddle: RH7-RHOS-12.0 2017-11-28.3
ceph-ansible-3.0.14-1.el7cp.noarch

Comment 5 Gonéri Le Bouder 2017-12-04 13:58:54 UTC
This setup comes with 3 Ceph nodes. Each of them has 3 SSDs and 12 SAS disks. Every time I do a deployment, at least one of the 3 nodes gets a disk renamed. We won't be able to do any reliable deployment with the director installer.

So, unless we go through a manual deployment, I don't see any other option to reliably get a working Ceph deployment. I think this will have a large impact and should be mentioned in the documentation.

Comment 12 Gonéri Le Bouder 2017-12-06 14:20:11 UTC
osd_auto_discovery would not help here because the problem happens after the first reboot and before the Ceph deployment.

Comment 15 Mike Orazi 2017-12-14 19:49:24 UTC
This is blocking partner integration activity. Accordingly, I would like to request a hotfix so that testing can proceed.

Comment 18 Jon Schlueter 2018-01-09 18:43:44 UTC
Build openstack-tripleo-heat-templates-7.0.3-20.el7ost includes a patch from this bug; please update the BZ state accordingly.

Comment 19 Jon Schlueter 2018-01-09 18:45:21 UTC
Build openstack-tripleo-common-7.6.3-9.el7ost contains a patch from this bug; please update the BZ state accordingly.

Comment 31 Gonéri Le Bouder 2018-01-24 20:17:49 UTC
Hmm, there is something wrong with my setup. My configuration is still ignored, even though it looks correct according to the doc [0]; ceph-ansible still tries to access /dev/vdb.

[0]: https://docs.openstack.org/tripleo-docs/latest/install/advanced_deployment/node_specific_hieradata.html


This is my file:

parameter_defaults:
  NodeDataLookup: >
    {
      "4C4C4544-0047-3610-8031-C8C04F4A4B32": {
        "dedicated_devices": [
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47635362-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47635362-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47635362-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47635362-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47633f32-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47633f32-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47633f32-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47633f32-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47634632-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47634632-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47634632-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47634632-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47635342-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47635342-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47635342-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47635342-lun-0"
        ],
        "devices": [
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fc0-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fc1-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fc2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fc3-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fc4-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fc5-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fc6-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fc7-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fc8-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fc9-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fca-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fcb-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fd0-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fd1-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fd2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fd3-lun-0"
        ],
        "osd_scenario": "non-collocated"
      },
      "4C4C4544-0047-3610-8053-C8C04F484B32": {
        "dedicated_devices": [
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47633ee2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47633ee2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47633ee2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47633ee2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47636412-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47636412-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47636412-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47636412-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47634762-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47634762-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47634762-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47634762-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47636322-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47636322-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47636322-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47636322-lun-0"
        ],
        "devices": [
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5c0-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5c1-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5c2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5c3-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5c4-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5c5-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5c6-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5c7-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5c8-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5c9-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5ca-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5cb-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5d0-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5d1-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5d2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5d3-lun-0"
        ],
        "osd_scenario": "non-collocated"
      },
      "4C4C4544-0047-3610-8054-C8C04F484B32": {
        "dedicated_devices": [
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476349f2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476349f2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476349f2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476349f2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476346d2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476346d2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476346d2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476346d2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476351a2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476351a2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476351a2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476351a2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476360e2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476360e2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476360e2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476360e2-lun-0"
        ],
        "devices": [
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdc0-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdc1-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdc2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdc3-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdc4-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdc5-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdc6-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdc7-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdc8-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdc9-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdca-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdcb-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdd0-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdd1-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdd2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdd3-lun-0"
        ],
        "osd_scenario": "non-collocated"
      }
    }
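In the "non-collocated" scenario, ceph-ansible pairs each entry in devices with the entry at the same index in dedicated_devices (which is why the journal SSDs are repeated four times each above), so both lists must have the same length. A hedged sanity check over one hypothetical node entry:

```shell
# For osd_scenario "non-collocated", devices[i] is paired with
# dedicated_devices[i] as its journal device, so the lists must match
# in length. The node entry below is a hypothetical sample.
python3 -c '
import json, sys
node = json.load(sys.stdin)
if node["osd_scenario"] == "non-collocated":
    assert len(node["devices"]) == len(node["dedicated_devices"]), \
        "devices/dedicated_devices length mismatch"
print("ok")
' <<'EOF'
{"devices": ["/dev/vdb", "/dev/vdc"],
 "dedicated_devices": ["/dev/vdd", "/dev/vdd"],
 "osd_scenario": "non-collocated"}
EOF
```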

Comment 32 Gonéri Le Bouder 2018-01-24 20:20:01 UTC
Please ignore my previous comment, I was stuck with an old puddle (RH7-RHOS-12.0 2017-12-01.4).

Comment 35 Yogev Rabl 2018-01-29 23:42:27 UTC
Verification failed: ceph-ansible failed to deploy the Ceph cluster with the given OSD configuration.

Comment 38 Yogev Rabl 2018-01-30 14:14:03 UTC
Verified with the following configuration:

    NodeDataLookup: |
        {"4929BFB8-0ED4-48D7-B34F-9AD615E96112": {"devices": ["/dev/vdb", "/dev/vdc"], "osd_scenario": "collocated"},
        "9EFD920F-FC86-4AA4-BBD5-CBD075999C6D": {"devices": ["/dev/vdb"], "dedicated_devices": ["/dev/vdc"], "osd_scenario": "non-collocated"},
        "5CCE1DF9-0905-4B7C-A0B9-FEDDB19191C8": {"devices": ["/dev/vdb"], "dedicated_devices": ["/dev/vdc"], "osd_scenario": "non-collocated"}}

Comment 41 errata-xmlrpc 2018-01-30 21:24:32 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0253

Comment 42 John Fulton 2018-07-13 12:35:35 UTC
*** Bug 1600856 has been marked as a duplicate of this bug. ***