Bug 1520004 - per host CephAnsibleDisksConfig are ignored
Status: CLOSED ERRATA
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-tripleo-heat-templates
Version: 12.0 (Pike)
Hardware: Unspecified   OS: Unspecified
Priority: high   Severity: high
Target Milestone: z1
Target Release: 12.0 (Pike)
Assigned To: Giulio Fidente
QA Contact: Yogev Rabl
Keywords: Triaged, ZStream
Duplicates: 1600856
Depends On:
Blocks: 1394885 1516324
Reported: 2017-12-01 16:59 EST by Gonéri Le Bouder
Modified: 2018-07-16 11:56 EDT
CC List: 20 users

See Also:
Fixed In Version: openstack-tripleo-heat-templates-7.0.3-20.el7ost, openstack-tripleo-common-7.6.3-9.el7ost
Doc Type: Known Issue
Doc Text:
It is only possible to deploy Ceph storage servers if their disk devices are homogeneous.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-01-30 16:24:32 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---




External Trackers
Tracker ID Priority Status Summary Last Updated
Launchpad 1736707 None None None 2017-12-06 06:01 EST
OpenStack gerrit 528283 None stable/pike: MERGED tripleo-common: Add json_parse and yaml_parse mistral expression functions (I9970abae47ca355861e37cdb5db0ab24d564b57a) 2018-01-09 12:55 EST
OpenStack gerrit 528755 None stable/pike: MERGED tripleo-common: Consume NodeDataLookup in ceph-ansible (Ia23825aea938f6f9bcf536e35cad562a1b96c93b) 2018-01-09 12:55 EST
OpenStack gerrit 528757 None stable/pike: NEW tripleo-heat-templates: Passes NodeDataLookup to ceph-ansible workflow (Ie7a9f10f0c821b8c642494a4d3933b2901f39d40) 2018-01-09 12:55 EST
Red Hat Product Errata RHBA-2018:0253 normal SHIPPED_LIVE Red Hat OpenStack Platform 12.0 director Bug Fix Advisory 2018-02-15 22:41:33 EST

Description Gonéri Le Bouder 2017-12-01 16:59:57 EST
Description of problem:

I have the following template to map my disks:

resource_registry:
  OS::TripleO::CephStorageExtraConfigPre: ./overcloud/puppet/extraconfig/pre_deploy/per_node.yaml

parameter_defaults:
  NodeDataLookup: >
    {
      "4C4C4544-0047-3610-8031-C8C04F4A4B32": {
        "CephAnsibleDisksConfig": {
          "dedicated_devices": [
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47635362-lun-0",
(...)
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47635342-lun-0"
          ],
          "devices": [
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fc0-lun-0",
(...)
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fd3-lun-0"
          ],
          "osd_scenario": "non-collocated"
        }
      },
      "4C4C4544-0047-3610-8053-C8C04F484B32": {
        "CephAnsibleDisksConfig": {
          "dedicated_devices": [
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47633ee2-lun-0",
(...)
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47636322-lun-0"
          ],
          "devices": [
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5c0-lun-0",
(...)
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5d3-lun-0"
          ],
          "osd_scenario": "non-collocated"
        }
      },
      "4C4C4544-0047-3610-8054-C8C04F484B32": {
        "CephAnsibleDisksConfig": {
          "dedicated_devices": [
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476349f2-lun-0",
(...)
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476360e2-lun-0"
          ],
          "devices": [
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdc0-lun-0",
(...)
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdd1-lun-0",
          ],
          "osd_scenario": "non-collocated"
        }
      }
    }
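As an aside, a mapping like the one above can be sanity-checked before deploying. The sketch below is not part of the TripleO tooling; the UUID and device paths in it are made up for illustration. For the "non-collocated" scenario, ceph-ansible pairs each entry of `devices` with the journal device at the same index of `dedicated_devices`, so the two lists must have equal length per node (as they do in the real template above: 16 and 16, with journal SSDs repeated).

```python
import json

# Illustrative NodeDataLookup payload (made-up UUID and device names).
node_data_lookup = json.loads("""
{
  "4C4C4544-0000-0000-0000-000000000000": {
    "CephAnsibleDisksConfig": {
      "dedicated_devices": ["/dev/sdb", "/dev/sdb"],
      "devices": ["/dev/sdc", "/dev/sdd"],
      "osd_scenario": "non-collocated"
    }
  }
}
""")

def check(lookup):
    """Return the UUIDs whose non-collocated device lists are mismatched."""
    errors = []
    for uuid, per_node in lookup.items():
        cfg = per_node.get("CephAnsibleDisksConfig", {})
        if cfg.get("osd_scenario") == "non-collocated":
            if len(cfg.get("devices", [])) != len(cfg.get("dedicated_devices", [])):
                errors.append(uuid)
    return errors

print(check(node_data_lookup))  # prints [] when every node's lists line up
```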



And the following extra template:

resource_registry:
    OS::TripleO::NodeUserData: ./first-boot.yaml

parameter_defaults:
  NovaEnableRbdBackend: true

  CephConfigOverrides:
    journal_size: 10000
    journal_collocation: false
    raw_multi_journal: true


If I log in to the first Ceph node, I can validate that the data is propagated properly with:

cat /etc/puppet/hieradata/4C4C4544-0047-3610-8053-C8C04F484B32.json |jq .

However, when I check /var/log/mistral/ceph-install-workflow.log, ceph-ansible only knows about /dev/vda and ignores my CephAnsibleDisksConfig key.
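The symptom can be narrowed down by comparing the device list in the per-node hieradata against what the workflow log shows. A minimal sketch of that extraction step follows; the sample dictionary stands in for the real contents of the per-node JSON file, and the device path in it is invented:

```python
import json

# Stand-in for the contents of /etc/puppet/hieradata/<system-uuid>.json
# (sample data, not the real file).
hieradata = json.loads("""
{
  "CephAnsibleDisksConfig": {
    "devices": ["/dev/disk/by-path/pci-0000:3c:00.0-sas-0xabcdef00-lun-0"],
    "dedicated_devices": ["/dev/disk/by-path/pci-0000:3c:00.0-sas-0xabcdef01-lun-0"],
    "osd_scenario": "non-collocated"
  }
}
""")

# These are the devices ceph-ansible should act on. If the workflow log
# only mentions /dev/vda instead, the per-host config was dropped somewhere
# between Heat and the ceph-ansible inventory -- which is this bug.
devices = hieradata.get("CephAnsibleDisksConfig", {}).get("devices", [])
print(devices)
```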


Version-Release number of selected component (if applicable):

Puddle:  RH7-RHOS-12.0 2017-11-28.3 
ceph-ansible-3.0.14-1.el7cp.noarch
Comment 5 Gonéri Le Bouder 2017-12-04 08:58:54 EST
This setup comes with 3 Ceph nodes. Each of them has 3 SSDs and 12 SAS disks. Every time I do a deployment, at least one of the 3 nodes gets a disk renamed. We won't be able to do any deployment with the director installer anymore.

So, unless we go through a manual deployment, I don't see any other option to reliably get a working Ceph deployment. I think this will have quite a big impact and should be mentioned in the documentation.
Comment 12 Gonéri Le Bouder 2017-12-06 09:20:11 EST
osd_auto_discovery would not help here because the problem happens after the first reboot and before the Ceph deployment.
Comment 15 Mike Orazi 2017-12-14 14:49:24 EST
This is blocking partner integration activity. Accordingly, I would like to request a hot fix so that testing can proceed.
Comment 18 Jon Schlueter 2018-01-09 13:43:44 EST
Build openstack-tripleo-heat-templates-7.0.3-20.el7ost includes a patch from this bug; please update the BZ state accordingly.
Comment 19 Jon Schlueter 2018-01-09 13:45:21 EST
Build openstack-tripleo-common-7.6.3-9.el7ost contains a patch from this bug; please update the BZ accordingly.
Comment 31 Gonéri Le Bouder 2018-01-24 15:17:49 EST
Hmm, there is something wrong with my setup. My configuration is always ignored, even though it looks correct according to the docs [0]; ceph-ansible still tries to access /dev/vdb.

[0]: https://docs.openstack.org/tripleo-docs/latest/install/advanced_deployment/node_specific_hieradata.html


This is my file:

parameter_defaults:
  NodeDataLookup: >
    {
      "4C4C4544-0047-3610-8031-C8C04F4A4B32": {
        "dedicated_devices": [
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47635362-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47635362-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47635362-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47635362-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47633f32-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47633f32-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47633f32-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47633f32-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47634632-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47634632-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47634632-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47634632-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47635342-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47635342-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47635342-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47635342-lun-0"
        ],
        "devices": [
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fc0-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fc1-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fc2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fc3-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fc4-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fc5-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fc6-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fc7-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fc8-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fc9-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fca-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fcb-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fd0-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fd1-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fd2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fd3-lun-0"
        ],
        "osd_scenario": "non-collocated"
      },
      "4C4C4544-0047-3610-8053-C8C04F484B32": {
        "dedicated_devices": [
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47633ee2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47633ee2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47633ee2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47633ee2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47636412-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47636412-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47636412-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47636412-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47634762-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47634762-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47634762-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47634762-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47636322-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47636322-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47636322-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47636322-lun-0"
        ],
        "devices": [
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5c0-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5c1-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5c2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5c3-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5c4-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5c5-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5c6-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5c7-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5c8-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5c9-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5ca-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5cb-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5d0-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5d1-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5d2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5d3-lun-0"
        ],
        "osd_scenario": "non-collocated"
      },
      "4C4C4544-0047-3610-8054-C8C04F484B32": {
        "dedicated_devices": [
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476349f2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476349f2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476349f2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476349f2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476346d2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476346d2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476346d2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476346d2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476351a2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476351a2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476351a2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476351a2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476360e2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476360e2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476360e2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476360e2-lun-0"
        ],
        "devices": [
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdc0-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdc1-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdc2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdc3-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdc4-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdc5-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdc6-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdc7-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdc8-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdc9-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdca-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdcb-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdd0-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdd1-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdd2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdd3-lun-0"
        ],
        "osd_scenario": "non-collocated"
      }
    }
Comment 32 Gonéri Le Bouder 2018-01-24 15:20:01 EST
Please ignore my previous comment, I was stuck with an old puddle (RH7-RHOS-12.0 2017-12-01.4).
Comment 35 Yogev Rabl 2018-01-29 18:42:27 EST
The verification failed: ceph-ansible failed to deploy the Ceph cluster with the given OSD configuration.
Comment 38 Yogev Rabl 2018-01-30 09:14:03 EST
Verified with the following configuration:

    NodeDataLookup: |
        {"4929BFB8-0ED4-48D7-B34F-9AD615E96112": {"devices": ["/dev/vdb", "/dev/vdc"], "osd_scenario": "collocated"},
        "9EFD920F-FC86-4AA4-BBD5-CBD075999C6D": {"devices": ["/dev/vdb"], "dedicated_devices": ["/dev/vdc"], "osd_scenario": "non-collocated"},
        "5CCE1DF9-0905-4B7C-A0B9-FEDDB19191C8": {"devices": ["/dev/vdb"], "dedicated_devices": ["/dev/vdc"], "osd_scenario": "non-collocated"}}
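The value above is plain JSON inside a YAML literal block scalar (`|`), so it can be checked for validity independently of the deployment. A small sketch using the exact verified data from this comment:

```python
import json

# The verified NodeDataLookup value from comment 38, as a raw JSON string.
node_data_lookup = json.loads(
    '{"4929BFB8-0ED4-48D7-B34F-9AD615E96112": {"devices": ["/dev/vdb", "/dev/vdc"], "osd_scenario": "collocated"},'
    '"9EFD920F-FC86-4AA4-BBD5-CBD075999C6D": {"devices": ["/dev/vdb"], "dedicated_devices": ["/dev/vdc"], "osd_scenario": "non-collocated"},'
    '"5CCE1DF9-0905-4B7C-A0B9-FEDDB19191C8": {"devices": ["/dev/vdb"], "dedicated_devices": ["/dev/vdc"], "osd_scenario": "non-collocated"}}'
)

# One entry per Ceph node, keyed by system UUID; each node can use a
# different OSD scenario, which is the per-host behavior this bug fixed.
print(sorted(cfg["osd_scenario"] for cfg in node_data_lookup.values()))
# prints ['collocated', 'non-collocated', 'non-collocated']
```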
Comment 41 errata-xmlrpc 2018-01-30 16:24:32 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0253
Comment 42 John Fulton 2018-07-13 08:35:35 EDT
*** Bug 1600856 has been marked as a duplicate of this bug. ***
