This is a tracker bug for DFG:Ceph to test OSPd support for deploying Ceph with routed storage networks, as created by BZ 1406102.
Note https://bugzilla.redhat.com/show_bug.cgi?id=1504207
*** Bug 1504207 has been marked as a duplicate of this bug. ***
This is currently blocking composable networks with L3 spine/leaf and storage nodes. Is there any chance we can work around it?

A similar limitation was found with the compute roles, which is worked around by passing hiera data in ExtraConfig like this:

parameter_defaults:
  ComputeRack1ExtraConfig:
    nova::vncproxy::host: "%{hiera('rack1_internal_api')}"
  ComputeRack2ExtraConfig:
    nova::vncproxy::host: "%{hiera('rack2_internal_api')}"

Steve Hardy has created a patch to allow this: https://review.openstack.org/#/c/514707/

Could we do the same with the Ceph storage nodes?
From the description of BZ#1504207:

————————
We need to support a list of subnets. The "working" ceph.conf would list all defined storage and storagemgmt subnets:

[root@overcloud-cephstorage3-0 ~]# cat /etc/ceph/ceph.conf | grep -e network
cluster network = 172.120.4.0/24,172.117.4.0/24,172.118.4.0/24,172.119.4.0/24
public network = 172.120.3.0/24,172.117.3.0/24,172.118.3.0/24,172.119.3.0/24
————————

I can see this in the Puppet module for Ceph: /usr/share/openstack-puppet/modules/ceph/manifests/profile/params.pp

[…]
# [*cluster_network*] The address of the cluster network.
#   Optional. {cluster-network-ip/netmask}
#
# [*public_network*] The address of the public network.
#   Optional. {public-network-ip/netmask}
[…]

Each leaf will have its own storage management and storage networks. The fix that Sasha found was adding all of them to all the Ceph nodes in each leaf:

- Is adding all of them right? It works, but I am asking in case we might only need the ones of the subnet (leaf) the Ceph node is on.
- If it is, my understanding is that we should be able to pass them via hiera using the same technique we use with the compute nodes' ExtraConfig, described in comment #5.
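If per-role ExtraConfig hiera works for the Ceph roles the same way it does for the compute roles, a per-leaf environment file might look like the sketch below. This is only an illustration: the role names (CephStorageRack1ExtraConfig, CephStorageRack2ExtraConfig) and the hiera keys (rack1_storage, rack1_storage_mgmt, etc.) are assumptions, not verified parameters; only the ceph::profile::params keys come from the params.pp quoted above.

```yaml
# Hypothetical sketch: each per-leaf Ceph role gets only its own leaf's
# subnets via hiera interpolation, mirroring the compute workaround above.
parameter_defaults:
  CephStorageRack1ExtraConfig:
    ceph::profile::params::public_network: "%{hiera('rack1_storage')}"
    ceph::profile::params::cluster_network: "%{hiera('rack1_storage_mgmt')}"
  CephStorageRack2ExtraConfig:
    ceph::profile::params::public_network: "%{hiera('rack2_storage')}"
    ceph::profile::params::cluster_network: "%{hiera('rack2_storage_mgmt')}"
```

Whether the per-leaf subnets are sufficient, or all subnets must be listed on every node as in Sasha's fix, is exactly the open question above.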
With the OSP12 codebase, the only solution/workaround I can think of is to provide the subnets via an environment file like this:

parameter_defaults:
  CephAnsibleExtraConfig:
    public_network: '172.120.3.0/24,172.117.3.0/24,172.118.3.0/24,172.119.3.0/24'
    cluster_network: '172.120.4.0/24,172.117.4.0/24,172.118.4.0/24,172.119.4.0/24'

NOTE: OSP12 does not use puppet-ceph for the deployment of the Ceph cluster but ceph-ansible.
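Since a typo in these comma-separated lists would only surface after a full deploy, a quick way to sanity-check them before writing the environment file is to parse each entry as a CIDR. This is just a local validation sketch, not part of the deployment tooling:

```python
import ipaddress

# The subnet lists from the environment file above; splitting on commas
# mirrors how ceph.conf expresses multiple networks per option.
public_network = '172.120.3.0/24,172.117.3.0/24,172.118.3.0/24,172.119.3.0/24'
cluster_network = '172.120.4.0/24,172.117.4.0/24,172.118.4.0/24,172.119.4.0/24'

def parse_subnets(value):
    """Parse a comma-separated subnet list; raises ValueError on a bad CIDR."""
    return [ipaddress.ip_network(s) for s in value.split(',')]

# One subnet per leaf is expected in each list.
print(len(parse_subnets(public_network)))   # → 4
print(len(parse_subnets(cluster_network)))  # → 4
```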
Hi,

I think we can keep this bug around. We are tracking the work for routed networks in https://bugzilla.redhat.com/show_bug.cgi?id=1601576, but it's a broad topic. My gerrit review that was added to this bug should close this once it lands. I will re-assign this bug to me.

-- Harald
*** This bug has been marked as a duplicate of bug 1601576 ***