Bug 1503838

Summary: Compute in the templates the Ceph public and cluster subnets when using routed storage networks
Product: Red Hat OpenStack
Reporter: John Fulton <johfulto>
Component: openstack-tripleo-heat-templates
Assignee: Harald Jensås <hjensas>
Status: CLOSED DUPLICATE
QA Contact: Yogev Rabl <yrabl>
Severity: medium
Docs Contact:
Priority: medium
Version: 12.0 (Pike)
CC: dbecker, dsneddon, gfidente, hjensas, jefbrown, lmarsh, mburns, morazi, psanchez, racedoro, rhel-osp-director-maint, sasha, scohen
Target Milestone: ---
Keywords: Triaged, ZStream
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
: 1547088 (view as bug list)
Environment:
Last Closed: 2019-07-24 15:20:39 UTC
Type: Bug

Description John Fulton 2017-10-18 21:24:55 UTC
This is a tracker bug for DFG:Ceph to test OSPd support for the deployment of Ceph with routed storage networks, as created by BZ 1406102.

Comment 3 Alexander Chuzhoy 2017-10-23 14:06:20 UTC
Note https://bugzilla.redhat.com/show_bug.cgi?id=1504207

Comment 4 Giulio Fidente 2017-10-24 11:35:40 UTC
*** Bug 1504207 has been marked as a duplicate of this bug. ***

Comment 5 Ramon Acedo 2017-10-24 16:43:11 UTC
This is currently blocking composable networks with L3 spine/leaf and storage nodes. Is there any chance we can work around it?

A similar limitation was found with the compute roles; it is worked around by passing hiera data via ExtraConfig like this:

 parameter_defaults:
   ComputeRack1ExtraConfig:
     nova::vncproxy::host: "%{hiera('rack1_internal_api')}"
   ComputeRack2ExtraConfig:
     nova::vncproxy::host: "%{hiera('rack2_internal_api')}"

Steve Hardy has created a patch to allow this: https://review.openstack.org/#/c/514707/

Could we do the same with the Ceph storage nodes?

Comment 6 Ramon Acedo 2017-10-25 11:11:56 UTC
From the description of BZ#1504207:

———————— 
We need to support a list of subnets. The "working" ceph.conf would list all defined storage and storagemgmt subnets:
[root@overcloud-cephstorage3-0 ~]# cat /etc/ceph/ceph.conf |grep -e network
cluster network = 172.120.4.0/24,172.117.4.0/24,172.118.4.0/24,172.119.4.0/24
public network = 172.120.3.0/24,172.117.3.0/24,172.118.3.0/24,172.119.3.0/24
———————— 

I can see this in the Puppet module for Ceph:

/usr/share/openstack-puppet/modules/ceph/manifests/profile/params.pp
[…]
# [*cluster_network*] The address of the cluster network.
#   Optional. {cluster-network-ip/netmask}
#
# [*public_network*] The address of the public network.
#   Optional. {public-network-ip/netmask}
[…]

Each leaf will have its own storage management and storage networks. The fix that Sasha found was to add all of them to all the Ceph nodes in each leaf:

- Is adding all of them right? It works, but I am asking in case we only need the subnets of the leaf the Ceph node is on.

- If it is, my understanding is that we should be able to pass them via hiera with the same ExtraConfig technique used for the compute nodes, as described in comment #5 (a sketch follows below).
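
For illustration only, a sketch of what such an environment file might look like, assuming the role-specific CephStorageExtraConfig parameter and the hiera keys corresponding to the puppet-ceph class parameters quoted above (with custom leaf roles the parameter name would be <RoleName>ExtraConfig instead). This is untested, not a verified configuration:

parameter_defaults:
  CephStorageExtraConfig:
    # Assumed hiera keys for the ceph::profile::params class parameters;
    # all leaf subnets are listed, matching the workaround from BZ#1504207.
    ceph::profile::params::public_network: '172.120.3.0/24,172.117.3.0/24,172.118.3.0/24,172.119.3.0/24'
    ceph::profile::params::cluster_network: '172.120.4.0/24,172.117.4.0/24,172.118.4.0/24,172.119.4.0/24'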

Comment 7 Giulio Fidente 2017-10-25 11:22:22 UTC
With the OSP12 codebase, the only solution/workaround I can think of is to provide the subnets via an environment file like this:

parameter_defaults:
  CephAnsibleExtraConfig:
    public_network: '172.120.3.0/24,172.117.3.0/24,172.118.3.0/24,172.119.3.0/24'
    cluster_network: '172.120.4.0/24,172.117.4.0/24,172.118.4.0/24,172.119.4.0/24'

NOTE: OSP12 does not use puppet-ceph for the deployment of the Ceph cluster; it uses ceph-ansible.
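
For completeness, such an environment file would simply be added to the deploy command with -e; a sketch, where the file names and paths are placeholders rather than anything taken from this bug:

# ceph-networks.yaml is an assumed name for a file containing the
# CephAnsibleExtraConfig block above.
openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
  -e /home/stack/templates/ceph-networks.yaml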

Comment 23 Harald Jensås 2018-11-19 17:38:53 UTC
Hi,

I think we can keep this bug around; we are tracking the work for routed networks in https://bugzilla.redhat.com/show_bug.cgi?id=1601576, but it is a broad topic.

The Gerrit review attached to this bug should close it once it lands.

I will re-assign this bug to myself.



--
Harald

Comment 29 John Fulton 2019-07-24 15:20:39 UTC

*** This bug has been marked as a duplicate of bug 1601576 ***