Verified in OSP 14 2019-02-22.2 puddle.

[stack@undercloud-0 ~]$ cat /etc/yum.repos.d/latest-installed
14 -p 2019-02-22.2
[stack@undercloud-0 ~]$ rpm -qa | grep openstack-tripleo-heat-templates
openstack-tripleo-heat-templates-9.2.1-0.20190119154860.fe11ade.el7ost.noarch
[stack@undercloud-0 ~]$ rpm -qa | grep puppet-octavia
puppet-octavia-13.3.1-0.20181013113434.e19b590.el7ost.noarch

- octavia_health_manager container:
  openstack-octavia-health-manager-3.0.2-0.20181219195054.ec4c88e.el7ost.noarch
  openstack-octavia-common-3.0.2-0.20181219195054.ec4c88e.el7ost.noarch
- octavia_api container:
  openstack-octavia-api-3.0.2-0.20181219195054.ec4c88e.el7ost.noarch
  openstack-octavia-common-3.0.2-0.20181219195054.ec4c88e.el7ost.noarch
- octavia_housekeeping container:
  openstack-octavia-housekeeping-3.0.2-0.20181219195054.ec4c88e.el7ost.noarch
  openstack-octavia-common-3.0.2-0.20181219195054.ec4c88e.el7ost.noarch
- octavia_worker container:
  openstack-octavia-worker-3.0.2-0.20181219195054.ec4c88e.el7ost.noarch
  openstack-octavia-common-3.0.2-0.20181219195054.ec4c88e.el7ost.noarch

Verification steps:

1. Deploy OSP 14 with Octavia in a hybrid environment.
   · Set in the customized Jenkins job OVERCLOUD_DEPLOY_OVERRIDE_OPTIONS:
       --config-heat OctaviaTimeoutClientData=1200000 \
       --config-heat OctaviaTimeoutMemberData=1200000

2. Check that the new timeout values are reflected in the configuration (on the controller):

   /var/lib/config-data/octavia/etc/puppet/hieradata/service_configs.json: "octavia::worker::timeout_client_data": 1200000,
   /var/lib/config-data/octavia/etc/puppet/hieradata/service_configs.json: "octavia::worker::timeout_member_connect": 5000,
   /var/lib/config-data/octavia/etc/puppet/hieradata/service_configs.json: "octavia::worker::timeout_member_data": 1200000,
   /var/lib/config-data/octavia/etc/puppet/hieradata/service_configs.json: "octavia::worker::timeout_tcp_inspect": 0,
3. Deploy OCP 3.11 with Kuryr and check that the cluster is ready and all the pods are running:

   [openshift@master-0 ~]$ oc get nodes
   NAME                                 STATUS    ROLES     AGE    VERSION
   app-node-0.openshift.example.com     Ready     compute   12h    v1.11.0+d4cacc0
   app-node-1.openshift.example.com     Ready     compute   12h    v1.11.0+d4cacc0
   infra-node-0.openshift.example.com   Ready     infra     12h    v1.11.0+d4cacc0
   master-0.openshift.example.com       Ready     master    12h    v1.11.0+d4cacc0

   [openshift@master-0 ~]$ oc get pods --all-namespaces
   NAMESPACE                    NAME                                                READY   STATUS    RESTARTS   AGE
   default                      docker-registry-1-4p562                             1/1     Running   0          12h
   default                      kuryr-pod-1259287626                                1/1     Running   0          12h
   default                      registry-console-1-2j7fm                            1/1     Running   0          12h
   default                      router-1-4zkl7                                      1/1     Running   0          12h
   kube-system                  master-api-master-0.openshift.example.com           1/1     Running   0          12h
   kube-system                  master-controllers-master-0.openshift.example.com   1/1     Running   0          12h
   kube-system                  master-etcd-master-0.openshift.example.com          1/1     Running   1          12h
   kuryr-namespace-1317222661   kuryr-pod-1964655624                                1/1     Running   0          11h
   kuryr-namespace-820599586    kuryr-pod-362347688                                 1/1     Running   0          11h
   openshift-console            console-6975575759-fr8fm                            1/1     Running   0          12h
   openshift-infra              kuryr-cni-ds-74snc                                  2/2     Running   0          12h
   openshift-infra              kuryr-cni-ds-glpxx                                  2/2     Running   0          12h
   openshift-infra              kuryr-cni-ds-krg6p                                  2/2     Running   0          12h
   openshift-infra              kuryr-cni-ds-q9srf                                  2/2     Running   0          12h
   openshift-infra              kuryr-controller-6c6d965c54-fvprg                   1/1     Running   0          11h
   openshift-monitoring         alertmanager-main-0                                 3/3     Running   0          12h
   openshift-monitoring         alertmanager-main-1                                 3/3     Running   0          12h
   openshift-monitoring         alertmanager-main-2                                 3/3     Running   0          12h
   openshift-monitoring         cluster-monitoring-operator-75c6b544dd-5gsqp        1/1     Running   0          12h
   openshift-monitoring         grafana-c7d5bc87c-849lb                             2/2     Running   0          12h
   openshift-monitoring         kube-state-metrics-6c64799586-kdkdm                 3/3     Running   0          12h
   openshift-monitoring         node-exporter-2rz7h                                 2/2     Running   0          12h
   openshift-monitoring         node-exporter-9jrh5                                 2/2     Running   0          12h
   openshift-monitoring         node-exporter-ff9sz                                 2/2     Running   0          12h
   openshift-monitoring         node-exporter-vzcvh                                 2/2     Running   0          12h
   openshift-monitoring         prometheus-k8s-0                                    4/4     Running   1          12h
   openshift-monitoring         prometheus-k8s-1                                    4/4     Running   1          12h
   openshift-monitoring         prometheus-operator-5b47ff445b-59lrd                1/1     Running   0          12h
   openshift-node               sync-bpnjw                                          1/1     Running   0          12h
   openshift-node               sync-fgffz                                          1/1     Running   0          12h
   openshift-node               sync-mv8qw                                          1/1     Running   0          12h
   openshift-node               sync-vc72l                                          1/1     Running   0          12h
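The step-2 check above can also be done mechanically: given the contents of service_configs.json, assert that the values overridden via --config-heat in step 1 made it into the worker hiera keys. A minimal sketch, using the JSON fragment captured above; the `check_timeouts` helper name is illustrative, not part of any tool:

```python
import json

# Relevant fragment of service_configs.json, as captured in step 2 above.
SERVICE_CONFIGS = json.loads("""
{
  "octavia::worker::timeout_client_data": 1200000,
  "octavia::worker::timeout_member_connect": 5000,
  "octavia::worker::timeout_member_data": 1200000,
  "octavia::worker::timeout_tcp_inspect": 0
}
""")

def check_timeouts(configs, expected):
    """Return the hiera keys whose value differs from what was requested."""
    return [key for key, value in expected.items() if configs.get(key) != value]

# The two values overridden via --config-heat in step 1.
expected = {
    "octavia::worker::timeout_client_data": 1200000,
    "octavia::worker::timeout_member_data": 1200000,
}
mismatches = check_timeouts(SERVICE_CONFIGS, expected)
print(mismatches)  # → []
```

On a real controller the JSON would be loaded from /var/lib/config-data/octavia/etc/puppet/hieradata/service_configs.json instead of an inline string.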
Correction to my previous comment: the new timeout values should also be reflected in /var/lib/config-data/octavia/etc/octavia/octavia.conf, and they are not. Correct behaviour would show:

/var/lib/config-data/octavia/etc/octavia/octavia.conf:timeout_client_data=1200000
/var/lib/config-data/octavia/etc/octavia/octavia.conf:timeout_member_connect=5000
/var/lib/config-data/octavia/etc/octavia/octavia.conf:timeout_member_data=1200000
/var/lib/config-data/octavia/etc/octavia/octavia.conf:timeout_tcp_inspect=0

Moving the BZ back to ASSIGNED.
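The expected octavia.conf state can be checked with configparser. A sketch against an inline fragment of what a fixed file should contain; note the [haproxy_amphora] section name is an assumption here (the grep output above does not show section headers), so verify it against your actual octavia.conf:

```python
import configparser

# Fragment of what a *fixed* octavia.conf should contain. The section name
# [haproxy_amphora] is an assumption; the grep output above shows only keys.
EXPECTED_CONF = """
[haproxy_amphora]
timeout_client_data = 1200000
timeout_member_connect = 5000
timeout_member_data = 1200000
timeout_tcp_inspect = 0
"""

parser = configparser.ConfigParser()
parser.read_string(EXPECTED_CONF)

timeouts = {key: parser.getint("haproxy_amphora", key)
            for key in ("timeout_client_data", "timeout_member_connect",
                        "timeout_member_data", "timeout_tcp_inspect")}
print(timeouts["timeout_client_data"])  # → 1200000
```

Against the real controller, `parser.read("/var/lib/config-data/octavia/etc/octavia/octavia.conf")` would replace the inline string; on the affected build the keys are simply absent, which is the bug.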
Closing as WONTFIX. OSP 14 is a one-year-support product that is fast approaching EOL, and the team does not have the capacity to fix this in time for the next (and last) OSP 14 z-stream. No customer cases are attached, nor has anyone expressed urgency in getting this BZ resolved. It is worth noting, though, that the fix is available in OSP 13 as well as in OSP 15 and later.