Bug 1669078 - Add support for configuring Octavia LB timeouts in OSP 13
Summary: Add support for configuring Octavia LB timeouts in OSP 13
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-octavia
Version: 13.0 (Queens)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: z5
Target Release: 13.0 (Queens)
Assignee: Kamil Sambor
QA Contact: Jon Uriarte
URL:
Whiteboard:
Depends On: 1636496
Blocks: 1623989
 
Reported: 2019-01-24 09:38 UTC by Carlos Goncalves
Modified: 2022-07-09 10:32 UTC
CC: 3 users

Fixed In Version: openstack-octavia-2.0.2-5.el7ost
Doc Type: Enhancement
Doc Text:
Clone Of: 1636496
Environment:
Last Closed: 2019-03-14 13:33:12 UTC
Target Upstream Version:
Embargoed:




Links
System ID                                Last Updated
Red Hat Issue Tracker OSP-17036          2022-07-09 10:32:12 UTC
Red Hat Product Errata RHSA-2019:0567    2019-03-14 13:33:19 UTC

Comment 11 Jon Uriarte 2019-02-28 15:09:57 UTC
Waiting for [1] to be fixed, as it blocks the OpenShift installation.


[1] https://bugzilla.redhat.com/show_bug.cgi?id=1684077

Comment 12 Jon Uriarte 2019-03-01 14:51:13 UTC
Verified in the OSP 13 2019-02-25.2 puddle.

[stack@undercloud-0 ~]$ cat /etc/yum.repos.d/latest-installed 
13  -p 2019-02-25.2

[stack@undercloud-0 ~]$ rpm -qa | grep openstack-tripleo-heat-templates
openstack-tripleo-heat-templates-8.2.0-4.el7ost.noarch

[stack@undercloud-0 ~]$ rpm -qa | grep puppet-octavia
puppet-octavia-12.4.0-8.el7ost.noarch

- octavia_health_manager container:
openstack-octavia-common-2.0.3-2.el7ost.noarch
openstack-octavia-health-manager-2.0.3-2.el7ost.noarch

- octavia_api container:
openstack-octavia-common-2.0.3-2.el7ost.noarch
openstack-octavia-api-2.0.3-2.el7ost.noarch

- octavia_housekeeping container:
openstack-octavia-common-2.0.3-2.el7ost.noarch
openstack-octavia-housekeeping-2.0.3-2.el7ost.noarch

- octavia_worker container:
openstack-octavia-common-2.0.3-2.el7ost.noarch
openstack-octavia-worker-2.0.3-2.el7ost.noarch



Verification steps:

1. Deploy OSP 13 with Octavia in a hybrid environment
   - Set OVERCLOUD_DEPLOY_OVERRIDE_OPTIONS in the customized Jenkins job:

     --config-heat OctaviaTimeoutClientData=1200000 \
     --config-heat OctaviaTimeoutMemberData=1200000
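
For reference, outside the Jenkins job the same values can be passed to the overcloud deploy through a Heat environment file; a minimal sketch (the file name octavia-timeouts.yaml is hypothetical; the timeouts are in milliseconds, so 1200000 ms = 20 minutes):

    $ cat > octavia-timeouts.yaml <<'EOF'
    # Hypothetical environment file; parameter names as used above.
    # Timeout values are in milliseconds: 1200000 ms = 20 minutes.
    parameter_defaults:
      OctaviaTimeoutClientData: 1200000
      OctaviaTimeoutMemberData: 1200000
    EOF
    $ openstack overcloud deploy --templates ... -e octavia-timeouts.yaml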

2. Check that the new timeout values are reflected in the configuration (on the controller):

    /var/lib/config-data/puppet-generated/octavia/etc/octavia/octavia.conf:timeout_client_data=1200000
    /var/lib/config-data/puppet-generated/octavia/etc/octavia/octavia.conf:timeout_member_connect=5000
    /var/lib/config-data/puppet-generated/octavia/etc/octavia/octavia.conf:timeout_member_data=1200000
    /var/lib/config-data/puppet-generated/octavia/etc/octavia/octavia.conf:timeout_tcp_inspect=0
    /var/lib/config-data/clustercheck/etc/puppet/hieradata/service_configs.json:    "octavia::controller::timeout_client_data": 1200000,
    /var/lib/config-data/clustercheck/etc/puppet/hieradata/service_configs.json:    "octavia::controller::timeout_member_connect": 5000,
    /var/lib/config-data/clustercheck/etc/puppet/hieradata/service_configs.json:    "octavia::controller::timeout_member_data": 1200000,
    /var/lib/config-data/clustercheck/etc/puppet/hieradata/service_configs.json:    "octavia::controller::timeout_tcp_inspect": 0,
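
The check can be reproduced with a single grep over the generated files (same paths as in the output above):

    $ sudo grep -E 'timeout_(client_data|member_connect|member_data|tcp_inspect)' \
          /var/lib/config-data/puppet-generated/octavia/etc/octavia/octavia.conf \
          /var/lib/config-data/clustercheck/etc/puppet/hieradata/service_configs.json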

3. Deploy OpenShift 3.11 with Kuryr and check that the cluster is ready and all the pods are running.
Note: a workaround for [1] has been applied: "yum install openshift-ansible --enablerepo=rhelosp-rhel-7.6-server-opt"

[openshift@master-0 ~]$ oc version
oc v3.11.90
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://console.openshift.example.com:8443
openshift v3.11.90
kubernetes v1.11.0+d4cacc0


[openshift@master-0 ~]$ oc get nodes
NAME                                 STATUS    ROLES     AGE       VERSION
app-node-0.openshift.example.com     Ready     compute   2h        v1.11.0+d4cacc0
app-node-1.openshift.example.com     Ready     compute   2h        v1.11.0+d4cacc0
infra-node-0.openshift.example.com   Ready     infra     2h        v1.11.0+d4cacc0
master-0.openshift.example.com       Ready     master    2h        v1.11.0+d4cacc0


[openshift@master-0 ~]$ oc get pods --all-namespaces
NAMESPACE                   NAME                                                READY     STATUS    RESTARTS   AGE
default                     docker-registry-1-p6b2d                             1/1       Running   0          2h
default                     kuryr-pod-1958724533                                1/1       Running   0          1h
default                     registry-console-1-c7f79                            1/1       Running   0          2h
default                     router-1-vs7ld                                      1/1       Running   0          2h
kube-system                 master-api-master-0.openshift.example.com           1/1       Running   0          2h
kube-system                 master-controllers-master-0.openshift.example.com   1/1       Running   0          2h
kube-system                 master-etcd-master-0.openshift.example.com          1/1       Running   0          2h
kuryr-namespace-699852674   kuryr-pod-436421213                                 1/1       Running   0          1h
kuryr-namespace-920831344   kuryr-pod-826298564                                 1/1       Running   0          1h
openshift-console           console-5d5b6bd95d-m487h                            1/1       Running   0          2h
openshift-infra             kuryr-cni-ds-6w4hx                                  2/2       Running   0          1h
openshift-infra             kuryr-cni-ds-pdfx5                                  2/2       Running   0          1h
openshift-infra             kuryr-cni-ds-pkzxj                                  2/2       Running   0          1h
openshift-infra             kuryr-cni-ds-xpg25                                  2/2       Running   0          1h
openshift-infra             kuryr-controller-6c6d965c54-k68fn                   1/1       Running   0          1h
openshift-monitoring        alertmanager-main-0                                 3/3       Running   0          2h
openshift-monitoring        alertmanager-main-1                                 3/3       Running   0          2h
openshift-monitoring        alertmanager-main-2                                 3/3       Running   0          2h
openshift-monitoring        cluster-monitoring-operator-75c6b544dd-4swg4        1/1       Running   0          2h
openshift-monitoring        grafana-c7d5bc87c-qw92g                             2/2       Running   0          2h
openshift-monitoring        kube-state-metrics-6c64799586-gmjv5                 3/3       Running   0          2h
openshift-monitoring        node-exporter-57hdh                                 2/2       Running   0          2h
openshift-monitoring        node-exporter-mptl8                                 2/2       Running   0          2h
openshift-monitoring        node-exporter-qdqq5                                 2/2       Running   0          2h
openshift-monitoring        node-exporter-rqm49                                 2/2       Running   0          2h
openshift-monitoring        prometheus-k8s-0                                    4/4       Running   1          2h
openshift-monitoring        prometheus-k8s-1                                    4/4       Running   1          2h
openshift-monitoring        prometheus-operator-5b47ff445b-w5v4c                1/1       Running   0          2h
openshift-node              sync-fv2tv                                          1/1       Running   0          2h
openshift-node              sync-gklcl                                          1/1       Running   0          2h
openshift-node              sync-gr62d                                          1/1       Running   0          2h
openshift-node              sync-k6nn2                                          1/1       Running   0          2h



[1] https://bugzilla.redhat.com/show_bug.cgi?id=1684077
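
As a complementary check (not part of the verification above), the load balancers Kuryr created for the OpenShift services can be listed with the Octavia CLI; a sketch, assuming admin credentials are sourced:

    (overcloud) $ openstack loadbalancer list -c name -c vip_address -c provisioning_status
    # All Kuryr-created load balancers should report provisioning_status ACTIVE.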

Comment 14 errata-xmlrpc 2019-03-14 13:33:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2019:0567

