Bug 1887148 - [updates] 16.1, 0.08 packet loss on stop l3 agent connectivity check test
Summary: [updates] 16.1, 0.08 packet loss on stop l3 agent connectivity check test
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-tripleo-heat-templates
Version: 16.1 (Train)
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: ---
Target Release: ---
Assignee: Sofer Athlan-Guyot
QA Contact: David Rosenfeld
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-10-11 09:36 UTC by Ronnie Rasouli
Modified: 2020-10-19 13:22 UTC
CC: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-10-19 13:22:16 UTC
Target Upstream Version:
Embargoed:



Description Ronnie Rasouli 2020-10-11 09:36:53 UTC
Description of problem:

The stop l3 agent connectivity check is failing with a tiny amount of packet loss: 0.00792142%.

Version-Release number of selected component (if applicable):
RHOS-16.1-RHEL-8-20200723.n.0
core_puddle: RHOS-16.1-RHEL-8-20201007.n.0

How reproducible:
most likely

Steps to Reproduce:
1. deploy osp16.1 GA
2. update the undercloud to latest version
3. update the overcloud
4. run l3 agent connectivity check
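
The check in step 4 boils down to pinging a workload's floating IP while the L3 agent is stopped and parsing the packet-loss figure from the ping summary. A minimal sketch of that parsing step (the sample summary line below is illustrative, not taken from this job's logs):

```shell
# Parse the packet-loss percentage out of a ping summary line.
# The sample line is illustrative; a real check would run something
# like `ping -c <count> <floating-ip>` and capture its output.
line="12624 packets transmitted, 12623 received, 1% packet loss, time 25248ms"
loss=$(echo "$line" | grep -oP '[0-9.]+(?=% packet loss)')
echo "packet loss: ${loss}%"
```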

Actual results:
packet loss

Expected results:
0 percent of packet loss

Additional info:

Comment 2 Sofer Athlan-Guyot 2020-10-12 13:39:18 UTC
Hi,

so the packet loss is actually a single lost ping out of all the pings sent. Since job #29 ran with the same puddle as job #28, and job #28 was successful, I think this is just a randomly lost ping.

If this kind of error happens too often, we could add something to tripleo-upgrade to avoid this kind of false positive: for example, a check on the number of lost pings that only fails when the count is 5 or more.

Lowering the priority and severity and waiting for the next run; if we see that single-ping loss again, we can move forward with a tripleo-upgrade patch.
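
The tolerance suggested above could look something like the following sketch (the variable names, counts, and threshold are assumptions for illustration, not the actual tripleo-upgrade code):

```shell
# Sketch of the proposed tolerance: pass the connectivity check when
# fewer than THRESHOLD pings were lost, instead of failing on any
# non-zero loss. The counts here are illustrative.
sent=12624
received=12623
THRESHOLD=5

lost=$((sent - received))
if [ "$lost" -ge "$THRESHOLD" ]; then
    echo "FAIL: $lost pings lost"
else
    echo "PASS: $lost of $sent pings lost (below threshold of $THRESHOLD)"
fi
```

This would tolerate the occasional single dropped ping seen here while still catching a real connectivity outage, where loss would be far above the threshold.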

Comment 3 Jiri Stransky 2020-10-19 13:22:16 UTC
Does not seem to reproduce.

