Bug 1872471 - [sig-network-edge] Cluster frontend ingress remain available
Summary: [sig-network-edge] Cluster frontend ingress remain available
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Routing
Version: 4.6
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.7.0
Assignee: Miciah Dashiel Butler Masters
QA Contact: Hongan Li
URL:
Whiteboard:
Depends On:
Blocks: 1879961
 
Reported: 2020-08-25 20:45 UTC by David Eads
Modified: 2020-10-02 15:48 UTC
CC List: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Cloned As: 1879961
Environment:
[sig-network-edge] Cluster frontend ingress remain available
Last Closed: 2020-10-02 15:48:03 UTC
Target Upstream Version:



Description David Eads 2020-08-25 20:45:43 UTC
test:
[sig-network-edge] Cluster frontend ingress remain available 

is failing frequently in CI, see search results:
https://search.ci.openshift.org/?maxAge=168h&context=1&type=bug%2Bjunit&name=&maxMatches=5&maxBytes=20971520&groupBy=job&search=%5C%5Bsig-network-edge%5C%5D+Cluster+frontend+ingress+remain+available

This is failing on 20% of upgrade runs without outages of over a minute on GCP: https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/release-openshift-origin-installer-e2e-gcp-upgrade-4.6/1298319599309164544
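
For context, a rough sketch of the kind of measurement this test performs: poll a frontend route once per second for the duration of the upgrade and record the longest continuous run of failed probes. This is not the origin suite's actual implementation, and the route URL below is a placeholder.

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Placeholder route; the real test watches routes served by the cluster's ingress controllers.
	const target = "https://console-openshift-console.apps.example.com/healthz"
	client := &http.Client{Timeout: 3 * time.Second}

	var longest, current time.Duration
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()

	// Sample for ten minutes (600 one-second probes).
	for i := 0; i < 600; i++ {
		<-ticker.C
		resp, err := client.Get(target)
		if err != nil || resp.StatusCode >= 500 {
			// Probe failed: extend the current outage window.
			current += time.Second
		} else {
			// Probe succeeded: close out the current outage window.
			if current > longest {
				longest = current
			}
			current = 0
		}
		if resp != nil {
			resp.Body.Close()
		}
	}
	if current > longest {
		longest = current
	}
	fmt.Printf("longest observed ingress outage: %s\n", longest)
}

A sustained outage during the upgrade window, beyond whatever tolerance the test allows, is what a failure of this test generally indicates.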

Comment 1 Andrew McDermott 2020-09-10 11:54:04 UTC
I’m adding UpcomingSprint, because I was occupied by fixing bugs with
higher priority/severity, developing new features with higher
priority, or developing new features to improve stability at a macro
level. I will revisit this bug next sprint.

Comment 2 mfisher 2020-09-15 16:17:49 UTC
Target set to 4.7 while investigation is either ongoing or pending. Will be considered for earlier release versions when diagnosed and resolved.
May be related to Jira NE-348, which is targeted for the OCP 4.7 release.

Comment 3 Jack Ottofaro 2020-09-16 13:54:15 UTC
We're asking the following questions to evaluate whether or not this bug warrants blocking an upgrade edge from either the previous X.Y or X.Y.Z. The ultimate goal is to avoid delivering an update which introduces new risk or reduces cluster functionality in any way. Sample answers are provided to give more context and the UpgradeBlocker flag has been added to this bug. It will be removed if the assessment indicates that this should not block upgrade edges. The expectation is that the assignee answers these questions.

Who is impacted?  If we have to block upgrade edges based on this issue, which edges would need blocking?
  example: Customers upgrading from 4.y.Z to 4.y+1.z running on GCP with thousands of namespaces, approximately 5% of the subscribed fleet
  example: All customers upgrading from 4.y.z to 4.y+1.z fail approximately 10% of the time
What is the impact?  Is it serious enough to warrant blocking edges?
  example: Up to 2 minute disruption in edge routing
  example: Up to 90 seconds of API downtime
  example: etcd loses quorum and you have to restore from backup
How involved is remediation (even moderately serious impacts might be acceptable if they are easy to mitigate)?
  example: Issue resolves itself after five minutes
  example: Admin uses oc to fix things
  example: Admin must SSH to hosts, restore from backups, or perform other non-standard admin activities
Is this a regression (if all previous versions were also vulnerable, updating to the new, vulnerable version does not increase exposure)?
  example: No, it’s always been like this; we just never noticed
  example: Yes, from 4.y.z to 4.y+1.z, or from 4.y.z to 4.y.z+1

Comment 4 Devan Goodwin 2020-09-17 12:18:08 UTC
Still showing as a top failure for build watchers, pass rate 25% as of today.

Comment 5 Andrew McDermott 2020-10-02 15:48:03 UTC
Closing, as we will track the work in https://issues.redhat.com/browse/NE-348, which is scheduled for 4.7.

