Bug 1858958 - Removing one master node degrades cluster
Summary: Removing one master node degrades cluster
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Documentation
Version: 4.5
Hardware: s390x
OS: Linux
Priority: low
Severity: low
Target Milestone: ---
Target Release: 4.6.0
Assignee: Silke Niemann
QA Contact: Xiaoli Tian
Docs Contact: Vikram Goyal
URL:
Whiteboard:
Depends On:
Blocks: ocp-42-45-z-tracker
 
Reported: 2020-07-20 21:23 UTC by Philip Chan
Modified: 2023-09-15 00:34 UTC

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-06-30 14:09:45 UTC
Target Upstream Version:
Embargoed:


Attachments
cluster-monitoring-operator.log (131.24 KB, text/plain)
2020-07-24 16:36 UTC, Philip Chan
cluster-monitoring-operator-kube-rbac-proxy.log (279 bytes, text/plain)
2020-07-24 16:36 UTC, Philip Chan
machine-config-operator.log (2.58 KB, text/plain)
2020-07-24 16:40 UTC, Philip Chan
logs for comment 14 (7.41 KB, application/gzip)
2020-07-27 21:48 UTC, Philip Chan

Description Philip Chan 2020-07-20 21:23:49 UTC
Description of problem:
I have three master nodes in my cluster. Stopping a master node degrades the cluster. Two cluster operators (machine-config and monitoring) go to 'False' under AVAILABLE.

Version-Release number of selected component (if applicable):
4.5.0-0.nightly-s390x-2020-07-17-091817

How reproducible: Consistently 


Steps to Reproduce:
1. Have a minimum cluster configured and running (3 masters and 2 workers).
2. Stop one of the master nodes.
3. The cluster degrades and reports the following:
# oc get clusterversion
NAME      VERSION                                   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.5.0-0.nightly-s390x-2020-07-17-091817   True        False         2d19h   Error while reconciling 4.5.0-0.nightly-s390x-2020-07-17-091817: an unknown error has occurred: MultipleErrors

Actual results: Two cluster operators, machine-config and monitoring, are unavailable.


Expected results: The cluster should remain unaffected and fail over to the remaining two master nodes after the loss of one.


Additional info:
I am still able to log on to the dashboard, and the status also shows the following -

"Cluster operator network has not been available for 10 mins. Operator may be down or disabled, cluster will not be kept up to date and upgrades will not be possible."

Comment 1 Carvel Baus 2020-07-21 13:23:54 UTC
How long did you allow the cluster to attempt to recover (move resources to other nodes), or does it remain in the degraded state permanently?

Comment 2 Philip Chan 2020-07-21 14:39:36 UTC
Hi Carvel,

We tested this on two different clusters, both running the 4.5.0-0.nightly-s390x-2020-07-17-091817 build.  On the first cluster, we observed that the state remained degraded after 15 minutes. On the second cluster, I monitored it and it stayed degraded for over an hour.

I also wanted to mention that I just tested this on OCP 4.4.9 (GA) to compare releases.  A similar behavior occurs there as well, except that a total of three operators remained degraded for more than 30 minutes.  The operators on 4.4.9 that remain unavailable are machine-config, marketplace, and monitoring.

Comment 3 krmoser 2020-07-21 16:51:26 UTC
Carvel,

Just to add to the information Phil has provided, we've also tested this on the 4.4.0-0.nightly-s390x-2020-07-18-042020 and 4.5.0-0.nightly-s390x-2020-07-17-173217 builds with the same or similar results.

For both of these 4.4 and 4.5 nightly builds:
1. We powered off each of the 3 master nodes, 1 at a time, and each remained off for over an hour (sometimes 2-3 hours).

2. After any of these 3 master nodes was powered off, within 2-8 minutes, 1-3 cluster operators' AVAILABLE state would become False, as reported by the "oc get co" command.

3. After powering on the master node that had previously been powered off, the "oc get co" command reported that all cluster operators returned to an AVAILABLE status of True, with PROGRESSING and DEGRADED both False, usually within 1-2 minutes.



As an example using the 4.4.0-0.nightly-s390x-2020-07-18-042020 build this issue manifested as follows:
 1. 20:42: power off master-1 node; within 20-30 seconds the "oc get nodes" command reported the STATUS for this node as "NotReady".

 2. 20:50: the "oc get co" command first starts to report cluster operators with AVAILABLE as False, PROGRESSING as True, and DEGRADED as True
    20:50: the monitoring cluster operator's AVAILABLE state becomes False

 3. 20:56: the machine-config cluster operator's AVAILABLE state becomes False 

 4. 20:58: the console cluster operator's AVAILABLE state becomes False

 5. 21:00: the console cluster operator's AVAILABLE state becomes True
    21:00: "oc get clusterversion" command reports "Error while reconciling 4.4.0-0.nightly-s390x-2020-07-18-042020: an unknown error has occurred"
    21:00: the operator-lifecycle-manager-packageserver cluster operator's AVAILABLE state becomes False
 
 6. 21:01: the operator-lifecycle-manager-packageserver cluster operator's AVAILABLE state becomes True

 7. 23:06: power on master-1 node

 8. 23:07: "oc get clusterversion" command reports "Cluster version is 4.4.0-0.nightly-s390x-2020-07-18-042020"
    23:07: "oc get nodes"          command reports master-1 STATUS becomes "Ready"
    23:07: "oc get co"             command reports all cluster operators' AVAILABLE status are True, including monitoring and machine-config



 9. 23:10: power off master-0 node 
    23:10: the operator-lifecycle-manager-packageserver cluster operator's AVAILABLE state becomes False

10. 23:12: the openshift-apiserver cluster operator's AVAILABLE state becomes False

11. 23:14: the master-0 node first reports STATUS NotReady (per the "oc get nodes" command) - 4-5 minutes after actual power off
    23:14: the openshift-apiserver cluster operator's AVAILABLE state becomes True

12. 23:17: "oc get clusterversion" command reports " Error while reconciling 4.4.0-0.nightly-s390x-2020-07-18-042020: the cluster operator etcd is degraded"

13. 23:18: the console cluster operator's AVAILABLE state becomes False

14. 23:22: the monitoring cluster operator's AVAILABLE state becomes False 

15. 23:26: "oc get clusterversion" command reports "Error while reconciling 4.4.0-0.nightly-s390x-2020-07-18-042020: an unknown error has occurred"

16. 23:30: the console cluster operator's AVAILABLE state becomes True
    23:30: the operator-lifecycle-manager-packageserver cluster operator's AVAILABLE state becomes True

17. 23:42: the machine-config cluster operator's AVAILABLE state becomes False 

18. 23:55: power on master-0 node 

19. 23:56: "oc get clusterversion" command reports "Cluster version is 4.4.0-0.nightly-s390x-2020-07-18-042020"
    23:56: "oc get nodes"          command reports master-0 STATUS becomes "Ready"
    23:56: "oc get co"             command reports all cluster operators' AVAILABLE status are True, including monitoring and machine-config



Thank you,
Kyle

Comment 4 Philip Chan 2020-07-23 03:15:02 UTC
Hi Carvel,

I wanted to check on the state of the bug/problem.  If you require more details or logs, please let us know.  We strongly believe this needs to be resolved prior to v4.5 GA.

Thank You,
Phil

Comment 5 Dennis Gilmore 2020-07-23 13:42:14 UTC
Moving over to Machine Config Operator for greater visibility and input.

Comment 6 krmoser 2020-07-23 13:49:49 UTC
Carvel,

Phil and I see the same machine-config and monitoring cluster operator degradation issue with the latest 4.5 nightly build, 4.5.0-0.nightly-s390x-2020-07-23-005423.

Thank you,
Kyle

Comment 7 Carvel Baus 2020-07-23 14:07:01 UTC
Can you provide the logs (output) from the following commands?

$ oc logs -n openshift-machine-config-operator deployment/machine-config-operator


$ oc logs -n openshift-monitoring deployments/cluster-monitoring-operator

Comment 8 Steve Milner 2020-07-23 14:30:17 UTC
Philip,

Thank you for the report. The minimum number of control plane nodes for a working cluster is 3, so this does sound like it's working as it should. If there are fewer than 3, the cluster should be degraded until there are 3 control plane nodes once more.

```
The smallest OpenShift Container Platform clusters require the following hosts:

   * One temporary bootstrap machine
   * Three control plane, or master, machines
   * At least two compute machines, which are also known as worker machines
```

Reference: https://docs.openshift.com/container-platform/4.5/installing/installing_bare_metal/installing-bare-metal.html#machine-requirements_installing-bare-metal

I'm going to drop the severity and priority on this, as it is working as designed, and bring it up with the docs team to see whether there are better locations to echo the control plane requirements.

Comment 9 Steve Milner 2020-07-24 15:50:08 UTC
Chris Negus spoke with me today and noted that this bug can move over to the Documentation group, as it may make sense to have a higher-level requirements section/page rather than noting the requirements only in the individual installation types.

Comment 10 Philip Chan 2020-07-24 16:35:41 UTC
Hi,

We installed a new cluster with 5 master nodes using the 4.5.0-0.nightly-s390x-2020-07-24-085757 build.  All operators and the overall cluster status show Available.

[root@ospbmgr4 ~]# oc get clusterversion
NAME      VERSION                                   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         4h9m    Cluster version is 4.5.0-0.nightly-s390x-2020-07-24-085757

[root@ospbmgr4 ~]# oc get nodes
NAME                                          STATUS   ROLES    AGE     VERSION
master-0.pok-96-nightly.pok.stglabs.ibm.com   Ready    master   4h32m   v1.18.3+8b0a82f
master-1.pok-96-nightly.pok.stglabs.ibm.com   Ready    master   4h32m   v1.18.3+8b0a82f
master-2.pok-96-nightly.pok.stglabs.ibm.com   Ready    master   4h32m   v1.18.3+8b0a82f
master-3.pok-96-nightly.pok.stglabs.ibm.com   Ready    master   4h32m   v1.18.3+8b0a82f
master-4.pok-96-nightly.pok.stglabs.ibm.com   Ready    master   4h32m   v1.18.3+8b0a82f
worker-0.pok-96-nightly.pok.stglabs.ibm.com   Ready    worker   4h20m   v1.18.3+8b0a82f
worker-1.pok-96-nightly.pok.stglabs.ibm.com   Ready    worker   4h18m   v1.18.3+8b0a82f

[root@ospbmgr4 ~]# oc get co
NAME                                       VERSION                                   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      4h9m
cloud-credential                           4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      4h32m
cluster-autoscaler                         4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      4h23m
config-operator                            4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      4h23m
console                                    4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      4h16m
csi-snapshot-controller                    4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      4h19m
dns                                        4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      4h30m
etcd                                       4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      4h29m
image-registry                             4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      4h24m
ingress                                    4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      4h19m
insights                                   4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      4h24m
kube-apiserver                             4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      4h28m
kube-controller-manager                    4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      4h29m
kube-scheduler                             4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      4h28m
kube-storage-version-migrator              4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      4h19m
machine-api                                4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      4h24m
machine-approver                           4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      4h27m
machine-config                             4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      26m
marketplace                                4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      25m
monitoring                                 4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      26m
network                                    4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      4h31m
node-tuning                                4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      4h31m
openshift-apiserver                        4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      133m
openshift-controller-manager               4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      4h24m
openshift-samples                          4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      4h23m
operator-lifecycle-manager                 4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      4h30m
operator-lifecycle-manager-catalog         4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      4h30m
operator-lifecycle-manager-packageserver   4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      3h59m
service-ca                                 4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      4h31m
storage                                    4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      4h24m

I powered off master-0 at 11:50 AM. The node status immediately shows it as NotReady:

[root@ospbmgr4 ~]# oc get nodes
NAME                                          STATUS     ROLES    AGE     VERSION
master-0.pok-96-nightly.pok.stglabs.ibm.com   NotReady   master   4h35m   v1.18.3+8b0a82f
master-1.pok-96-nightly.pok.stglabs.ibm.com   Ready      master   4h35m   v1.18.3+8b0a82f
master-2.pok-96-nightly.pok.stglabs.ibm.com   Ready      master   4h36m   v1.18.3+8b0a82f
master-3.pok-96-nightly.pok.stglabs.ibm.com   Ready      master   4h36m   v1.18.3+8b0a82f
master-4.pok-96-nightly.pok.stglabs.ibm.com   Ready      master   4h35m   v1.18.3+8b0a82f
worker-0.pok-96-nightly.pok.stglabs.ibm.com   Ready      worker   4h24m   v1.18.3+8b0a82f
worker-1.pok-96-nightly.pok.stglabs.ibm.com   Ready      worker   4h22m   v1.18.3+8b0a82f

Approximately 30 minutes have passed, and these are the current states of all cluster operators:

[root@ospbmgr4 ~]# oc get co
NAME                                       VERSION                                   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      4h41m
cloud-credential                           4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      5h4m
cluster-autoscaler                         4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      4h55m
config-operator                            4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      4h55m
console                                    4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      26m
csi-snapshot-controller                    4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      4h51m
dns                                        4.5.0-0.nightly-s390x-2020-07-24-085757   True        True          False      5h2m
etcd                                       4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         True       5h1m
image-registry                             4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      4h56m
ingress                                    4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      4h51m
insights                                   4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      4h56m
kube-apiserver                             4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         True       5h
kube-controller-manager                    4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         True       5h
kube-scheduler                             4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         True       5h
kube-storage-version-migrator              4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      4h51m
machine-api                                4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      4h56m
machine-approver                           4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      4h59m
machine-config                             4.5.0-0.nightly-s390x-2020-07-24-085757   False       False         True       18m
marketplace                                4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      57m
monitoring                                 4.5.0-0.nightly-s390x-2020-07-24-085757   False       True          True       27m
network                                    4.5.0-0.nightly-s390x-2020-07-24-085757   True        True          True       5h3m
node-tuning                                4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      5h3m
openshift-apiserver                        4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         True       28m
openshift-controller-manager               4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      4h56m
openshift-samples                          4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      4h55m
operator-lifecycle-manager                 4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      5h2m
operator-lifecycle-manager-catalog         4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      5h2m
operator-lifecycle-manager-packageserver   4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      28m
service-ca                                 4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      5h3m
storage                                    4.5.0-0.nightly-s390x-2020-07-24-085757   True        False         False      4h56m

A few operators are degraded, but two remain Unavailable -- machine-config and monitoring.  I'm attaching the logs taken from both at this time.

I also tried this same scenario on a different cluster with 5 master nodes, and the same behavior was observed.  The build for that cluster is 4.5.0-0.nightly-s390x-2020-07-23-005423.

If you need any additional logs, please let me know.

Thank You,
-Phil

Comment 11 Philip Chan 2020-07-24 16:36:17 UTC
Created attachment 1702362 [details]
cluster-monitoring-operator.log

Comment 12 Philip Chan 2020-07-24 16:36:53 UTC
Created attachment 1702363 [details]
cluster-monitoring-operator-kube-rbac-proxy.log

Comment 13 Philip Chan 2020-07-24 16:40:17 UTC
Created attachment 1702364 [details]
machine-config-operator.log

Comment 14 Yu Qi Zhang 2020-07-24 18:47:50 UTC
Hey Philip, I can help look into the MCO-related items. If possible, a must-gather is best for us to determine the exact state of your new cluster, since that allows us to see more logs.

If the full must-gather is unavailable, could you please collect the following (a consolidated command sketch follows this list):
 1. pods under the openshift-machine-config-operator namespace: 
    a. 8 machine-config-daemons (oc logs -n openshift-machine-config-operator -c machine-config-daemon $pod_name)
    b. 1 machine-config-controller (oc logs -n openshift-machine-config-operator $pod_name)
 2. the status of the machine-config-pools (oc describe mcp/master; oc describe mcp/worker)
 3. the status of the operator (oc describe co/machine-config)
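
Putting the above together, a minimal sketch of the collection steps (pod names are placeholders to fill in from the pod listing):

# Preferred: a full must-gather captures all of the above and more
oc adm must-gather

# Otherwise, collect the individual pieces
oc get pods -n openshift-machine-config-operator
oc logs -n openshift-machine-config-operator -c machine-config-daemon <machine-config-daemon-pod>
oc logs -n openshift-machine-config-operator <machine-config-controller-pod>
oc describe mcp/master mcp/worker
oc describe co/machine-config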

So I'm going to guess what's happening is this:
Your cluster is still functional at the moment, but it is set to have 5 control-plane nodes. When you lost one (powered it off) without explicitly removing the node, the cluster thinks something has gone wrong, and the machine-config-operator (among others) considers the cluster to be in a degraded state. The MCO is probably seeing a node missing (4 available, 5 required), which is a degraded state.

Note also that the MCO doesn't distinguish very well between "available, degraded" and "not available, degraded". So the MCO should still be operational/available in this state, and we're just reporting it poorly. (I'm not 100% sure about this and would need the must-gather to know for sure.)

To "fix" this issue you'd need to either
 1. get the node back into the cluster
 2. add a new node to fulfil the 5 control-plane setup: https://docs.openshift.com/container-platform/4.5/backup_and_restore/replacing-unhealthy-etcd-member.html
 3. remove a control-plane node entirely
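
For options 2 and 3, a minimal sketch of the etcd-side portion of the linked procedure (pod and member names are placeholders; follow the full documented steps):

# Applies only if the lost control-plane node is being permanently replaced or removed;
# if the node will simply be powered back on, no etcd intervention is needed.
oc get pods -n openshift-etcd                 # find a running etcd pod on a healthy master
oc rsh -n openshift-etcd <running-etcd-pod>   # open a shell in that pod
etcdctl member list -w table                  # identify the member on the lost node
etcdctl member remove <member-id>             # remove it so a replacement can join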

The MCO shouldn't require external intervention when the node is back up.

As for the status reporting, we have an upstream issue on this and will work to improve it: https://github.com/openshift/machine-config-operator/issues/1746

Comment 15 Yu Qi Zhang 2020-07-24 18:54:05 UTC
And to summarize a bit: I think "losing a master via losing power" should be considered a degraded state for the cluster. "Losing a master when I explicitly remove it" should not degrade the cluster, as long as the remaining control-plane node count is >= 3.

Comment 16 Philip Chan 2020-07-27 21:48:28 UTC
Hi Yu,

Yes, that is correct -- the cluster is operational with the loss of the 1 control-plane node. We go from 5 to 4 available masters.  We explicitly tested the availability and status of the cluster by shutting down the zVM guest hosting one of the master nodes.  I will attach the logs you requested from performing these steps after posting this update.  Please note that one of the pods (machine-config-daemon-h5mhf) did not succeed when running oc logs against the machine-config-daemons. It kept reporting this while the others succeeded:

# oc logs -n openshift-machine-config-operator -c machine-config-daemon machine-config-daemon-h5mhf > machine-config-daemon-h5mhf.log
Error from server: Get https://10.20.116.155:10250/containerLogs/openshift-machine-config-operator/machine-config-daemon-h5mhf/machine-config-daemon: dial tcp 10.20.116.155:10250: connect: no route to host

Also, please note this cluster is now running the latest nightly build 4.5.0-0.nightly-s390x-2020-07-25-031236.

The "fix" you refer to -- we did bring back the master that we took down to see what if the cluster properly recovers.  The cluster member does come back up and the cluster operators do recover.  The node and operators all show Available status with no degradation, absolutely no intervention is required.  So that is the good news.

I believe where we are getting confused and/or misled is in the varying combinations of what is meant by Available, Not Available, and Degraded for the MCO.  If we see True for Available and False for Degraded, then the cluster is "Good".  If we see False for Available and True for Degraded, then I would think the cluster is "Bad".  For the GitHub issue that is handling these concerns, a clearer delineation between degraded-but-functional and truly degraded would be best for reporting the status.  Currently I have at least the minimum number of master nodes operational (4 are up), even though that is not the (5) I configured; I feel that a better message or state needs to be reported, e.g. "not optimal" versus "degraded".

Thank you,
-Phil Chan

Comment 17 Philip Chan 2020-07-27 21:48:58 UTC
Created attachment 1702587 [details]
logs for comment 14

Comment 18 Yu Qi Zhang 2020-07-29 21:50:17 UTC
Hi Phillip,

I took a look at the logs and they seem fine. Good to hear that the cluster can recover. We will work on the status reporting on the MCO side (specifically, reporting Available when the MCO is still operational).

By definition from the openshift API, https://github.com/openshift/api/blob/1de8998c03576489bc63be894395bc9b9e1b757b/config/v1/types_cluster_operator.go#L142-L168

Degraded means that the operator is not in its desired state. The MCO is trying to run 5 machine-config-daemons on the masters, and since one node is unavailable, one machine-config-daemon pod isn't running and is thus "degraded". I think in this case the MCO should be "available, degraded" by that definition. I agree the terminology is slightly confusing. The OpenShift definition of "degraded" really just means "not optimal" as far as I understand it.
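
For reference, a minimal jsonpath sketch for viewing the conditions the MCO is reporting (the output formatting is illustrative):

# Print type/status/message for each condition on the machine-config cluster operator
oc get co machine-config -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.message}{"\n"}{end}'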

We'll keep the upstream issue as a tracker for the MCO, and the docs team will also help in making this better documented. Thanks!

Comment 20 Dan Li 2021-02-03 15:16:44 UTC
@sniemann @stephanie Stout tagging you so that you are aware of the s390x documentation bug. Is this something that the Multi-Arch documentation team could take on?

Comment 21 Silke Niemann 2021-02-08 14:54:43 UTC
sstout: I got feedback from the IBM Z team. This is a general OpenShift issue and not Multi-Arch-specific.

Comment 22 Dan Li 2021-06-30 14:09:45 UTC
The Multi-Arch team is closing this bug, as 4.8 is GA'ing and 4.5 will be approaching end of life in 2 weeks.

Christian Lapolt will follow up with Phil.

Comment 23 Red Hat Bugzilla 2023-09-15 00:34:22 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 500 days

