Bug 1952509 - The pod disruption budget is below the minimum number allowed pods.
Summary: The pod disruption budget is below the minimum number allowed pods.
Keywords:
Status: CLOSED DUPLICATE of bug 1873555
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Virtualization
Version: 2.6.1
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Assignee: sgott
QA Contact: Israel Pinto
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-04-22 12:38 UTC by Patrik Martinsson
Modified: 2021-04-26 10:19 UTC
CC: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-04-26 08:03:03 UTC
Target Upstream Version:
Embargoed:



Description Patrik Martinsson 2021-04-22 12:38:50 UTC
Description of problem:

The alert "The pod disruption budget is below the minimum number allowed pods." is triggered for every namespace that sets up a virtual machine.

Version-Release number of selected component (if applicable):

OpenShift version                  : 4.6.7
kubevirt-hyperconverged-operator   : 2.6.1


How reproducible:

Always. 


Steps to Reproduce:
1. Configure an OpenShift cluster, version 4.6.7
2. Enable the virtualization operator
3. Create "HyperConverged"
4. Create VM.

Actual results:

VM is created successfully. 
Alert is firing, "The pod disruption budget is below the minimum number allowed pods."

# From my namespace, 
$ > oc get pdb
NAME                               MIN AVAILABLE   MAX UNAVAILABLE   ALLOWED DISRUPTIONS   AGE
kubevirt-disruption-budget-d2n2z   2               N/A               0                     2d2h

$ > oc describe pdb
Name:           kubevirt-disruption-budget-d2n2z
Namespace:      xx
Min available:  2
Selector:       kubevirt.io/created-by=79ad5838-4130-473e-a057-5de3fdda9611
Status:
    Allowed disruptions:  0
    Current:              1
    Desired:              2
    Total:                1
Events:                   <none>
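
As a rough sketch of why the alert is firing (the exact rule lives in the cluster's PrometheusRule objects, so this is an assumption based on the status fields above): the budget wants minAvailable: 2 healthy pods, but only 1 exists, so allowed disruptions stays at 0.

```shell
# Hedged sketch: reproduce the alert condition from the PDB status above.
# The values are hard-coded from the `oc describe pdb` output; the real
# alert is evaluated by Prometheus, not by this script.
current_healthy=1   # Status / Current
desired_healthy=2   # Status / Desired (spec.minAvailable)

if [ "$current_healthy" -lt "$desired_healthy" ]; then
  echo "alert fires: allowed disruptions = 0"
else
  echo "budget satisfied"
fi
```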

Expected results:

VM is created successfully. 
No alerts should be firing. 


Additional info:

We can of course silence the alert, e.g. as suggested here: https://access.redhat.com/solutions/5101781
But that seems like a workaround that shouldn't be necessary; please correct me if I'm wrong.
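
For anyone else hitting this, a minimal sketch of that silencing workaround via the Alertmanager API (not the exact KCS procedure): the route host and the alert name matcher below are assumptions for illustration; check your cluster's route and PrometheusRule objects for the real values.

```shell
# Hedged sketch: build a silence payload for the Alertmanager v2 API.
# ALERTMANAGER host and the "PodDisruptionBudgetLimit" alertname are
# assumptions; substitute the values from your own cluster.
ALERTMANAGER="alertmanager-main-openshift-monitoring.apps.example.com"
payload='{
  "matchers": [
    { "name": "alertname", "value": "PodDisruptionBudgetLimit", "isRegex": false }
  ],
  "startsAt": "2021-04-22T00:00:00Z",
  "endsAt": "2021-05-22T00:00:00Z",
  "createdBy": "patrik",
  "comment": "Silence noisy kubevirt PDB alert (see bug 1873555)"
}'
echo "$payload"

# With a logged-in oc session, the silence would be created roughly like:
# curl -k -H "Authorization: Bearer $(oc whoami -t)" \
#      -H "Content-Type: application/json" \
#      -d "$payload" "https://$ALERTMANAGER/api/v2/silences"
```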

Best regards, 
Patrik,
Sweden

Comment 1 Patrik Martinsson 2021-04-23 06:53:38 UTC
I'm not sure which component this belongs to. I would put it under "Container Native Virtualization (CNV)", but that one doesn't seem to exist anymore; I guess the team that handles the Virtualization operator would be a good fit.

Best regards, 
Patrik, 
Sweden

Comment 3 Dan Kenigsberg 2021-04-26 06:10:18 UTC
(In reply to Patrik Martinsson from comment #1)
> I'm not sure which component this belongs to, I would put it under
> "Container Native Virtualization (CNV)", but that one doesn't seem to exist
> anymore

It's a bit confusing, but CNV is a Product in Bugzilla. Moving the bug there.

Can you share how you've installed CNV-2.6.1 on OCP-4.6? CNV-2.6 is expected to run on OCP-4.7 (and, during upgrade, on OCP-4.8). On my cluster I see a few:
$ oc get pdb -A |grep kubevirt|wc -l
3
but certainly not on all namespaces running VMs.
$ oc get vmi -A --no-headers | awk '{print $1}'|sort|uniq |wc -l
42

Would you be able to upgrade OCP to 4.7? If the problem remains, can you attach must-gather https://docs.openshift.com/container-platform/4.7/virt/logging_events_monitoring/virt-collecting-virt-data.html#gathering-data-specific-features_virt-collecting-virt-data ?

If you are a Red Hat customer, you should probably file a support case at https://access.redhat.com/ and attach this bug to it.

Comment 4 Patrik Martinsson 2021-04-26 07:25:26 UTC
Hi Dan, 

Thanks for answering. 

>>Can you share how you've installed CNV-2.6.1 on OCP-4.6? 

4.6.7 was a typo; I meant 4.7.6. But yes, the installed version of OpenShift Virtualization is 2.6.1 (I installed it as usual, through the GUI, no problem).

$ > oc get pdb -A |grep kubevirt|wc -l
3

$ > oc get vmi -A --no-headers
openshift-virtualization-os-images   win2k16-functional-lion      3d15h   Running   10.144.2.180   tt16.xx.xx
openshift-virtualization-os-images   win2k16-protective-swallow   4d21h   Running   10.144.2.65    tt16.xx.xx

So yes, it seems to work as expected, *except* that we get alerts for "The pod disruption budget is below the minimum number allowed pods.".

We are a RH customer, but I usually feel that I get a better response through Bugzilla, hence I reported it here.

I just don't understand why this alarm is firing, or what it is supposed to do:

$ > oc get pdb kubevirt-disruption-budget-d2n2z -o yaml
apiVersion: v1
items:
- apiVersion: policy/v1beta1
  kind: PodDisruptionBudget
  metadata:
    creationTimestamp: "2021-04-20T09:56:10Z"
    generateName: kubevirt-disruption-budget-
    generation: 1
    managedFields:
    name: kubevirt-disruption-budget-d2n2z
    namespace: user-manpi
    ownerReferences:
    - apiVersion: kubevirt.io/v1alpha3
      blockOwnerDeletion: true
      controller: true
      kind: VirtualMachineInstance
      name: vm-manpi-rhel8-3
      uid: 79ad5838-4130-473e-a057-5de3fdda9611
    resourceVersion: "28026566"
  spec:
    minAvailable: 2
    selector:
      matchLabels:
        kubevirt.io/created-by: 79ad5838-4130-473e-a057-5de3fdda9611
  status:
    currentHealthy: 1
    desiredHealthy: 2
    disruptionsAllowed: 0
    expectedPods: 1
    observedGeneration: 1
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

Any idea about this, or how I can debug this further?

Best, 
Patrik,
Sweden

Comment 5 Shaul Garbourg 2021-04-26 07:55:52 UTC
Looks like a duplicate of BZ 1873555.

Comment 6 Patrik Martinsson 2021-04-26 08:02:41 UTC
Pretty sure this is a duplicate; I'll close this.

Thanks for pointing me in the right direction.

Best,
Patrik,
Sweden

Comment 7 Patrik Martinsson 2021-04-26 08:03:03 UTC

*** This bug has been marked as a duplicate of bug 1873555 ***

Comment 8 Dan Kenigsberg 2021-04-26 09:54:48 UTC
> We are a RH customer, but I usually feel that I get better response from Bugzilla's, hence I reported it here. 

What can I say but "the customer is always right". It would be helpful to me if you do file a case and attach it to that other bug; it would help us prioritize the fix and provide better service in general. Regardless, we are interested in hearing how you use OpenShift Virtualization. Please share it with us there.

Comment 9 Patrik Martinsson 2021-04-26 10:19:16 UTC
(In reply to Dan Kenigsberg from comment #8)
> > We are a RH customer, but I usually feel that I get better response from Bugzilla's, hence I reported it here. 
> 
> What can I say but "the customer is always right". It would be helpful to me
> if you do file a case and attach it to that other bug. It would help us
> prioritize the fix and provide better service in general. 

Sure thing, reported it as a support case as well, https://access.redhat.com/support/cases/#/case/02926303

> Regardless, we are
> interested to hear about how you use OpenShift Virtualization. Please share
> it with us there.

Our business simply uses the virtualization feature to let us migrate older workloads based on .NET Framework X, which we can't rewrite to utilize .NET Core.
So the simplest solution for us is to migrate those VMs (actually, create new VMs and just migrate the workload), and take it from there.
Doing so, we can hopefully take advantage of the other features OCP 4.x delivers, such as workflows, CI/CD, etc.

Hope that answers your question, at least somewhat.

Best regards,
Patrik, 
Sweden

