Bug 1673438 - [RFE] devices.kubevirt.io: tun/KVM/vhost-net on the node need to be configurable
Summary: [RFE] devices.kubevirt.io: tun/KVM/vhost-net on the node need to be configurable
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Virtualization
Version: future
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 2.0
Assignee: sgott
QA Contact: guy chen
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-02-07 14:20 UTC by Israel Pinto
Modified: 2019-10-22 12:33 UTC
CC List: 9 users

Fixed In Version: virt-handler-container-v2.0.0-6
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-10-22 12:33:54 UTC
Target Upstream Version:




Links
Github kubevirt/kubevirt pull 2014 (closed): Fix max devices for virt-handler device plugins (last updated 2020-11-18 13:10:51 UTC)
Github kubevirt/user-guide pull 218 (closed): Add instructions regarding virt-handler device plugins (last updated 2020-11-18 13:10:29 UTC)

Description Israel Pinto 2019-02-07 14:20:03 UTC
Description of problem:
In order to scale up a node, we need the ability to configure
devices.kubevirt.io: tun/KVM/vhost-net.
Currently these values are hard-coded.

Comment 1 Fabian Deutsch 2019-02-07 14:31:36 UTC
Target release primarily depends on when we want to test further scale up.

Israel, one thing to test as a workaround: manually edit the allocatable resources and check whether the change gets overwritten.

Steps:
1. Start virt-handler on a node
2.a. Use oc edit node $THENODE
2.b. Change the allocatable resource of the kvm device in the YAML (see the illustrative snippet below)
3. Check with oc describe node $THENODE whether the value is still the increased one
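
For reference, the part of the node object that step 2.b edits looks roughly like this; the resource names are the ones from this bug, but the snippet itself is only an illustrative sketch using the default count of 110, not output captured from a real node:

```
status:
  allocatable:
    devices.kubevirt.io/kvm: "110"
    devices.kubevirt.io/tun: "110"
    devices.kubevirt.io/vhost-net: "110"
```

For step 3, piping through grep, e.g. "oc describe node $THENODE | grep devices.kubevirt.io", is one quick way to see whether the edited values survived.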

If this workaround does not work, then we need to implement an override mechanism for the value.

Comment 2 sgott 2019-02-07 20:43:52 UTC
This is a bona fide bug that we need to fix. It turns out we do implement an override mechanism; it's just broken.

Comment 3 guy chen 2019-02-10 10:22:37 UTC
I tried the workaround; unfortunately it does not work, the value gets overwritten.

Comment 4 Fabian Deutsch 2019-02-11 08:43:58 UTC
Fair enough, I somewhat expected this.

Comment 5 Fabian Deutsch 2019-02-11 10:19:29 UTC
Stu, can you please describe the steps for using the fix in the PR to change the maximum number of devices?

Comment 6 sgott 2019-02-18 13:40:37 UTC
The PR has been merged into upstream master. I'll follow up to make sure this flag is documented upstream, but it wouldn't hurt to mention how to do it here too.

In the virt-handler.yaml manifest, the command to be run is a list of strings along these lines:

      - command:
        - virt-handler
        - --port
        - "8443"
        - --hostname-override
        - $(NODE_NAME)
        - --pod-ip-address
        - $(MY_POD_IP)
        - -v
        - "3"

Simply add these lines to that list:

        - --max-devices
        - $(MAX_DEVICES)

where $(MAX_DEVICES) is a placeholder for the desired number of devices. The default is 110.
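
Put together, the command list could end up looking something like the following, with the new flag appended; the "250" here is an arbitrary example value, not a recommendation:

      - command:
        - virt-handler
        - --port
        - "8443"
        - --hostname-override
        - $(NODE_NAME)
        - --pod-ip-address
        - $(MY_POD_IP)
        - -v
        - "3"
        - --max-devices
        - "250"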

Comment 7 Fabian Deutsch 2019-02-20 12:59:57 UTC
Stu, can you please also provide this information to the user-guide?

Comment 8 sgott 2019-03-12 18:17:43 UTC
Steps to verify:

$ kubectl -n kubevirt edit ds virt-handler

add "--max-devices XXX" to the list of commands as in comment #6
NOTE: make sure the number added is a string (these are command line arguments)
Once the new manifest is saved, there should be no errors on the command line.

$ kubectl -n kubevirt get pod
verify each virt-handler pod has been restarted (a shorter uptime than the rest of the pods)

$ kubectl get nodes -o yaml
verify the number of devices specified in the first step is now available.
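
One quick way to run that last check is to filter the node YAML down to the relevant lines (the grep is only a convenience, not part of the procedure):

$ kubectl get nodes -o yaml | grep 'devices.kubevirt.io'

Both the capacity and allocatable entries for kvm, tun, and vhost-net should report the new number.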

Comment 9 Israel Pinto 2019-03-26 12:36:37 UTC
We checked it on 0.15 - it works.

Comment 10 guy chen 2019-04-03 08:47:59 UTC
The virt-handler pods are not restarted automatically - only when I delete them do they come up with the new configuration.

Comment 11 David Vossel 2019-04-03 13:21:20 UTC
> The virt-handler pods are not restarted automatically - only when I delete them do they come up with the new configuration.

If the daemonset has

```
  updateStrategy:
    type: RollingUpdate

```

set in the spec, then I'd expect the changes to roll out automatically.
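
A quick way to confirm what the daemonset currently has set (the jsonpath expression is just one way to read that field):

$ kubectl -n kubevirt get ds virt-handler -o jsonpath='{.spec.updateStrategy.type}'

If that prints RollingUpdate, edits to the pod template should trigger a rollout on their own.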

Comment 12 David Vossel 2019-04-03 13:29:07 UTC
> Steps to verify:
> $ kubectl -n kubevirt edit ds virt-handler
> add "--max-devices XXX" to the list of commands as in comment #6

I just want to point out that this solution isn't reliable moving forward.  The cluster-admin (or anything other than the virt-operator) does not own the virt-handler daemonset.  Now that we have virt-operator managing the rollout and deployment of the KubeVirt infrastructure, there's no guarantee any modification we make directly to k8s objects created by virt-operator will stick. Virt-operator will eventually re-converge on the daemonset and force it back to the values virt-operator expects. 

Basically, we can't manually edit any k8s objects created by virt-operator. It will look like it's working today, but the change will be lost in unexpected ways as future releases are used/tested. Instead we have to either use the kubevirt-config configmap or add an option to the KubeVirt CRD.

To avoid this entirely for kvm/tun/vhost-net, can we not just set it to something absurdly high like 1028? We're going to be throttled by the pod limit anyway.

Comment 13 sgott 2019-04-03 13:34:09 UTC
The reason it was set to the (default) pod limit in the first place is that each device allocated represents an element in a list. A few thousand shouldn't be an issue, but it shouldn't be open-ended.

Comment 14 sgott 2019-04-03 14:32:33 UTC
As David pointed out, the daemonset not updating automatically is expected behavior (sorry I misled in Comment #8). Per Comment #9, is there anything else needed before moving this issue to VERIFIED?

Comment 15 Israel Pinto 2019-04-22 13:00:16 UTC
Can we open an RFE for the automatic updating?
This RFE is about enabling updates to the device counts, which is working.
Verifying it.

Comment 16 sgott 2019-04-22 14:30:53 UTC
As pointed out in Comment #12, virt-operator will be in charge of the state of all KubeVirt components going forward. While changing the runtime state of the virt-handler daemonset was enough to help you with testing so far, it will not and cannot maintain hand-jammed settings in a long-running cluster in future releases.

The RFE you really want is a way to manage the number of devices via the operator, but that might be a dead-end in the long run. There's an open question as to why we'd use the device plugin framework going forward -- it's designed for devices that can't be shared, e.g. GPU. I think our energy is much better spent on addressing the root cause of the frustration here. Exposing knobs so the cluster manager can fiddle with artificial numbers is really just a band aid on a workaround.

A much shorter-term question is to revisit the default number of tun/kvm/vhost-net devices and determine whether raising that value is reasonable.

