Description of problem: In order to scale up nodes, we need the ability to configure the devices.kubevirt.io resources: tun/kvm/vhost-net. Currently the value is hard-coded.
Target release primarily depends on when we want to test further scale up. Israel, one thing to test (as a workaround) is actually: try to manually edit the allocatable resources and hope they don't get overwritten again. Steps (see the sketch below):
1. Start virt-handler on a node.
2.a. Use oc edit node $THENODE.
2.b. Change the allocatable resource of the kvm device in the yaml.
3. Check with oc describe node $THENODE whether the value is still the increased value.
If this workaround does not work, then we need to implement an override mechanism for the value.
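For reference, the section to change in step 2.b would look roughly like this; the resource name follows the devices.kubevirt.io prefix from the description, and the value 250 is just an example, not a recommendation:
```
# Illustrative excerpt of `oc edit node $THENODE`; 250 is an example value.
status:
  allocatable:
    devices.kubevirt.io/kvm: "250"
```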
This is a bona fide bug that we need to fix. It turns out we do implement an override mechanism, it's just broken.
I tried the workaround; unfortunately it does not work, the value gets overwritten.
Fair enough, I somewhat expected this.
Stu, can you please describe the steps for using the fix in the PR to change the maximum number of devices?
The PR has been merged into upstream master. I'll follow up to make sure this flag is documented upstream, but it wouldn't hurt to mention how to do it here too. In the virt-handler.yaml manifest, the command to be run is a list of strings along the lines of this:
```
- command:
  - virt-handler
  - --port
  - "8443"
  - --hostname-override
  - $(NODE_NAME)
  - --pod-ip-address
  - $(MY_POD_IP)
  - -v
  - 3
```
Simply add these lines to that list:
```
  - --max-devices
  - $(MAX_DEVICES)
```
Where $(MAX_DEVICES) is a placeholder for the number of devices desired. The default is 110.
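For orientation, here is a rough sketch of where that list sits inside the virt-handler daemonset and what it could look like with the flag added; the surrounding fields and the value "250" are placeholders, not the actual manifest:
```
# Hypothetical excerpt of the virt-handler DaemonSet pod template; only the
# command list matters here, the other fields are placeholders.
spec:
  template:
    spec:
      containers:
      - name: virt-handler
        command:
        - virt-handler
        - --port
        - "8443"
        - --hostname-override
        - $(NODE_NAME)
        - --pod-ip-address
        - $(MY_POD_IP)
        - -v
        - "3"
        # added: raise the device plugin limit (250 is an example value;
        # command arguments must be strings)
        - --max-devices
        - "250"
```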
Stu, can you please also add this information to the user-guide?
Steps to verify (a rough example session follows below):
$ kubectl -n kubevirt edit ds virt-handler
Add "--max-devices XXX" to the list of commands as in comment #6. NOTE: make sure the number added is a string (these are command line arguments). Once the new manifest is saved, there should be no errors on the command line.
$ kubectl -n kubevirt get pod
Verify each virt-handler pod has been restarted (a shorter uptime than the rest of the pods).
$ kubectl get nodes -o yaml
Verify the number of devices specified in the first step is now available.
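An illustration of what that session might look like; the grep pattern and the "250" value are just examples, not output from an actual run:
```
# 1. Add --max-devices and the desired count to the virt-handler command list
$ kubectl -n kubevirt edit ds virt-handler

# 2. Check that the virt-handler pods have restarted (look at their AGE)
$ kubectl -n kubevirt get pod

# 3. Check that the nodes now advertise the new device count
$ kubectl get nodes -o yaml | grep devices.kubevirt.io
#    expect something like: devices.kubevirt.io/kvm: "250"
```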
We checked it on 0.15 - it works.
virt-handler pods are not restarted automatically - only when I delete them do they come up with the new configuration.
> virt-handler pods are not restarted automatically - only when I delete them do they come up with the new configuration.

If the daemonset has
```
updateStrategy:
  type: RollingUpdate
```
set in the spec, then I'd expect the changes to roll out automatically.
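For what it's worth, one way to check which strategy the daemonset actually uses (the jsonpath query here is just an example):
```
$ kubectl -n kubevirt get ds virt-handler -o jsonpath='{.spec.updateStrategy.type}'
```
If that prints OnDelete rather than RollingUpdate, pods only pick up changes after being deleted manually, which would match the behavior described above.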
> Steps to verify:
> $ kubectl -n kubevirt edit ds virt-handler
> add "--max-devices XXX" to the list of commands as in comment #6

I just want to point out that this solution isn't reliable moving forward. The cluster-admin (or anything other than the virt-operator) does not own the virt-handler daemonset. Now that we have virt-operator managing the rollout and deployment of the KubeVirt infrastructure, there's no guarantee any modification we make directly to k8s objects created by virt-operator will stick. Virt-operator will eventually re-converge on the daemonset and force it back to the values virt-operator expects.

Basically, we can't manually edit any k8s objects created by virt-operator. It will look like it's working today, but that will go away in unexpected ways as future releases are used/tested. Instead we have to either use the kubevirt-config configmap or add an option to the KubeVirt CRD.

To avoid this entirely for kvm/tun/vhost-net, can we not just set it to something absurdly high like 1028? We're going to be throttled by the pod limit anyway.
The reason it was set to the (default) pod limit in the first place is that each device allocated represents an element in a list. A few thousand shouldn't be an issue, but it shouldn't be open-ended.
As David pointed out, the daemonset not updating automatically is expected behavior (sorry, I misled in Comment #8). Per Comment #9, is there anything else needed before moving this issue to VERIFIED?
Can we open an RFE for the automatic update? This RFE is about enabling the ability to update the devices, which is working. Verifying it.
As pointed out in Comment #12, virt-operator will be in charge of the state of all KubeVirt components going forward. While changing the runtime state of the virt-handler daemonset was enough to help you with testing so far, it will not and cannot maintain hand-jammed settings in a long-running cluster in future releases.

The RFE you really want is a way to manage the number of devices via the operator, but that might be a dead-end in the long run. There's an open question as to why we'd use the device plugin framework going forward -- it's designed for devices that can't be shared, e.g. GPUs.

I think our energy is much better spent on addressing the root cause of the frustration here. Exposing knobs so the cluster manager can fiddle with artificial numbers is really just a band-aid on a workaround. A much shorter-term question is to revisit the default number of tun/kvm/vhost-net devices and determine if raising that value is reasonable.