Bug 1966137
Summary: | Custom VMs are not highlighted in must-gather | |
---|---|---|---
Product: | Container Native Virtualization (CNV) | Reporter: | Fabian Deutsch <fdeutsch>
Component: | Logging | Assignee: | Nahshon Unna-Tsameret <nunnatsa>
Status: | CLOSED ERRATA | QA Contact: | Satyajit Bulage <sbulage>
Severity: | medium | Docs Contact: |
Priority: | unspecified | |
Version: | 2.6.0 | CC: | cnv-qe-bugs, ipinto, kmajcher, nunnatsa, ocohen, oramraz, rkishner, rnetser, sbulage, stirabos
Target Milestone: | --- | |
Target Release: | 4.9.3 | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | hco-bundle-registry-container-v4.9.3-1 | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2022-02-22 21:54:59 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | 2042856, 2042880, 2045143, 2045156 | |
Bug Blocks: | | |
Description
Fabian Deutsch
2021-05-31 13:25:30 UTC
@fdeutsch can you please explain what "marked" means? Does it mean adding a label/annotation? If so, for VMs created from templates we have labels that can distinguish them.

For now I'd only expect the must-gather collection to separate VMs created from templates from those created from scratch - or at least without a template. It would be as simple as putting them in different sections, i.e.

"""
Template-based VMs:
rhel8-bar
…

Custom VMs:
my-vm02
…
"""

As discussed, custom VMs and VMs from templates should reside under two separate directories under the relevant namespace. The fix includes only VMs, no VMIs or any other resource type.

1. Created a custom VM and a template-based VM.
2. Ran: oc adm must-gather --image=quay.io/kubevirt/must-gather -- /usr/bin/gather_virtualmachines
3. Verified the VM YAML files are in two separate folders: template-based, custom.

* At the moment the must-gather command contains different code changes depending on the image name, which affects some of the folder names. Running this command with other images will produce different results, for example:
--image=registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-cnv-must-gather-rhel8:v4.9.1

Waiting for confirmation on whether the function should only return VMs, without VMIs/virt-launcher.

Hello Roni,

Can you please tell me more about comment 5 - what is missing? I followed the same steps mentioned in the BZ description and am able to see the virtual machines in separate folders, custom and template-based.

Thanks,
Satyajit.

Nothing is missing; this bug is verified for separating the VMs into two folders. The reason it was not moved to VERIFIED is that a discussion was needed about why we only gather the VM info without the VMI info. Looking at the must-gather repository now, I can see a PR was made and merged after the discussion. Moving to VERIFIED now, thank you.

Reopen the bug. The new script is looking for the `vm.kubevirt.io/template` key in annotations instead of labels, and so all the VMs go to the custom directory.

(In reply to Nahshon Unna-Tsameret from comment #9)
> Reopen the bug. The new script is looking for the `vm.kubevirt.io/template`
> key in annotations instead of labels, and so all the VMs go to the custom
> directory.

Hi Nahshon, how urgent is it? Can we update the target version for this bug to 4.9.3?

@oramraz - I don't really know. I don't fully understand this separation (custom / template-based), but the current version fails to do it. Moving this to 4.9.3. I believe we can use the 4.10 must-gather (which is fixed) on the 4.9.x line if needed.

When running the 4.9.3 version, all the VMs are in the custom directory. When running the 4.10 version, all the VMs are in the template-based directory. I don't think 4.10 is fixed; I created several VMs in different ways and all of them ended up in template-based.

New fix merged upstream for 4.10: https://github.com/kubevirt/must-gather/pull/117. Now working on downstream adoption.

New cherry-pick for 4.9 is in https://github.com/kubevirt/must-gather/pull/121

Verified that the two folders are being created and VMs are being collected. Was it decided that only VMs should be collected and no VMIs?

VMIs are collected as well, but they are not split into custom/template-based.
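For context on the label-vs-annotation point above, here is a minimal sketch (not the actual gather_virtualmachines code from kubevirt/must-gather) of the kind of check that decides which directory a VM YAML lands in. The loop structure and jsonpath query are illustrative assumptions; the directory layout mirrors the paths shown in the verification output below.

# Illustrative sketch only - not the real must-gather script. Sort each VM into
# template-based/ or custom/ based on the vm.kubevirt.io/template label
# (reading the key from annotations instead is exactly the bug described above).
for ns in $(oc get ns -o jsonpath='{.items[*].metadata.name}'); do
  for vm in $(oc get vm -n "$ns" -o jsonpath='{.items[*].metadata.name}' 2>/dev/null); do
    template=$(oc get vm -n "$ns" "$vm" \
      -o jsonpath="{.metadata.labels['vm\.kubevirt\.io/template']}")
    if [ -n "$template" ]; then
      dir="$ns/kubevirt.io/virtualmachines/template-based"
    else
      dir="$ns/kubevirt.io/virtualmachines/custom"
    fi
    mkdir -p "$dir"
    oc get vm -n "$ns" "$vm" -o yaml > "$dir/$vm.yaml"
  done
done

The actual logic lives in the kubevirt/must-gather repository (see the PRs linked above); the sketch only shows why reading the key from labels vs. annotations changes which directory every VM is written to.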
I am able to see the segregation of VMs as follows:

==============
For CNV 4.9.3:
==============

$ ls default/kubevirt.io/virtualmachines/custom/ default/kubevirt.io/virtualmachines/template-based/ openshift-cnv/kubevirt.io/virtualmachines/template-based/

Output:

default/kubevirt.io/virtualmachines/custom/:
custom-vm.yaml

default/kubevirt.io/virtualmachines/template-based/:
vm-example.yaml

openshift-cnv/kubevirt.io/virtualmachines/template-based/:
vm-template-example-semantic-sparrow.yaml

=============
For CNV 4.10:
=============

$ ls default/kubevirt.io/virtualmachines/custom/ default/kubevirt.io/virtualmachines/template-based/

Output:

default/kubevirt.io/virtualmachines/custom/:
vm-example-default-ns.yaml  vm-example.yaml

default/kubevirt.io/virtualmachines/template-based/:
vm-template-example-institutional-platypus.yaml

Verifying BZ.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (OpenShift Virtualization 4.9.3 Images), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:0641
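As a reference for reproducing the verification above, the commands below outline the general flow. The image is the upstream example used earlier in this bug (downstream images differ), and the paths are the ones shown in the verification output; the name of the generated must-gather.local.* output directory varies per run.

# Run the kubevirt gather script via must-gather (upstream image, as used above):
oc adm must-gather --image=quay.io/kubevirt/must-gather -- /usr/bin/gather_virtualmachines

# From inside the gathered image directory under must-gather.local.*/, the VM
# YAMLs should be split per namespace into custom/ and template-based/, e.g.:
ls default/kubevirt.io/virtualmachines/custom/ \
   default/kubevirt.io/virtualmachines/template-based/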