Bug 2097536
Summary: | [RFE] Add disk name and uuid to problems output | ||
---|---|---|---|
Product: | Red Hat Enterprise Virtualization Manager | Reporter: | Germano Veit Michel <gveitmic> |
Component: | rhv-log-collector-analyzer | Assignee: | Germano Veit Michel <gveitmic> |
Status: | CLOSED ERRATA | QA Contact: | Barbora Dolezalova <bdolezal> |
Severity: | low | Docs Contact: | |
Priority: | low | ||
Version: | 4.5.0 | CC: | emarcus, mavital |
Target Milestone: | ovirt-4.5.2 | Keywords: | FieldEngineering, FutureFeature, ZStream |
Target Release: | 4.5.2 | ||
Hardware: | x86_64 | ||
OS: | Linux | ||
Whiteboard: | |||
Fixed In Version: | rhv-log-collector-analyzer-1.0.15 | Doc Type: | Enhancement |
Doc Text: |
In this release, the rhv-log-collector-analyzer now provides a detailed output for each problematic image, including disk names, associated virtual machine, the host running the virtual machine, snapshots, and the current Storage Pool Manager. This makes it easier to identify problematic virtual machines and collect SOS reports for related systems.
The detailed view is now the default, and the compact option can be set by using the --compact switch in the command line.
|
Story Points: | --- |
Clone Of: | Environment: | ||
Last Closed: | 2022-09-08 11:28:54 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | Integration | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: |
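The Doc Text above describes two output modes: a detailed view (now the default) showing disk names, the associated VM, host, snapshots, and the current SPM, and a one-line compact view selected with `--compact`. As a rough illustration only, the difference between the two modes could be sketched like this; the `ProblemImage` fields and the `render` function are hypothetical and are not the tool's actual internals:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProblemImage:
    # Illustrative record of one problematic image; field names are assumptions.
    image_id: str
    disk_id: str
    disk_name: Optional[str] = None
    vm: Optional[str] = None
    host: Optional[str] = None
    snapshot: Optional[str] = None
    problem: str = ""

def render(img: ProblemImage, compact: bool = False) -> str:
    """Compact mode: one line per problem. Detailed mode (the new default):
    also show disk name, attached VM, host and snapshot."""
    if compact:
        return f"Image {img.image_id}: {img.problem}"
    name = f" ({img.disk_name})" if img.disk_name else ""
    lines = [f"Image {img.image_id}", f"  Disk: {img.disk_id}{name}"]
    if img.vm:
        lines.append(f"  Attached to VM: {img.vm}, running on {img.host}")
    if img.snapshot:
        lines.append(f"  Snapshot: {img.snapshot}")
    lines.append(f"  Problem: {img.problem}")
    return "\n".join(lines)
```

The extra context in the detailed view is what makes it easier to pick the right systems for SOS report collection, which is the motivation stated in the Doc Text.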
Description
Germano Veit Michel
2022-06-15 22:08:11 UTC
I guess 4.5.1 is too late, so probably 4.5.2

```
[root@rhvm tmp]# ./rhv-image-discrepancies
IT IS HIGHLY RECOMMENDED YOU RUN THIS WITH RED HAT SUPPORT INVOLVED
Do you want to continue? [y/N] y
Running dump-volume-chains 8da39e5e-d33c-49a8-80f7-028bf3de7164 on rhvh-2.toca.local
Running dump-volume-chains 8a0c4f4e-1969-47f3-bc73-968ab20b0985 on rhvh-1.toca.local
Running dump-volume-chains 3a88a033-3ba2-4963-a63e-dca4c3e591f6 on rhvh-1.toca.local

Checking storage domain 'NFS2' (8da39e5e-d33c-49a8-80f7-028bf3de7164) from data-center DDD, SPM is rhvh-2
  No problems found

Checking storage domain 'NFS' (8a0c4f4e-1969-47f3-bc73-968ab20b0985) from data-center Default, SPM is rhvh-1
  Image 3bfe45ef-6238-41e5-8d4d-215b54a24597
    Disk: 70afe957-f0ae-4c0d-bf1d-ef04768f1aae (shared_disk)
    Shared disk attached to VMs: othervm, not running
                                 windows, running on rhvh-1
    Problem: attribute capacity differs in storage(10737418240) and in database(1073741824)

Checking storage domain 'iSCSI' (3a88a033-3ba2-4963-a63e-dca4c3e591f6) from data-center Default, SPM is rhvh-1
  Image ce38ea29-b6f4-4c40-8939-a9bc246ea518
    Disk: bb429d22-78ae-4ea8-ad54-55c66313a14b
    Problem: only present in storage
  Image d45bde93-79f9-4b22-b979-1e777e768c70
    Disk: c9072f20-3add-48e6-b927-7a5849c0b8cb (fsdfsdfsd)
    Problem: only present in database
  Image d9f05f0f-e575-46c3-86bf-744ca93ac2b2
    Disk: 2180a133-1aa4-42e7-9738-7dc325985caa (inc_backup)
    Attached to VM: windows, running on rhvh-1
    Snapshot: LALALAL
    Problem: size in storage(3221225472) < capacity(5368709120), but is preallocated

Total problems found: 4
```

Due to QE capacity, we are not going to cover this issue in our automation.

Verified in rhv-log-collector-analyzer-1.0.15-1.el8ev.noarch

I ran rhv-image-discrepancies, which produced the desired output (the tool showed disk names, the associated VM, the host running the VM, snapshots, and the current SPM):

```
Checking storage domain 'nfs_0' (94a7b213-7942-40e5-94b9-50671ac67c2c) from data-center golden_env_mixed, SPM is host_mixed_2
  Image f788a9c6-450b-476c-92b1-ac9d707b14d3
    Disk: b0a31669-11b4-4f99-8d88-5ed2aeb01602 (to-break)
    Attached to VM: golden_env_mixed_virtio_1_0, running on host_mixed_1
    Snapshot: Active VM
    Problem: size in storage(200704) < capacity(1073741824), but is preallocated
```

This bug has low overall severity and is not going to be further verified by QE. If you believe special care is required, feel free to properly align the relevant severity, flags, and keywords to raise PM_Score, or use one of the bumps ('PrioBumpField', 'PrioBumpGSS', 'PrioBumpPM', 'PrioBumpQA') in Keywords to raise its PM_Score above the verification threshold (1000).

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Important: RHV Manager (ovirt-engine) [ovirt-4.5.2] bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:6393
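The problem messages in the runs above boil down to simple comparisons between what the storage reports and what the engine database records. As a hedged sketch only (the function names and structure are assumptions, not the tool's code), the two checks quoted in the output could look like:

```python
def capacity_mismatch(storage_capacity: int, db_capacity: int) -> bool:
    # Corresponds to: "attribute capacity differs in storage(...) and in database(...)"
    return storage_capacity != db_capacity

def preallocated_too_small(size_in_storage: int, capacity: int,
                           preallocated: bool) -> bool:
    # Corresponds to: "size in storage(...) < capacity(...), but is preallocated"
    # A preallocated volume should occupy its full capacity on storage.
    return preallocated and size_in_storage < capacity
```

For example, the 'NFS' domain problem above has storage capacity 10737418240 versus database capacity 1073741824 (a factor-of-ten mismatch), and the 'inc_backup' disk has only 3221225472 bytes on storage against a 5368709120-byte capacity despite being preallocated.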