Bug 1585013
Summary: | [downstream clone - 4.2.4] ovirt-engine loses track of a cancelled disk | |
---|---|---|---
Product: | Red Hat Enterprise Virtualization Manager | Reporter: | RHV bug bot <rhv-bugzilla-bot>
Component: | ovirt-engine | Assignee: | Daniel Erez <derez>
Status: | CLOSED ERRATA | QA Contact: | Natalie Gavrielov <ngavrilo>
Severity: | high | Docs Contact: |
Priority: | high | |
Version: | 4.2.2 | CC: | derez, ebenahar, gveitmic, ishaby, lsurette, lsvaty, mgoldboi, nsoffer, rbalakri, Rhev-m-bugs, srevivo, tnisan, ykaul, ylavi
Target Milestone: | ovirt-4.2.4 | Keywords: | ZStream
Target Release: | --- | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | ovirt-engine-4.2.4 | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | 1565673 | Environment: |
Last Closed: | 2018-06-27 10:02:42 UTC | Type: | ---
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | Storage | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | 1565673 | |
Bug Blocks: | | |
Description
RHV bug bot
2018-06-01 06:59:41 UTC
On engine: ovirt-ansible-cluster-upgrade-1.1.4-1.el7.centos.noarch ovirt-ansible-disaster-recovery-0.1-1.el7.centos.noarch ovirt-ansible-engine-setup-1.1.0-1.el7.centos.noarch ovirt-ansible-image-template-1.1.5-1.el7.centos.noarch ovirt-ansible-infra-1.1.3-1.el7.centos.noarch ovirt-ansible-manageiq-1.1.5-1.el7.centos.noarch ovirt-ansible-repositories-1.1.0-1.el7.centos.noarch ovirt-ansible-roles-1.1.3-1.el7.centos.noarch ovirt-ansible-vm-infra-1.1.4-1.el7.centos.noarch ovirt-cockpit-sso-0.0.4-1.el7.noarch ovirt-engine-4.2.2.5-1.el7.centos.noarch ovirt-engine-api-explorer-0.0.2-1.el7.centos.noarch ovirt-engine-backend-4.2.2.5-1.el7.centos.noarch ovirt-engine-cli-3.6.9.2-1.el7.centos.noarch ovirt-engine-dashboard-1.2.2-3.el7.centos.noarch ovirt-engine-dbscripts-4.2.2.5-1.el7.centos.noarch ovirt-engine-dwh-4.2.2.2-1.el7.centos.noarch ovirt-engine-dwh-setup-4.2.2.2-1.el7.centos.noarch ovirt-engine-extension-aaa-jdbc-1.1.7-1.el7.centos.noarch ovirt-engine-extensions-api-impl-4.2.2.5-1.el7.centos.noarch ovirt-engine-lib-4.2.2.6-0.0.master.20180322134320.git2ef85b5.el7.centos.noarch ovirt-engine-metrics-1.1.2.2-1.el7.centos.noarch ovirt-engine-restapi-4.2.2.5-1.el7.centos.noarch ovirt-engine-sdk-python-3.6.9.1-1.el7.noarch ovirt-engine-setup-4.2.2.5-1.el7.centos.noarch ovirt-engine-setup-base-4.2.2.5-1.el7.centos.noarch ovirt-engine-setup-plugin-ovirt-engine-4.2.2.5-1.el7.centos.noarch ovirt-engine-setup-plugin-ovirt-engine-common-4.2.2.5-1.el7.centos.noarch ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.2.2.5-1.el7.centos.noarch ovirt-engine-setup-plugin-websocket-proxy-4.2.2.5-1.el7.centos.noarch ovirt-engine-tools-4.2.2.5-1.el7.centos.noarch ovirt-engine-tools-backup-4.2.2.5-1.el7.centos.noarch ovirt-engine-vmconsole-proxy-helper-4.2.2.5-1.el7.centos.noarch ovirt-engine-webadmin-portal-4.2.2.5-1.el7.centos.noarch ovirt-engine-websocket-proxy-4.2.2.5-1.el7.centos.noarch ovirt-engine-wildfly-11.0.0-1.el7.centos.x86_64 
ovirt-engine-wildfly-overlay-11.0.1-1.el7.centos.noarch ovirt-host-deploy-1.7.3-1.el7.centos.noarch ovirt-host-deploy-java-1.7.3-1.el7.centos.noarch ovirt-imageio-common-1.3.0-0.201804031158.git8c388d1.el7.centos.noarch ovirt-imageio-proxy-1.3.0-0.201804031158.git8c388d1.el7.centos.noarch ovirt-imageio-proxy-setup-1.3.0-0.201804031158.git8c388d1.el7.centos.noarch ovirt-iso-uploader-4.2.0-1.el7.centos.noarch ovirt-js-dependencies-1.2.0-3.1.el7.centos.noarch ovirt-provider-ovn-1.2.5-1.el7.centos.noarch ovirt-release42-pre-4.2.2-0.5.rc5.20180320231726.git716ab35.el7.centos.noarch ovirt-setup-lib-1.1.4-1.el7.centos.noarch ovirt-vmconsole-1.0.4-1.el7.noarch ovirt-vmconsole-proxy-1.0.4-1.el7.noarch ovirt-web-ui-1.3.5-1.el7.centos.noarch python-ovirt-engine-sdk4-4.2.4-2.el7.centos.x86_64

On node: cockpit-ovirt-dashboard-0.11.20-1.el7.centos.noarch ovirt-engine-sdk-python-3.6.9.2-0.1.20180209.gite99bbd1.el7.centos.noarch ovirt-host-4.2.3-0.0.master.20180314072625.gitb93bc6a.el7.centos.x86_64 ovirt-host-dependencies-4.2.3-0.0.master.20180314072625.gitb93bc6a.el7.centos.x86_64 ovirt-host-deploy-1.7.4-0.0.master.20180313171951.git3441821.el7.centos.noarch ovirt-hosted-engine-ha-2.2.10-1.el7.centos.noarch ovirt-hosted-engine-setup-2.2.16-1.el7.centos.noarch ovirt-imageio-common-1.3.0-0.201804031158.git8c388d1.el7.centos.noarch ovirt-imageio-daemon-1.3.0-0.201804031158.git8c388d1.el7.centos.noarch ovirt-provider-ovn-driver-1.2.10-0.20180314082503.gitb7e43f0.el7.centos.noarch ovirt-release42-pre-4.2.2-3.el7.centos.noarch ovirt-setup-lib-1.1.5-0.0.master.20180219145311.gitdee3d31.el7.centos.noarch ovirt-vmconsole-1.0.5-0.0.master.20180215132524.gitf24a817.el7.centos.noarch ovirt-vmconsole-host-1.0.5-0.0.master.20180215132524.gitf24a817.el7.centos.noarch python-ovirt-engine-sdk4-4.2.4-2.20180316gita0f4e48.el7.centos.x86_64 vdsm-4.20.23-12.gited79797.el7.centos.x86_64 vdsm-api-4.20.23-12.gited79797.el7.centos.noarch vdsm-client-4.20.23-12.gited79797.el7.centos.noarch vdsm-common-4.20.23-12.gited79797.el7.centos.noarch vdsm-hook-ethtool-options-4.20.23-12.gited79797.el7.centos.noarch vdsm-hook-fcoe-4.20.23-12.gited79797.el7.centos.noarch vdsm-hook-openstacknet-4.20.23-12.gited79797.el7.centos.noarch vdsm-hook-vfio-mdev-4.20.23-12.gited79797.el7.centos.noarch vdsm-hook-vhostmd-4.20.23-12.gited79797.el7.centos.noarch vdsm-hook-vmfex-dev-4.20.23-12.gited79797.el7.centos.noarch vdsm-http-4.20.23-12.gited79797.el7.centos.noarch vdsm-jsonrpc-4.20.23-12.gited79797.el7.centos.noarch vdsm-network-4.20.23-12.gited79797.el7.centos.x86_64 vdsm-python-4.20.23-12.gited79797.el7.centos.noarch vdsm-yajsonrpc-4.20.23-12.gited79797.el7.centos.noarch

(Originally by Richard Jones)

Created attachment 1419907 [details]
vdsm.log
(Originally by Richard Jones)
Created attachment 1419908 [details]
engine.log
(Originally by Richard Jones)
"Scan disks" does not help. The lost disk is not visible anywhere in the UI. (Originally by Richard Jones) Setting priority/severity to high, since this leak unlimited amount of storage space and the user does not have any way to reclaim the space. (Originally by Nir Soffer) *** Bug 1516903 has been marked as a duplicate of this bug. *** (Originally by Idan Shaby) WARN: Bug status wasn't changed from MODIFIED to ON_QA due to the following reason: [Found non-acked flags: '{'rhevm-4.2.z': '?'}', ] For more info please contact: rhv-devops: Bug status wasn't changed from MODIFIED to ON_QA due to the following reason: [Found non-acked flags: '{'rhevm-4.2.z': '?'}', ] For more info please contact: rhv-devops (Originally by rhv-bugzilla-bot) Daniel, Does the following scenario verify the fix? 1. Upload disk using python SDK 2. Fail the upload 3. Cancel the upload through UI Should it be tested on both file and block storage types? Was it reproducible/ consistent? (In reply to Natalie Gavrielov from comment #10) > Daniel, > > Does the following scenario verify the fix? > 1. Upload disk using python SDK > 2. Fail the upload > 3. Cancel the upload through UI > > Should it be tested on both file and block storage types? > Was it reproducible/ consistent? The issue here was that the disk wasn't being removed from storage upon failure. It was reproducible and consistent for all storage types. (In reply to Daniel Erez from comment #11) > (In reply to Natalie Gavrielov from comment #10) > > Daniel, > > > > Does the following scenario verify the fix? > > 1. Upload disk using python SDK > > 2. Fail the upload > > 3. Cancel the upload through UI > > > > Should it be tested on both file and block storage types? > > Was it reproducible/ consistent? > > The issue here was that the disk wasn't being removed from storage upon > failure. It was reproducible and consistent for all storage types. So steps 1-2 are sufficient? 
(assuming that the expected result is for the disk to be removed from the storage) (In reply to Natalie Gavrielov from comment #12) > (In reply to Daniel Erez from comment #11) > > (In reply to Natalie Gavrielov from comment #10) > > > Daniel, > > > > > > Does the following scenario verify the fix? > > > 1. Upload disk using python SDK > > > 2. Fail the upload > > > 3. Cancel the upload through UI > > > > > > Should it be tested on both file and block storage types? > > > Was it reproducible/ consistent? > > > > The issue here was that the disk wasn't being removed from storage upon > > failure. It was reproducible and consistent for all storage types. > > So steps 1-2 are sufficient? > (assuming that the expected result is for the disk to be removed from the > storage) iiuc, the disk is in paused state after step 2, so you should also invoke cancel (from api or UI) to initiate the failure flow. Verified, ovirt-engine-4.2.4.2-0.1.el7_3.noarch Performed scenario described in comment 10, now the disk is removed. (In reply to Natalie Gavrielov from comment #14) > Verified, ovirt-engine-4.2.4.2-0.1.el7_3.noarch > Performed scenario described in comment 10, now the disk is removed. Note: tested on both file and block storage types. Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2018:2071 BZ<2>Jira Resync |
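For context, the fixed behavior can be illustrated with a small self-contained Python model of the upload/fail/cancel flow described above. This is only an illustration of the bug and its fix, not actual ovirt-engine code; all class, method, and disk names here are invented:

```python
from enum import Enum


class Phase(Enum):
    TRANSFERRING = "transferring"
    PAUSED_SYSTEM = "paused_system"  # upload failed; waiting for user action
    FINISHED_FAILURE = "finished_failure"


class Engine:
    """Toy model of the engine's view of an image upload (invented names)."""

    def __init__(self):
        self.db_disks = set()         # disks the engine tracks in its database
        self.storage_volumes = set()  # volumes actually allocated on storage

    def start_upload(self, disk_id):
        self.db_disks.add(disk_id)
        self.storage_volumes.add(disk_id)
        return Phase.TRANSFERRING

    def fail_upload(self, disk_id):
        # A failed upload leaves the transfer paused, awaiting user action
        # (which is why comment 13 says cancel must also be invoked).
        return Phase.PAUSED_SYSTEM

    def cancel_upload(self, disk_id, fixed):
        # Before the fix, cancelling a paused upload removed only the
        # engine's database record, leaking the volume on storage.
        self.db_disks.discard(disk_id)
        if fixed:  # ovirt-engine-4.2.4: also remove the disk from storage
            self.storage_volumes.discard(disk_id)
        return Phase.FINISHED_FAILURE


# Pre-fix behavior: the volume is leaked and invisible in the UI.
buggy = Engine()
buggy.start_upload("disk-1")
buggy.fail_upload("disk-1")
buggy.cancel_upload("disk-1", fixed=False)
leaked = buggy.storage_volumes - buggy.db_disks  # unreclaimable space

# Post-fix behavior: storage and the engine database agree again.
ok = Engine()
ok.start_upload("disk-1")
ok.fail_upload("disk-1")
ok.cancel_upload("disk-1", fixed=True)
```

The model shows why the severity was set to high: each failed-and-cancelled upload left one untracked volume behind, so repeated failures could consume unbounded storage with no way to reclaim it from the engine.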