Bug 1502213 - [downstream clone - 4.1.7] [downstream clone - 4.2.0] while deleting vms created from a template, vdsm command fails with error VDSM command failed: Could not remove all image's volumes
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: high
Target Milestone: ovirt-4.1.7
Target Release: ---
Assignee: Adam Litke
QA Contact: RamaKasturi
URL:
Whiteboard:
Depends On: 1438506
Blocks:
 
Reported: 2017-10-15 10:25 UTC by rhev-integ
Modified: 2021-05-01 16:22 UTC (History)
17 users (show)

Fixed In Version: v4.19.34
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1438506
Environment:
Last Closed: 2017-11-07 17:29:21 UTC
oVirt Team: Storage
Target Upstream Version:
Embargoed:
lsvaty: testing_plan_complete-


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2017:3139 0 normal SHIPPED_LIVE VDSM bug fix and enhancement update 4.1.7 2017-11-07 22:22:40 UTC
oVirt gerrit 82629 0 'None' MERGED fileSD: Gracefully handle purgeImage delete race 2020-07-03 00:43:35 UTC

Description rhev-integ 2017-10-15 10:25:38 UTC
+++ This bug is a downstream clone. The original bug is: +++
+++   bug 1438506 +++
======================================================================

+++ This bug is an upstream to downstream clone. The original bug is: +++
+++   bug 1342550 +++
======================================================================

Description of problem:
I have a hyperconverged (HC) setup running all my VMs on Gluster storage. When I delete all the VMs from the UI, the VDSM command fails with the error "VDSM command failed: Could not remove all image's volumes", and a traceback appears in the vdsm logs.

jsonrpc.Executor/6::DEBUG::2016-06-03 18:50:15,177::fileSD::410::Storage.StorageDomain::(_deleteVolumeFile) Removing file: /rhev/data-center/mnt/glusterSD/10.70.34.35:_data/e543b4a3-3a65-419d-b9cc-810c3f580fad/images/_remove_me_0769365e-2aed-4a6e-909e-79587d82ab9a/53fe90be-4ce7-4b25-a2da-e4b8137eac87
jsonrpc.Executor/6::DEBUG::2016-06-03 18:50:16,389::fileSD::410::Storage.StorageDomain::(_deleteVolumeFile) Removing file: /rhev/data-center/mnt/glusterSD/10.70.34.35:_data/e543b4a3-3a65-419d-b9cc-810c3f580fad/images/_remove_me_0769365e-2aed-4a6e-909e-79587d82ab9a/53fe90be-4ce7-4b25-a2da-e4b8137eac87.meta
jsonrpc.Executor/6::WARNING::2016-06-03 18:50:16,396::fileSD::415::Storage.StorageDomain::(_deleteVolumeFile) File u'/rhev/data-center/mnt/glusterSD/10.70.34.35:_data/e543b4a3-3a65-419d-b9cc-810c3f580fad/images/_remove_me_0769365e-2aed-4a6e-909e-79587d82ab9a/53fe90be-4ce7-4b25-a2da-e4b8137eac87.meta' does not exist: [Errno 2] No such file or directory
jsonrpc.Executor/6::DEBUG::2016-06-03 18:50:16,397::fileSD::410::Storage.StorageDomain::(_deleteVolumeFile) Removing file: /rhev/data-center/mnt/glusterSD/10.70.34.35:_data/e543b4a3-3a65-419d-b9cc-810c3f580fad/images/_remove_me_0769365e-2aed-4a6e-909e-79587d82ab9a/53fe90be-4ce7-4b25-a2da-e4b8137eac87.lease
jsonrpc.Executor/6::WARNING::2016-06-03 18:50:16,399::fileSD::415::Storage.StorageDomain::(_deleteVolumeFile) File u'/rhev/data-center/mnt/glusterSD/10.70.34.35:_data/e543b4a3-3a65-419d-b9cc-810c3f580fad/images/_remove_me_0769365e-2aed-4a6e-909e-79587d82ab9a/53fe90be-4ce7-4b25-a2da-e4b8137eac87.lease' does not exist: [Errno 2] No such file or directory
jsonrpc.Executor/6::DEBUG::2016-06-03 18:50:16,400::fileSD::402::Storage.StorageDomain::(deleteImage) Removing directory: /rhev/data-center/mnt/glusterSD/10.70.34.35:_data/e543b4a3-3a65-419d-b9cc-810c3f580fad/images/_remove_me_0769365e-2aed-4a6e-909e-79587d82ab9a
jsonrpc.Executor/6::ERROR::2016-06-03 18:50:16,402::fileSD::406::Storage.StorageDomain::(deleteImage) removed image dir: /rhev/data-center/mnt/glusterSD/10.70.34.35:_data/e543b4a3-3a65-419d-b9cc-810c3f580fad/images/_remove_me_0769365e-2aed-4a6e-909e-79587d82ab9a can't be removed
jsonrpc.Executor/6::ERROR::2016-06-03 18:50:16,402::task::866::Storage.TaskManager.Task::(_setError) Task=`08e421a9-9cb0-45b1-ad1f-fb65ec9f4b9f`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 873, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 49, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 1544, in deleteImage
    pool.deleteImage(dom, imgUUID, volsByImg)
  File "/usr/share/vdsm/storage/securable.py", line 77, in wrapper
    return method(self, *args, **kwargs)
  File "/usr/share/vdsm/storage/sp.py", line 1982, in deleteImage
    domain.deleteImage(domain.sdUUID, imgUUID, volsByImg)
  File "/usr/share/vdsm/storage/fileSD.py", line 407, in deleteImage
    raise se.ImageDeleteError("%s %s" % (imgUUID, str(e)))
ImageDeleteError: Could not remove all image's volumes: (u'0769365e-2aed-4a6e-909e-79587d82ab9a [Errno 2] No such file or directory',)


I do not see any functional impact.

Version-Release number of selected component (if applicable):
vdsm-4.17.29-0.1.el7ev.noarch

How reproducible:
Hit it once

Steps to Reproduce:
1. Install an HC setup.
2. Create 9 VMs from a template.
3. Delete all the created VMs.

Actual results:
All the VMs get deleted, but an event message "Could not remove all image's volumes" is raised and a traceback appears in the vdsm logs.

Expected results:
VDSM should not report any errors, failures, or tracebacks.

Additional info:

(Originally by Kasturi Narra)

(Originally by rhev-integ)

Comment 1 rhev-integ 2017-10-15 10:25:47 UTC
sosreports from all nodes and engine can be found in the link below:

http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/HC/1342550/

(Originally by Kasturi Narra)

(Originally by rhev-integ)

Comment 4 rhev-integ 2017-10-15 10:25:59 UTC
Kasturi, is this issue consistently reproducible?

(Originally by Sahina Bose)

(Originally by rhev-integ)

Comment 5 rhev-integ 2017-10-15 10:26:06 UTC
(In reply to Sahina Bose from comment #2)
> Kasturi, is this issue consistently reproducible?

I am able to reproduce this consistently. I am seeing this issue with 4.0 too.

(Originally by Kasturi Narra)

(Originally by rhev-integ)

Comment 15 rhev-integ 2017-10-15 10:27:07 UTC
Adam, the BZ this depends on, bz1342550, is ON_QA.

Is there any additional AI here, or should this be ON_QA too?

(Originally by Allon Mureinik)

Comment 16 rhev-integ 2017-10-15 10:27:13 UTC
Nothing additional.  The fix from bz1342550 will make it into 4.2 all by itself.

(Originally by Adam Litke)

Comment 20 rhev-integ 2017-10-15 10:27:36 UTC
From my understanding we need another clone for 4.1 as this one will already serve as a base for the 4.2 Erratum.  I have already provided a backport here[1].  I just need to wait for the bug to be cloned to 4.1 so I can include the proper bug url in the patch.

(Originally by Adam Litke)

Comment 25 RamaKasturi 2017-10-26 09:24:18 UTC
Hi,

  Can you please put in the Fixed In Version (FIV) for this bug?

Thanks
kasturi

Comment 27 RamaKasturi 2017-10-26 13:44:50 UTC
Fixed In Version

Comment 28 RamaKasturi 2017-10-31 13:07:37 UTC
Verified and works fine with build vdsm-4.19.35-1.el7ev.x86_64.

Created 29 VMs and deleted them. In the Events tab I do not see any failure about being unable to remove VM images, and no tracebacks like the one reported in the bug description.

But I do see another traceback, for which I have asked needinfo from Adam in the other bug: https://bugzilla.redhat.com/show_bug.cgi?id=1342550

Comment 30 errata-xmlrpc 2017-11-07 17:29:21 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:3139

Comment 31 Daniel Gur 2019-08-28 13:11:51 UTC
sync2jira

Comment 32 Daniel Gur 2019-08-28 13:16:04 UTC
sync2jira

