Bug 1438506 - [downstream clone - 4.2.0] while deleting vms created from a template, vdsm command fails with error VDSM command failed: Could not remove all image's volumes
Summary: [downstream clone - 4.2.0] while deleting vms created from a template, vdsm c...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: high
Target Milestone: ovirt-4.2.0
Assignee: Adam Litke
QA Contact: RamaKasturi
URL:
Whiteboard:
Depends On: 1342550
Blocks: 1502213
 
Reported: 2017-04-03 15:05 UTC by rhev-integ
Modified: 2019-05-16 13:03 UTC
CC: 14 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1342550
: 1502213 (view as bug list)
Environment:
Last Closed: 2018-05-15 17:51:25 UTC
oVirt Team: Storage
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2018:1489 0 None None None 2018-05-15 17:53:40 UTC

Description rhev-integ 2017-04-03 15:05:31 UTC
+++ This bug is an upstream to downstream clone. The original bug is: +++
+++   bug 1342550 +++
======================================================================

Description of problem:
I have a hyperconverged (HC) setup running all of my VMs on Gluster storage. When I delete all the VMs from the UI, the vdsm command fails with the error "VDSM command failed: Could not remove all image's volumes", and a traceback appears in the vdsm logs.

jsonrpc.Executor/6::DEBUG::2016-06-03 18:50:15,177::fileSD::410::Storage.StorageDomain::(_deleteVolumeFile) Removing file: /rhev/data-center/mnt/glusterSD/10.70.34.35:_data/e543b4a3-3a65-419d-b9cc-810c3f580fad/images/_remove_me_0769365e-2aed-4a6e-909e-79587d82ab9a/53fe90be-4ce7-4b25-a2da-e4b8137eac87
jsonrpc.Executor/6::DEBUG::2016-06-03 18:50:16,389::fileSD::410::Storage.StorageDomain::(_deleteVolumeFile) Removing file: /rhev/data-center/mnt/glusterSD/10.70.34.35:_data/e543b4a3-3a65-419d-b9cc-810c3f580fad/images/_remove_me_0769365e-2aed-4a6e-909e-79587d82ab9a/53fe90be-4ce7-4b25-a2da-e4b8137eac87.meta
jsonrpc.Executor/6::WARNING::2016-06-03 18:50:16,396::fileSD::415::Storage.StorageDomain::(_deleteVolumeFile) File u'/rhev/data-center/mnt/glusterSD/10.70.34.35:_data/e543b4a3-3a65-419d-b9cc-810c3f580fad/images/_remove_me_0769365e-2aed-4a6e-909e-79587d82ab9a/53fe90be-4ce7-4b25-a2da-e4b8137eac87.meta' does not exist: [Errno 2] No such file or directory
jsonrpc.Executor/6::DEBUG::2016-06-03 18:50:16,397::fileSD::410::Storage.StorageDomain::(_deleteVolumeFile) Removing file: /rhev/data-center/mnt/glusterSD/10.70.34.35:_data/e543b4a3-3a65-419d-b9cc-810c3f580fad/images/_remove_me_0769365e-2aed-4a6e-909e-79587d82ab9a/53fe90be-4ce7-4b25-a2da-e4b8137eac87.lease
jsonrpc.Executor/6::WARNING::2016-06-03 18:50:16,399::fileSD::415::Storage.StorageDomain::(_deleteVolumeFile) File u'/rhev/data-center/mnt/glusterSD/10.70.34.35:_data/e543b4a3-3a65-419d-b9cc-810c3f580fad/images/_remove_me_0769365e-2aed-4a6e-909e-79587d82ab9a/53fe90be-4ce7-4b25-a2da-e4b8137eac87.lease' does not exist: [Errno 2] No such file or directory
jsonrpc.Executor/6::DEBUG::2016-06-03 18:50:16,400::fileSD::402::Storage.StorageDomain::(deleteImage) Removing directory: /rhev/data-center/mnt/glusterSD/10.70.34.35:_data/e543b4a3-3a65-419d-b9cc-810c3f580fad/images/_remove_me_0769365e-2aed-4a6e-909e-79587d82ab9a
jsonrpc.Executor/6::ERROR::2016-06-03 18:50:16,402::fileSD::406::Storage.StorageDomain::(deleteImage) removed image dir: /rhev/data-center/mnt/glusterSD/10.70.34.35:_data/e543b4a3-3a65-419d-b9cc-810c3f580fad/images/_remove_me_0769365e-2aed-4a6e-909e-79587d82ab9a can't be removed
jsonrpc.Executor/6::ERROR::2016-06-03 18:50:16,402::task::866::Storage.TaskManager.Task::(_setError) Task=`08e421a9-9cb0-45b1-ad1f-fb65ec9f4b9f`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 873, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 49, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 1544, in deleteImage
    pool.deleteImage(dom, imgUUID, volsByImg)
  File "/usr/share/vdsm/storage/securable.py", line 77, in wrapper
    return method(self, *args, **kwargs)
  File "/usr/share/vdsm/storage/sp.py", line 1982, in deleteImage
    domain.deleteImage(domain.sdUUID, imgUUID, volsByImg)
  File "/usr/share/vdsm/storage/fileSD.py", line 407, in deleteImage
    raise se.ImageDeleteError("%s %s" % (imgUUID, str(e)))
ImageDeleteError: Could not remove all image's volumes: (u'0769365e-2aed-4a6e-909e-79587d82ab9a [Errno 2] No such file or directory',)


No functional impact observed; the VMs are actually deleted.
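The traceback shows _deleteVolumeFile warning on [Errno 2] for files that are already gone (expected on Gluster, where a retry or another code path may have unlinked them first), and deleteImage then failing anyway. One common fix pattern is to make removal idempotent by treating a missing file as success. A minimal sketch of that pattern (remove_if_exists is a hypothetical helper for illustration, not the actual vdsm patch):

```python
import errno
import os


def remove_if_exists(path):
    """Remove path, treating an already-missing file as success.

    Returns True if the file was present and removed, False if it was
    already gone. Any error other than ENOENT is still raised, so real
    failures (e.g. EACCES) are not silently swallowed.
    """
    try:
        os.remove(path)
        return True
    except OSError as e:
        if e.errno == errno.ENOENT:
            return False  # already gone: nothing left to do
        raise
```

With this pattern, deleting a volume's .meta and .lease files twice (or racing another deleter) no longer turns into an ImageDeleteError.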

Version-Release number of selected component (if applicable):
vdsm-4.17.29-0.1.el7ev.noarch

How reproducible:
Hit it once

Steps to Reproduce:
1. Install HC setup
2. Create 9 VMs from a template.
3. Delete all the VMs created.

Actual results:
All the VMs get deleted, but the event message "Could not remove all image's volumes" is raised and a traceback appears in the vdsm logs.

Expected results:
VDSM should delete the VMs without reporting any errors, failures, or tracebacks.

Additional info:

(Originally by Kasturi Narra)

Comment 1 rhev-integ 2017-04-03 15:05:41 UTC
sosreports from all nodes and engine can be found in the link below:

http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/HC/1342550/

(Originally by Kasturi Narra)

Comment 3 rhev-integ 2017-04-03 15:05:47 UTC
Kasturi, is this issue consistently reproducible?

(Originally by Sahina Bose)

Comment 4 rhev-integ 2017-04-03 15:05:54 UTC
(In reply to Sahina Bose from comment #2)
> Kasturi, is this issue consistently reproducible?

I am able to reproduce this consistently, and I am seeing this issue with 4.0 too.

(Originally by Kasturi Narra)

Comment 14 Allon Mureinik 2017-10-01 10:36:38 UTC
Adam, the BZ this depends on, bz1342550, is in ON_QA.

Is there any additional AI here, or should this be ON_QA too?

Comment 15 Adam Litke 2017-10-02 12:52:27 UTC
Nothing additional. The fix from bz1342550 will make it into 4.2 by itself.

Comment 19 Adam Litke 2017-10-10 13:23:25 UTC
From my understanding, we need another clone for 4.1, as this one will already serve as the base for the 4.2 erratum. I have already provided a backport here[1]. I just need to wait for the bug to be cloned to 4.1 so I can include the proper bug URL in the patch.

Comment 25 RHV bug bot 2017-12-06 16:18:19 UTC
INFO: Bug status wasn't changed from MODIFIED to ON_QA due to the following reason:

[No external trackers attached]

For more info please contact: rhv-devops

Comment 26 RHV bug bot 2017-12-12 21:16:58 UTC
INFO: Bug status wasn't changed from MODIFIED to ON_QA due to the following reason:

[No external trackers attached]

For more info please contact: rhv-devops

Comment 27 RHV bug bot 2017-12-18 17:06:27 UTC
INFO: Bug status (ON_QA) wasn't changed but the following should be fixed:

[No external trackers attached]

For more info please contact: rhv-devops

Comment 32 errata-xmlrpc 2018-05-15 17:51:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:1489

Comment 33 Franta Kust 2019-05-16 13:03:45 UTC
BZ<2>Jira Resync

