Bug 916554 - Failed to remove VMs from the storage domain
Summary: Failed to remove VMs from the storage domain
Keywords:
Status: CLOSED DUPLICATE of bug 884635
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
Version: 3.1.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: unspecified
Target Milestone: ---
Target Release: 3.2.0
Assignee: Ayal Baron
QA Contact:
URL:
Whiteboard: storage
Depends On:
Blocks:
 
Reported: 2013-02-28 10:11 UTC by spandura
Modified: 2016-02-10 17:23 UTC
CC: 12 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-03-14 15:57:34 UTC
oVirt Team: Storage
Target Upstream Version:
Embargoed:



Description spandura 2013-02-28 10:11:46 UTC
Description of problem:
========================
Unable to remove VMs from the storage domain after hitting an ENOSPACE error while trying to add additional disks to the VMs.

Version-Release number of selected component (if applicable):
=============================================================
[root@rhs-gp-srv6 ~]# glusterfs --version
glusterfs 3.3.0.5rhs built on Feb 19 2013 05:31:58
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.

[root@rhs-gp-srv6 ~]# rpm -qa | grep gluster
glusterfs-3.3.0.5rhs-43.el6rhs.x86_64
glusterfs-devel-3.3.0.5rhs-43.el6rhs.x86_64
glusterfs-rdma-3.3.0.5rhs-43.el6rhs.x86_64
glusterfs-fuse-3.3.0.5rhs-43.el6rhs.x86_64

How reproducible:
=================
Intermittent

Steps to Reproduce:
===================
1. Create a 6 x 2 distributed-replicated volume using 4 RHS storage nodes with 3 bricks per node (each brick having 600GB).

2. Create a storage domain using the volume created above.

3. Create VMs and start installing them.

4. Add extra disks to the VMs so that the total size of the extra disks exceeds the space available in the volume.

(Disk configuration of each VM: one 150GB thin-provisioned disk1 (root and home) plus three newly added 200GB disks.)

Actual results:
==============
1. Add-Disks fails with ENOSPACE.

2. The VM installations hang and the VMs are moved to the paused state.

3. Power off all the VMs and try to remove them: removing the VMs fails.

Expected results:
===================
Removing the VMs should succeed.

Additional info:
===================

Thread-73174::ERROR::2013-02-28 08:53:52,206::hsm::1431::Storage.HSM::(deleteImage) Empty or not found image b083f8d1-f64c-4891-ad05-2f3cef138a59 in SD 2b63324a-7add-4268-83fe-13908074d67e. {'d24a8909-c327-4283-a81b-95c95d6dfbea': ImgsPar(imgs=('d1021940-6b92-4f1d-a20b-1ef738e5549a',), parent=None), '8950aae4-a1d7-465b-900a-c78db4e4dbc5': ImgsPar(imgs=('eb8d7251-ff7e-4a29-bbc5-58a8c7180315',), parent=None), '1e9e51f9-a064-4942-ab60-ab2a3d0ca14f': ImgsPar(imgs=('261c1a19-06c6-43d6-a22a-7ed1435662fa',), parent=None), '4698aa1d-e7e9-4919-b511-688682822c56': ImgsPar(imgs=('cbe6edf3-f9d9-40eb-8565-0d307f4eead8',), parent=None), '18f83b05-0da5-4ff9-b60a-30e87a7060d3': ImgsPar(imgs=('e2990d1c-739c-43aa-89b7-8f0baf08c29f',), parent=None), '8583a923-bb1a-487b-abb9-2c75fbf33bb5': ImgsPar(imgs=('83dec3e6-3943-419a-b105-f66bb59dc168',), parent=None), 'a4c2efaa-7e84-4136-8c9b-0b031fd3f725': ImgsPar(imgs=('ece4eb6a-af57-4510-b30f-f2165b394cdf',), parent=None), '4ff76189-db07-472d-8d1a-e5aa4bfb89eb': ImgsPar(imgs=('ba78bb69-096b-4278-bd0a-fc019a0e1c0d',), parent=None), '2c4044d0-d37c-441d-8614-6e89797d2b15': ImgsPar(imgs=('87f5d774-2f22-451b-8c40-ffa03ba914a0',), parent=None), '10d6167f-93de-406c-b0fb-17c41a7c9f28': ImgsPar(imgs=('640bc4a8-8cfa-4fa4-acd1-e6e121c10c59',), parent=None), '863f977d-8b51-4b4e-96e1-a575aa8d1401': ImgsPar(imgs=('6b9e049a-86fc-4372-8348-bb07c6ca4d35',), parent=None), 'be18cda9-5063-41b8-a3e2-6c7e3e3419b5': ImgsPar(imgs=('a70958ca-d76f-40a0-b332-4dbe8996f20c',), parent=None), 'e7ec3e7d-30e4-4273-b525-0904530933f9': ImgsPar(imgs=('53a8d3fb-e3a6-4a7f-b20f-c49952a44683',), parent=None), 'a92e516d-448e-43eb-99db-2824c29a51d8': ImgsPar(imgs=('df7b8e46-9bd3-4489-baff-da2c7d738043',), parent=None), 'ed9a987f-6f65-4905-b3cd-0c675ee793a5': ImgsPar(imgs=('0c4aa9b5-922f-416e-8b6c-de8c06097bcb',), parent=None), '50d14aaf-aabb-4d81-9f16-0f70565476a3': ImgsPar(imgs=('e090f3f2-7bb2-4e52-928c-10980bcdb379',), parent=None)}
Thread-73174::ERROR::2013-02-28 08:53:52,206::task::833::TaskManager.Task::(_setError) Task=`8d1741cb-a9b2-4665-97eb-9eeec223e24e`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 840, in _run
  File "/usr/share/vdsm/logUtils.py", line 38, in wrapper
  File "/usr/share/vdsm/storage/hsm.py", line 1432, in deleteImage
ImageDoesNotExistInSD: Image does not exist in domain: 'image=b083f8d1-f64c-4891-ad05-2f3cef138a59, domain=2b63324a-7add-4268-83fe-13908074d67e'


The image b083f8d1-f64c-4891-ad05-2f3cef138a59 corresponds to "rhs_vm2" (refer to the attached snapshot). The VM still exists.
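For reference, below is a minimal, self-contained sketch (in Python, since vdsm is Python) of the kind of check that produces the "Empty or not found image" error and the ImageDoesNotExistInSD exception seen in the log above. The names ImgsPar and ImageDoesNotExistInSD are taken from the log; the function name, logger setup, and everything else are illustrative assumptions, not the actual vdsm source.

import logging
from collections import namedtuple

log = logging.getLogger("Storage.HSM")

# ImgsPar mirrors the structure dumped in the log: per volume, its images and parent.
ImgsPar = namedtuple("ImgsPar", ("imgs", "parent"))


class ImageDoesNotExistInSD(Exception):
    def __init__(self, imgUUID, sdUUID):
        msg = ("Image does not exist in domain: "
               "'image=%s, domain=%s'" % (imgUUID, sdUUID))
        super(ImageDoesNotExistInSD, self).__init__(msg)


def delete_image(all_volumes, sd_uuid, img_uuid):
    """all_volumes maps volume UUID -> ImgsPar(imgs, parent), as in the log dump."""
    vols_of_image = [vol for vol, par in all_volumes.items()
                     if img_uuid in par.imgs]
    if not vols_of_image:
        # This is the path hit in the log: the requested image UUID is not
        # among the images the domain reports, so deletion is refused.
        log.error("Empty or not found image %s in SD %s. %s",
                  img_uuid, sd_uuid, all_volumes)
        raise ImageDoesNotExistInSD(img_uuid, sd_uuid)
    # The real code would go on to remove the image's volumes here.
    return vols_of_image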

Comment 3 Shireesh 2013-03-11 11:58:43 UTC
RHS vdsm comes into the picture only for managing gluster volumes, so I would suggest changing the product/component to RHEV-M's vdsm. (It is currently a component under the RHEL product.)

Comment 4 Bala.FA 2013-03-14 08:43:38 UTC
The useful vdsm log is in sosreport-rhs-gp-srv6.916554-20130228101254-199f.tar.xz

Comment 5 Dan Kenigsberg 2013-03-14 09:03:32 UTC
Which vdsm version is this?

Comment 7 spandura 2013-03-14 09:29:19 UTC
[root@rhs-gp-srv6 ~]# rpm -qa | grep vdsm
vdsm-cli-4.10.2-1.8.el6ev.noarch
vdsm-4.10.2-1.8.el6ev.x86_64
vdsm-hook-vhostmd-4.10.2-1.8.el6ev.noarch
vdsm-xmlrpc-4.10.2-1.8.el6ev.noarch
vdsm-reg-4.10.2-1.8.el6ev.noarch
vdsm-python-4.10.2-1.8.el6ev.x86_64

Comment 8 Maor 2013-03-14 15:57:34 UTC
When an image does not exist in the domain, the engine gets an exception from VDSM at the task creation stage and fails to remove the disk.
This can lead to a failure in the remove-VM flow; it should be fixed as part of BZ 884635.

*** This bug has been marked as a duplicate of bug 884635 ***
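As an illustration of the direction described in comment 8 (a hedged sketch only, reusing the ImageDoesNotExistInSD class from the sketch in the description; this is not the actual engine code or the BZ 884635 fix), the remove-VM flow could treat a disk whose image is already missing from the storage domain as already removed instead of failing the whole operation:

def remove_vm_disks(delete_image, disks):
    """Remove all disks of a VM.

    delete_image is a callable taking (sd_uuid, img_uuid) that raises
    ImageDoesNotExistInSD when the image is already gone from the domain.
    """
    failures = []
    for disk in disks:
        try:
            delete_image(disk["sd_uuid"], disk["img_uuid"])
        except ImageDoesNotExistInSD:
            # The image is already absent from the domain, so there is
            # nothing left to clean up; do not block VM removal on it.
            continue
        except Exception as err:
            failures.append((disk["img_uuid"], err))
    return failures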

