Bug 1440198 - disk not deleted after failed move
Summary: disk not deleted after failed move
Keywords:
Status: CLOSED DUPLICATE of bug 1434105
Alias: None
Product: vdsm
Classification: oVirt
Component: Gluster
Version: 4.19.11
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: sankarshan
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-04-07 14:28 UTC by bill.james@j2.com
Modified: 2017-04-17 07:11 UTC
CC: 4 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2017-04-17 07:11:57 UTC
oVirt Team: Gluster
Embargoed:


Attachments (Terms of Use)
vdsm.log (972.90 KB, application/x-gzip)
2017-04-07 14:28 UTC, bill.james@j2.com

Description bill.james@j2.com 2017-04-07 14:28:38 UTC
Created attachment 1269784 [details]
vdsm.log

Description of problem:
When I move a disk belonging to a VM that is running on the same server as the
storage, the move fails.
When I move a disk belonging to a VM running on a different host, it works.
After the move fails, the disk is left behind in the new location, so the move
cannot be retried.

Version-Release number of selected component (if applicable):
 ovirt-engine-tools-4.1.0.4-1.el7.centos.noarch
vdsm-4.19.4-1.el7.centos.x86_64

How reproducible:


Steps to Reproduce:
1. Create local NFS domain on server
2. Create gluster domain with server as one source (replica 3 arbiter 1)
3. Create VM with disk on NFS domain.
4. Move disk from NFS domain to Gluster domain.

Actual results:
2017-04-06 13:31:00,588 ERROR (jsonrpc/6) [virt.vm]
(vmId='e598485a-dc74-43f7-8447-e00ac44dae21') Unable to start
replication for vda to {u'domainID':.... Permission denied

When the move is retried, the log shows:
2017-04-06 13:49:27,197 INFO  (jsonrpc/1) [dispatcher] Run and protect:
getAllTasksStatuses, Return response: {'allTasksStatus':
{'078d962c-e682-40f9-a177-2a8b479a7d8b': {'code': 212,
'message': 'Volume already exists',

Expected results:
The move would succeed. If it fails, the partially copied disk would be cleaned
up so the move can be retried.

Additional info:
listing of failed disk on gluster volume:
[root@ovirt1 test images]# ls -lhZa /rhev/data-center/mnt/glusterSD/ovirt1-ks.test.j2noc.com:_gv2/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/33db5688-dafe-40ab-9dd0-a826a90c3793
drwxr-xr-x vdsm kvm ?                                .
drwxr-xr-x vdsm kvm ?                                ..
-rw-rw---- vdsm kvm ?                                33c04305-efbe-418a-b42c-07f5f76214f2
-rw-rw---- vdsm kvm ?                                33c04305-efbe-418a-b42c-07f5f76214f2.lease
-rw-r--r-- vdsm kvm ?                                33c04305-efbe-418a-b42c-07f5f76214f2.meta
-rw-rw---- vdsm kvm ?                                38de110d-464c-4735-97ba-3d623ee1a1b6
-rw-rw---- vdsm kvm ?                                38de110d-464c-4735-97ba-3d623ee1a1b6.lease
-rw-r--r-- vdsm kvm ?                                38de110d-464c-4735-97ba-3d623ee1a1b6.meta
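The retry fails with "Volume already exists" (code 212) because these destination files survive the failed move. As a workaround sketch (not from the bug report itself; the image path is a placeholder modeled on the listing above), the leftover image directory on the destination domain could be removed before retrying. The sketch below exercises the cleanup against a throwaway temporary directory rather than a live gluster mount:

```shell
#!/bin/sh
# Sketch: remove a leftover image directory after a failed disk move so the
# move can be retried. In a real cluster IMG_DIR would be something like
# /rhev/data-center/mnt/glusterSD/<host>:_<vol>/<sd-uuid>/images/<img-uuid>
# (a hypothetical path; confirm the image UUID in engine/vdsm logs first,
# and only delete when you are sure no VM is using the image).
IMG_DIR=$(mktemp -d)

# Simulate the leftover volume payload, lease, and metadata files.
touch "$IMG_DIR/vol-uuid" "$IMG_DIR/vol-uuid.lease" "$IMG_DIR/vol-uuid.meta"

# Inspect before deleting, mirroring the ls in the bug report.
ls -la "$IMG_DIR"

# Remove the whole image directory left behind by the failed move.
rm -rf "$IMG_DIR"
[ ! -d "$IMG_DIR" ] && echo "leftover image removed"
```

After such a cleanup the "Volume already exists" task error should no longer block a retry, though the underlying "Permission denied" failure would still need to be resolved first.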

see ovirt-users thread "moving disk from one storage domain to another"

Comment 1 Yaniv Lavi 2017-04-12 08:15:16 UTC
Seems related to HCI, moving to Gluster.

Comment 2 Sahina Bose 2017-04-17 07:11:57 UTC
Seems like a dupe of Bug 1434105. Closing this; please re-open if the scenario is different.

*** This bug has been marked as a duplicate of bug 1434105 ***

