Bug 856203 - [engine-core] Deleting multiple floating disks while restarting the vdsm service on the SPM host leaves some disks in locked status
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
x86_64 Linux
unspecified Severity high
: ---
: 3.2.0
Assigned To: Tal Nisan
: 856135 866886
Depends On:
Reported: 2012-09-11 08:39 EDT by vvyazmin@redhat.com
Modified: 2016-02-10 12:35 EST (History)
13 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed:
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
scohen: Triaged+

Attachments (Terms of Use)
## Logs vdsm, rhevm (1.18 MB, application/x-gzip)
2012-09-11 08:39 EDT, vvyazmin@redhat.com

Description vvyazmin@redhat.com 2012-09-11 08:39:11 EDT
Created attachment 611756 [details]
## Logs vdsm, rhevm

Description of problem:
While deleting multiple floating disks and restarting the vdsm service on the SPM host, some disks get "Locked" status

Version-Release number of selected component (if applicable):
RHEVM 3.1 - SI17 

RHEVM: rhevm-3.1.0-15.el6ev.noarch 
VDSM: vdsm-4.9.6-32.0.el6_3.x86_64 
LIBVIRT: libvirt-0.9.10-21.el6_3.4.x86_64 
QEMU & KVM: qemu-kvm-rhev- 
SANLOCK: sanlock-2.3-3.el6_3.x86_64

How reproducible:

Steps to Reproduce:
1. Create iSCSI DC with 2 hosts
2. Create 12 floating disks
3. Select them all and delete them
4. During the deletion, restart the vdsm service on the SPM server (run: service vdsmd stop && service vdsmd start)
Actual results:
1. Some disks get “Locked” status
2. There is no option to delete them
3. The delete task still exists in the DB (see attached log)
4. No tasks are running on either host (run: vdsClient -s 0 getAllTasksInfo)

Expected results:
No disks left in “Locked” status
The disks can be removed (deleted)
The task is cleaned from the DB

Additional info:
In the DB “async_tasks” table, the “delete” task is still running
In VDSM (vdsClient -s 0 getAllTasksInfo): no tasks are running
In the DB “all_disks” table, the disks have imagestatus == 2 (Locked)
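The three checks above can be sketched as a single shell snippet. This is a sketch only: the `psql` connection flags and the engine DB name follow the query shown in comment 1 of this report, and may differ per setup; the `async_tasks` and `all_disks` table names are as they appear in this report.

```shell
#!/bin/sh
# Diagnostic sketch for this bug: compare engine-side task/disk state
# with vdsm-side task state after restarting vdsm on the SPM host.

# 1) Disks stuck in Locked status (imagestatus == 2) in the engine DB
psql -U postgres engine -c \
  "SELECT disk_id, disk_alias, imagestatus FROM all_disks WHERE imagestatus = 2;"

# 2) Tasks the engine still believes are running
psql -U postgres engine -c "SELECT * FROM async_tasks;"

# 3) Tasks actually known to vdsm (expected after the bug: none,
#    even though the engine DB still shows the delete task)
vdsClient -s 0 getAllTasksInfo
```

The bug is visible when check 1 returns rows and check 2 shows a running delete task, while check 3 reports no tasks on either host.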
Comment 1 vvyazmin@redhat.com 2012-09-11 08:41:52 EDT
psql -U postgres engine -c 'select disk_id,disk_alias,imagestatus  from all_disks where imagestatus = 2;'  | less -S 

               disk_id                | disk_alias | imagestatus 
 efad1726-9da3-4937-a4f2-8e9f2e9ed37b | A-02       |           2
Comment 2 vvyazmin@redhat.com 2012-09-12 03:54:57 EDT
After 15 hours, the “delete” task is still running and has not been released
Comment 3 Ayal Baron 2012-09-12 04:19:33 EDT
(In reply to comment #2)
> After 15 hours, “delete” task still running, and not released

the task reaper runs after 30 or 50 hours, I don't recall which.
Comment 4 Dafna Ron 2012-09-12 09:50:10 EDT
For dead tasks in vdsm it will run after 50-60 hours.
For engine DB clean-up it should be about 5 hours.
Comment 5 vvyazmin@redhat.com 2012-10-09 08:15:23 EDT
*** Bug 856135 has been marked as a duplicate of this bug. ***
Comment 6 vvyazmin@redhat.com 2012-10-09 08:20:42 EDT
When verifying this bug, please run the scenario from BZ856135
Comment 7 Haim 2012-10-16 06:12:37 EDT
*** Bug 866886 has been marked as a duplicate of this bug. ***
Comment 8 Tal Nisan 2013-03-11 06:51:43 EDT
Could not reproduce, this patch seems to solve it:
Comment 9 Elad 2013-03-14 10:44:50 EDT
Verified on SF10. To reproduce, vdsm was restarted during the removal of the 12 disks. The disks became 'Illegal' and I was then able to remove them.
Comment 10 Itamar Heim 2013-06-11 04:32:54 EDT
3.2 has been released
