Bug 733373 - [vdsm] Recovery fails in case of running removeDisk task on VM with snapshots [NEEDINFO]
Summary: [vdsm] Recovery fails in case of running removeDisk task on VM with snapshots
Keywords:
Status: CLOSED DUPLICATE of bug 677149
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: vdsm
Version: 6.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: rc
Target Release: ---
Assignee: Eduardo Warszawski
QA Contact: Jakub Libosvar
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-08-25 15:08 UTC by Jakub Libosvar
Modified: 2011-08-28 13:50 UTC
CC List: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-08-28 13:50:22 UTC
Target Upstream Version:
Embargoed:
hateya: needinfo?


Attachments
truncated vdsm log (50.56 KB, text/x-log), 2011-08-25 15:08 UTC, Jakub Libosvar

Description Jakub Libosvar 2011-08-25 15:08:51 UTC
Created attachment 519902 [details]
truncated vdsm log

Description of problem:
If vdsm is restarted while the disk removal process is running, the recovery process fails.

MainThread::DEBUG::2011-08-25 17:01:28,502::threadPool::25::Misc.ThreadPool::(__init__) Enter - numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0
MainThread::DEBUG::2011-08-25 17:01:28,508::spm::214::Storage.SPM::(__cleanupSPMLinks) cleaning links; [] ['/rhev/data-center/011ab524-7985-4051-9990-863eb8a84c0f/tasks']
MainThread::ERROR::2011-08-25 17:01:28,510::clientIF::239::vds::(_initIRS) Traceback (most recent call last): 
  File "/usr/share/vdsm/clientIF.py", line 233, in _initIRS
    self.irs = StorageDispatcher()
  File "/usr/share/vdsm/storage/dispatcher.py", line 135, in __init__
    self.hsm = hsm.HSM()
  File "/usr/share/vdsm/storage/hsm.py", line 138, in __init__
    self.spm = spm.SPM(self.taskMng)
  File "/usr/share/vdsm/storage/spm.py", line 159, in __init__
    self.__cleanupSPMLinks()
  File "/usr/share/vdsm/storage/spm.py", line 218, in __cleanupSPMLinks
    os.unlink(d)
OSError: [Errno 21] Is a directory: '/rhev/data-center/011ab524-7985-4051-9990-863eb8a84c0f/tasks'

I'm not sure whether the tasks directory should be empty, but it isn't:

[root@srh-11 ~]# ll -R /rhev/data-center/011ab524-7985-4051-9990-863eb8a84c0f/tasks
/rhev/data-center/011ab524-7985-4051-9990-863eb8a84c0f/tasks:
total 4
drwxr-xr-x. 2 vdsm kvm 4096 Aug 25 17:01 df03986e-5a95-4626-9732-20cb3ba332a1

/rhev/data-center/011ab524-7985-4051-9990-863eb8a84c0f/tasks/df03986e-5a95-4626-9732-20cb3ba332a1:
total 16
-rw-r--r--. 1 vdsm kvm  83 Aug 25 17:01 df03986e-5a95-4626-9732-20cb3ba332a1.job.0
-rw-r--r--. 1 vdsm kvm 272 Aug 25 17:01 df03986e-5a95-4626-9732-20cb3ba332a1.recover.0
-rw-r--r--. 1 vdsm kvm  62 Aug 25 17:01 df03986e-5a95-4626-9732-20cb3ba332a1.result
-rw-r--r--. 1 vdsm kvm 282 Aug 25 17:01 df03986e-5a95-4626-9732-20cb3ba332a1.task
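The traceback shows __cleanupSPMLinks() calling os.unlink() on a leftover task directory, which the kernel rejects with EISDIR. A minimal stand-alone reproduction (the paths are hypothetical stand-ins for the /rhev/data-center tasks tree, and the cleanup helper is an illustrative sketch, not the actual vdsm fix):

```python
import os
import shutil
import tempfile

# Recreate the situation: a leftover task directory where the cleanup
# code expects only files or symlinks (names are illustrative only).
base = tempfile.mkdtemp()
task_dir = os.path.join(base, "df03986e-5a95-4626-9732-20cb3ba332a1")
os.mkdir(task_dir)

caught = None
try:
    os.unlink(task_dir)  # raises OSError ("Is a directory" on Linux)
except OSError as e:
    caught = e

# A type-aware cleanup (sketch) dispatches on what the entry actually
# is instead of assuming os.unlink() can handle it:
def cleanup(path):
    if os.path.isdir(path) and not os.path.islink(path):
        shutil.rmtree(path)  # remove a leftover task directory recursively
    else:
        os.unlink(path)      # remove regular files and symlinks

cleanup(task_dir)
shutil.rmtree(base, ignore_errors=True)
```

This mirrors the recovery failure: once a directory survives under tasks/, every restart hits the same OSError and the host never comes back up.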

Version-Release number of selected component (if applicable):
vdsm-4.9-95

How reproducible:
Always

Steps to Reproduce:
1. Have a VM with 5 disks
2. Create a snapshot
3. Start the disk removal process
4. While the process is running, restart vdsm on the host
  
Actual results:
Host fails to recover

Expected results:
Host recovers

Additional info:
Host can't be used anymore.

vdsm log attached

I'm not sure whether this is a regression. At first I thought it was, but then I found that a snapshot must be present; without a snapshot the host recovers successfully.

Comment 2 Haim 2011-08-28 13:50:22 UTC

*** This bug has been marked as a duplicate of bug 677149 ***

