| Summary: | [vdsm] SPM start fails as host fails to acquire resource although resource is not locked | ||
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 6 | Reporter: | Haim <hateya> |
| Component: | vdsm | Assignee: | Saggi Mizrahi <smizrahi> |
| Status: | CLOSED ERRATA | QA Contact: | Daniel Paikov <dpaikov> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | ||
| Version: | 6.2 | CC: | abaron, bazulay, hateya, iheim, ilvovsky, jlibosva, sgrinber, yeylon, ykaul |
| Target Milestone: | rc | ||
| Target Release: | --- | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | storage | ||
| Fixed In Version: | vdsm-4.9-66.el6 | Doc Type: | Bug Fix |
| Doc Text: | Story Points: | --- | |
| Clone Of: | Environment: | ||
| Last Closed: | 2011-12-06 07:17:16 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
*** Bug 688523 has been marked as a duplicate of this bug. ***

*** Bug 697757 has been marked as a duplicate of this bug. ***

Verified on vdsm-4.9-66.el6: resource is not locked.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2011-1782.html
Description of problem:
SPM start fails: the host cannot acquire the resource on the pool, with an error stating that the resource is already locked. In fact, the resource is not locked.

```
771aa94b-7d56-4dde-9650-9bb25f58a3cd::DEBUG::2011-05-01 15:38:50,609::persistentDict::208::Storage.PersistentDict::(refresh) read lines (LvMetadataRW)=['CLASS=Data', 'DESCRIPTION=PG-NEG-1', 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'MASTER_VERSION=1', 'POOL_DESCRIPTION=PG-iSCS-Neg', 'POOL_DOMAINS=99c8e8a3-949e-4c10-9078-b916471f51c6:Active', 'POOL_SPM_ID=-1', 'POOL_SPM_LVER=0', 'POOL_UUID=4004bae1-9a5d-4dd1-a13c-98f77cf58cbf', 'PV0=pv:1RH-NEG-00003,uuid:B1I5bd-CF5T-RyK7-4SFB-DA87-MwSS-LHtlcZ,pestart:0,pecount:158,mapoffset:0', 'ROLE=Master', 'SDUUID=99c8e8a3-949e-4c10-9078-b916471f51c6', 'TYPE=ISCSI', 'VERSION=0', 'VGUUID=YDd1YG-4tz9-3KBu-6sHG-r69u-s92p-ZQIBPp', '_SHA_CKSUM=31a8c82eeb41b9cd50b010927aff49c127ea5bdd']
771aa94b-7d56-4dde-9650-9bb25f58a3cd::ERROR::2011-05-01 15:38:50,610::task::855::TaskManager.Task::(_setError) Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 863, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/storage/task.py", line 300, in run
    return self.cmd(*self.argslist, **self.argsdict)
  File "/usr/share/vdsm/storage/spm.py", line 509, in start
    pool.acquireClusterLock()
  File "/usr/share/vdsm/storage/sp.py", line 200, in acquireClusterLock
    msd.acquireClusterLock(self.id)
  File "/usr/share/vdsm/storage/sd.py", line 363, in acquireClusterLock
    self._clusterLock.acquire(hostID)
  File "/usr/share/vdsm/storage/safelease.py", line 53, in acquire
    raise se.DomainAlreadyLocked(self._sdUUID)
DomainAlreadyLocked: Cannot acquire lock, resource marked as locked: ('99c8e8a3-949e-4c10-9078-b916471f51c6',)
```

metadata:
```
1+0 records in
1+0 records out
2048 bytes (2.0 kB) copied, 0.00224396 s, 913 kB/s
CLASS=Data
DESCRIPTION=PG-NEG-1
IOOPTIMEOUTSEC=10
LEASERETRIES=3
LEASETIMESEC=60
LOCKPOLICY=
LOCKRENEWALINTERVALSEC=5
MASTER_VERSION=1
POOL_DESCRIPTION=PG-iSCS-Neg
POOL_DOMAINS=99c8e8a3-949e-4c10-9078-b916471f51c6:Active
POOL_SPM_ID=-1
POOL_SPM_LVER=0
POOL_UUID=4004bae1-9a5d-4dd1-a13c-98f77cf58cbf
PV0=pv:1RH-NEG-00003,uuid:B1I5bd-CF5T-RyK7-4SFB-DA87-MwSS-LHtlcZ,pestart:0,pecount:158,mapoffset:0
ROLE=Master
SDUUID=99c8e8a3-949e-4c10-9078-b916471f51c6
TYPE=ISCSI
VERSION=0
VGUUID=YDd1YG-4tz9-3KBu-6sHG-r69u-s92p-ZQIBPp
_SHA_CKSUM=31a8c82eeb41b9cd50b010927aff49c127ea5bdd
```

repro steps:
- used command `dmsetup remove_all`
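The traceback shows `safelease.py` raising `DomainAlreadyLocked` even though no host holds the on-disk lease, which is consistent with a local "busy" flag that was never cleared. A minimal sketch of how such a guard can go stale; the class and method names below are illustrative assumptions, not vdsm's actual code:

```python
import threading


class ClusterLockError(Exception):
    """Raised when the lease appears to be held already."""


class ClusterLock:
    """Illustrative lease guard: a local 'busy' flag shadows the on-disk lease.

    If release() is skipped on some error path (for example after the
    device-mapper nodes vanish following 'dmsetup remove_all'), the flag
    stays set and every later acquire() fails even though no host holds
    the real lease -- the symptom reported in this bug.
    """

    def __init__(self, sd_uuid):
        self._sdUUID = sd_uuid
        self._mutex = threading.Lock()
        self._busy = False  # in-memory shadow of the lease state

    def acquire(self, host_id):
        with self._mutex:
            if self._busy:
                # Fails here even if the on-disk lease is actually free.
                raise ClusterLockError(
                    "Cannot acquire lock, resource marked as locked: %r"
                    % (self._sdUUID,))
            self._busy = True
        # ... a real implementation would now take the on-disk lease ...

    def release(self):
        with self._mutex:
            self._busy = False
```

Under this reading, the fix shipped in vdsm-4.9-66.el6 would ensure the stale flag is cleared (or re-derived from the actual lease) on such error paths; that is an inference from the symptoms, not a description of the actual patch.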
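The `_SHA_CKSUM` field in the metadata above is a 40-character SHA-1 hex digest that lets vdsm detect corrupted domain metadata. A hedged sketch of such a self-checksumming scheme; the exact bytes hashed (ordering, separators, exclusions) are assumptions here, not taken from the vdsm source:

```python
import hashlib


def checksum(lines):
    """SHA-1 over the metadata lines, excluding any existing checksum line.

    ASSUMPTION: lines are hashed in order with no separator; vdsm's real
    scheme may differ in ordering or delimiters.
    """
    digest = hashlib.sha1()
    for line in lines:
        if line.startswith("_SHA_CKSUM="):
            continue
        digest.update(line.encode("utf-8"))
    return digest.hexdigest()


def verify(lines):
    """Compare the stored _SHA_CKSUM value against a recomputed digest."""
    stored = next((l.split("=", 1)[1] for l in lines
                   if l.startswith("_SHA_CKSUM=")), None)
    return stored is not None and stored == checksum(lines)
```

With a scheme like this, a reader refreshing the metadata (as `PersistentDict.refresh` does in the DEBUG line above) can reject a block whose stored digest no longer matches its key=value lines.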