Bug 742764

Summary: Can't create VM from template after SPM failover.
Product: Red Hat Enterprise Linux 6
Reporter: Leonid Natapov <lnatapov>
Component: vdsm
Assignee: Igor Lvovsky <ilvovsky>
Status: CLOSED ERRATA
QA Contact: Leonid Natapov <lnatapov>
Severity: urgent
Docs Contact:
Priority: unspecified
Version: 6.2
CC: abaron, bazulay, cpelland, danken, iheim, jgalipea, lpeer, ykaul
Target Milestone: rc
Keywords: Regression
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard: storage
Fixed In Version: vdsm-4.9-108
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2011-12-06 07:29:06 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Attachments:
  vdsm log (Flags: none)

Description Leonid Natapov 2011-10-02 16:22:06 UTC
Created attachment 525945 [details]
vdsm log

vdsm version is 105.

While testing manual fencing with "Confirm 'Host has been Rebooted'", I ran the following test: reboot the SPM machine while a pool of VMs is being created. The test passed; after the SPM machine was rebooted, a new SPM was elected on the other host in the DC, and the VMs that had not yet been created did not appear in RHEVM at all.

After this test I could not create any more VMs from that template (the template I used to create the pool). All attempts to create a Desktop VM or a Server VM with thin allocation failed.

In vdsm.log I get the following error:
--------------------------------------------------------------------------
d3f8894d-3226-4a8a-808b-c6e93cd30d50::ERROR::2011-10-02 18:13:38,494::task::868::TaskManager.Task::(_setError) Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 876, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/storage/task.py", line 303, in run
    return self.cmd(*self.argslist, **self.argsdict)
  File "/usr/share/vdsm/storage/spm.py", line 115, in run
    return self.func(*args, **kwargs)
  File "/usr/share/vdsm/storage/spm.py", line 957, in createVolume
    srcImgUUID=srcImgUUID, srcVolUUID=srcVolUUID)
  File "/usr/share/vdsm/storage/blockSD.py", line 419, in createVolume
    volUUID, desc, srcImgUUID, srcVolUUID)
  File "/usr/share/vdsm/storage/blockVolume.py", line 166, in create
    raise se.VolumeCannotGetParent("blockVolume can't get parent %s for volume %s: %s" % (srcVolUUID, volUUID, str(e)))
--------------------------------------------------------------------------

The vdsm.log file from the SPM host is attached.
After checking with Igor, it seems that LV 1e6bba77-0d75-46c6-ba4c-8a772ea139f9 is not active.

Here is the lvs output from the SPM host:
 LV                                                       VG                                   Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  0ab3df93-deb8-426d-a54a-1ae3c3a22472                     0caec7cc-15e2-42bf-b337-52969ee491f1 -wi---  20.00g
  1e6bba77-0d75-46c6-ba4c-8a772ea139f9                     0caec7cc-15e2-42bf-b337-52969ee491f1 -ri---  10.00g
  7582766d-ebf7-4d12-9fbd-2a584ee7ea68                     0caec7cc-15e2-42bf-b337-52969ee491f1 -ri---   1.00g
  807e97fb-c1b4-4658-a488-2e8397254c71                     0caec7cc-15e2-42bf-b337-52969ee491f1 -wi---  10.00g
  8d1cd1f1-e894-4032-b00c-1e299abbe19c                     0caec7cc-15e2-42bf-b337-52969ee491f1 -wi---   1.00g
  9d41d672-f740-43fe-a981-7ff572dfdf48                     0caec7cc-15e2-42bf-b337-52969ee491f1 -wi---   1.00g
  _remove_me_yBFwidUV_55bbccc9-2566-4fb3-8fd4-d2cbb327cf2b 0caec7cc-15e2-42bf-b337-52969ee491f1 -wi---  30.00g
  ab1bc2cf-5a26-4046-86eb-1a091ebc2f9e                     0caec7cc-15e2-42bf-b337-52969ee491f1 -ri---  20.00g
  bd0d180d-c6f1-4708-b08e-8733ad473516                     0caec7cc-15e2-42bf-b337-52969ee491f1 -ri---  10.00g
  e1aa8e87-7648-4425-a72a-e82a1f69f2f3                     0caec7cc-15e2-42bf-b337-52969ee491f1 -ri---  10.00g
  ea914935-bccb-4e65-949c-2b4a59f7fabc                     0caec7cc-15e2-42bf-b337-52969ee491f1 -wi---  10.00g
  ids                                                      0caec7cc-15e2-42bf-b337-52969ee491f1 -wi-a- 128.00m
  inbox                                                    0caec7cc-15e2-42bf-b337-52969ee491f1 -wi-ao 128.00m
  leases                                                   0caec7cc-15e2-42bf-b337-52969ee491f1 -wi-a-   2.00g
  master                                                   0caec7cc-15e2-42bf-b337-52969ee491f1 -wi-ao   1.00g
  metadata                                                 0caec7cc-15e2-42bf-b337-52969ee491f1 -wi-a- 512.00m
  outbox                                                   0caec7cc-15e2-42bf-b337-52969ee491f1 -wi-a- 128.00m
  lv_home                                                  vg0                                  -wi-ao 101.08g
  lv_root                                                  vg0                                  -wi-ao  19.53g
  lv_swap                                                  vg0                                  -wi-ao  15.62g
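
For reference, a minimal sketch of how the activation state of that LV could be checked and the LV activated by hand; the helper below is hypothetical (it is not vdsm code) and only wraps the standard lvs/lvchange CLI tools, using the VG/LV names from the listing above:

# Hypothetical helper (not vdsm code): check whether the template LV is
# active and, if not, activate it with the standard LVM CLI tools.
import subprocess

VG = "0caec7cc-15e2-42bf-b337-52969ee491f1"
LV = "1e6bba77-0d75-46c6-ba4c-8a772ea139f9"

def lv_is_active(vg, lv):
    # The fifth character of lv_attr is 'a' when the LV is active.
    attr = subprocess.check_output(
        ["lvs", "--noheadings", "-o", "lv_attr", "%s/%s" % (vg, lv)])
    return attr.strip()[4:5] == b"a"

if not lv_is_active(VG, LV):
    subprocess.check_call(["lvchange", "-ay", "%s/%s" % (VG, LV)])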

Comment 2 yeylon@redhat.com 2011-10-02 17:38:10 UTC
Leonid, please specify in the QA Whiteboard which test case this was and from which test plan.
Do you have anything in the backend log?

Comment 4 Dan Kenigsberg 2011-10-02 21:02:06 UTC
I'm a bit worried about the repeated appearance of

  /dev/mapper/1IET_004f0001: read failed after 0 of 4096 at 12884836352: Input/output error
  /dev/mapper/1IET_004f0001: read failed after 0 of 4096 at 12884893696: Input/output error
  /dev/mapper/1IET_004f0001: read failed after 0 of 4096 at 0: Input/output error
  WARNING: Error counts reached a limit of 3. Device /dev/mapper/1IET_004f0001 was disabled

Would you please reproduce the issue with the following line removed?

--- a/vdsm/storage/lvm.py
+++ b/vdsm/storage/lvm.py
@@ -192,7 +192,6 @@ devices {
 preferred_names = ["^/dev/mapper/"]
 ignore_suspended_devices=1
 write_cache_state=0
-disable_after_error_count=3
 %s
 }
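
For context, the devices{} section above is part of the LVM configuration string that vdsm hands to the LVM commands on the command line; a simplified sketch of the idea (assumed for illustration; the exact template and invocation in vdsm's lvm.py differ) looks like:

# Simplified sketch (assumed, not the exact vdsm code) of how the devices{}
# section is rendered and passed to an LVM command via --config.
import subprocess

LVM_CONF_TEMPLATE = """
devices {
 preferred_names = ["^/dev/mapper/"]
 ignore_suspended_devices=1
 write_cache_state=0
 disable_after_error_count=3
 %s
}
"""

def run_lvs(filter_line):
    # filter_line fills the %s placeholder, e.g.
    # filter = [ "a|^/dev/mapper/.*|", "r|.*|" ]
    conf = LVM_CONF_TEMPLATE % filter_line
    return subprocess.check_output(["lvs", "--config", conf])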

Comment 5 Leonid Natapov 2011-10-03 09:55:27 UTC
After removing the "disable_after_error_count=3" line from lvm.py,
the host could not connect to the storage pool.

Here is the relevant part of vdsm.log:


Thread-981::DEBUG::2011-10-03 11:36:05,494::resourceManager::544::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.f050881a-f4bf-4336-a5b9-be1b15dc5954', Clearing records.
Thread-981::ERROR::2011-10-03 11:36:05,494::task::868::TaskManager.Task::(_setError) Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 876, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/storage/hsm.py", line 506, in public_connectStoragePool
    return self._connectStoragePool(spUUID, hostID, scsiKey, msdUUID, masterVersion, options)
  File "/usr/share/vdsm/storage/hsm.py", line 548, in _connectStoragePool
    res = pool.connect(hostID, scsiKey, msdUUID, masterVersion)
  File "/usr/share/vdsm/storage/sp.py", line 366, in connect
    self.__rebuild(msdUUID=msdUUID, masterVersion=masterVersion)
  File "/usr/share/vdsm/storage/sp.py", line 824, in __rebuild
    self.masterDomain = self.getMasterDomain(msdUUID=msdUUID, masterVersion=masterVersion)
  File "/usr/share/vdsm/storage/sp.py", line 1129, in getMasterDomain
    raise se.StoragePoolMasterNotFound(self.spUUID, msdUUID)


Thread-986::ERROR::2011-10-03 11:36:11,008::task::868::TaskManager.Task::(_setError) Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 876, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/storage/spm.py", line 663, in public_getSpmStatus
    pool = hsm.HSM.getPool(spUUID)
  File "/usr/share/vdsm/storage/hsm.py", line 125, in getPool
    raise se.StoragePoolUnknown(spUUID)
StoragePoolUnknown: Unknown pool id, pool not connected: ('f050881a-f4bf-4336-a5b9-be1b15dc5954',)

Comment 6 Dan Kenigsberg 2011-10-03 10:47:08 UTC
(In reply to comment #5)
> After removing the "disable_after_error_count=3" line from lvm.py,
> the host could not connect to the storage pool.

This does not make much sense to me... Is it reproducible? If you re-add the line, do you get the former behavior? What if you change 3 to 300?

Comment 7 Leonid Natapov 2011-10-03 10:54:39 UTC
Yes. If I re-add the line, I get the former behavior. If I change the value from 3 to 300, I get the same behavior as with 3.

Comment 8 Igor Lvovsky 2011-10-11 15:36:53 UTC
http://gerrit.usersys.redhat.com/#change,1025

Comment 10 Leonid Natapov 2011-10-24 13:46:04 UTC
ic145.

Comment 11 errata-xmlrpc 2011-12-06 07:29:06 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2011-1782.html