Bug 1143830
| Field | Value |
|---|---|
| Summary | Wrong error message when failing to create a volume on block storage |
| Product | [Retired] oVirt |
| Component | vdsm |
| Version | 3.5 |
| Status | CLOSED CURRENTRELEASE |
| Severity | medium |
| Priority | unspecified |
| Reporter | Vered Volansky <vered> |
| Assignee | Nir Soffer <nsoffer> |
| QA Contact | Gil Klein <gklein> |
| CC | amureini, bazulay, bugs, danken, ebenahar, ecohen, gklein, iheim, jmoskovc, lsurette, mgoldboi, rbalakri, sbonazzo, yeylon |
| Target Release | 3.5.1 |
| Hardware | Unspecified |
| OS | Unspecified |
| Whiteboard | storage |
| oVirt Team | Storage |
| Fixed In Version | ovirt-3.5.1_rc1 |
| Doc Type | Bug Fix |
| Type | Bug |
| Bug Blocks | 1193195 |
| Last Closed | 2015-01-21 16:04:38 UTC |
Description (Vered Volansky, 2014-09-18 05:48:41 UTC):
The VM has not been started. Checking the vdsm logs, I can see that it failed to start due to:

    Thread-82::ERROR::2014-09-18 08:26:18,845::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': {'message': "Logical volume does not exist: ('86ddf5b3-3e57-47e0-8823-e1561d32afc9/4bc8d1bd-3a6e-4363-81a3-b15dc12cc304',)", 'code': 610}}
    Thread-82::DEBUG::2014-09-18 08:26:18,845::vm::2289::vm.Vm::(_startUnderlyingVm) vmId=`ab9cfe11-980d-454c-bf76-f984734da397`::_ongoingCreations released
    Thread-82::ERROR::2014-09-18 08:26:18,845::vm::2326::vm.Vm::(_startUnderlyingVm) vmId=`ab9cfe11-980d-454c-bf76-f984734da397`::The vm start process failed
    Traceback (most recent call last):
      File "/usr/share/vdsm/virt/vm.py", line 2266, in _startUnderlyingVm
        self._run()
      File "/usr/share/vdsm/virt/vm.py", line 3301, in _run
        devices = self.buildConfDevices()
      File "/usr/share/vdsm/virt/vm.py", line 2063, in buildConfDevices
        self._normalizeVdsmImg(drv)
      File "/usr/share/vdsm/virt/vm.py", line 1986, in _normalizeVdsmImg
        drv['volumeID'])
    StorageUnavailableError: ('Failed to get size for volume %s', '4bc8d1bd-3a6e-4363-81a3-b15dc12cc304')

The original error was:

    75e88a4c-6162-4d50-9274-06fcf6794f52::DEBUG::2014-09-18 08:25:38,514::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm lvcreate --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/3600144f09dbd0500000050f6881d0001|/dev/mapper/3600144f09dbd0500000054196bb30017|/dev/mapper/3600144f09dbd0500000054196bb60018|/dev/mapper/3600144f09dbd0500000054196bb80019|/dev/mapper/3600144f09dbd0500000054196bba001a|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --autobackup n --contiguous n --size 25600m --addtag OVIRT_VOL_INITIALIZING --name 4bc8d1bd-3a6e-4363-81a3-b15dc12cc304 86ddf5b3-3e57-47e0-8823-e1561d32afc9 (cwd None)
    Thread-36::DEBUG::2014-09-18 08:25:38,514::__init__::232::IOProcess::(_processLogs) (565) Got request for method 'lexists'
    Thread-36::DEBUG::2014-09-18 08:25:38,518::__init__::232::IOProcess::(_processLogs) (565) Queuing response
    75e88a4c-6162-4d50-9274-06fcf6794f52::DEBUG::2014-09-18 08:25:38,539::lvm::288::Storage.Misc.excCmd::(cmd) FAILED: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n Volume group "86ddf5b3-3e57-47e0-8823-e1561d32afc9" has insufficient free space (124 extents): 200 required.\n'; <rc> = 5
    75e88a4c-6162-4d50-9274-06fcf6794f52::ERROR::2014-09-18 08:25:38,540::volume::438::Storage.Volume::(create) Failed to create volume: /rhev/data-center/b99bafd8-95a7-49c4-a6df-dd4b786aba07/86ddf5b3-3e57-47e0-8823-e1561d32afc9/images/8fc9adce-9225-4db8-907b-b637e989b647/4bc8d1bd-3a6e-4363-81a3-b15dc12cc304, volume already exists
    75e88a4c-6162-4d50-9274-06fcf6794f52::ERROR::2014-09-18 08:25:38,540::volume::474::Storage.Volume::(create) Unexpected error
    Traceback (most recent call last):
      File "/usr/share/vdsm/storage/volume.py", line 440, in create
        raise e
    CannotCreateLogicalVolume: Cannot create Logical Volume: ('86ddf5b3-3e57-47e0-8823-e1561d32afc9', '4bc8d1bd-3a6e-4363-81a3-b15dc12cc304')

It's not clear why the logical volume "does not exist" at VM startup when it reportedly "already exists" at volume creation time. Moving to storage / vdsm.

*** Bug 1136199 has been marked as a duplicate of this bug. ***
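The mismatch between the real lvcreate failure ("insufficient free space") and the reported message ("volume already exists") is the core of this bug. As a minimal sketch, and not vdsm's actual code (all names below are illustrative stand-ins), this is the kind of logging anti-pattern that can produce such a misleading message: a catch-all handler that assumes one specific failure cause instead of reporting the exception it caught.

```python
# Illustrative sketch only; CannotCreateLogicalVolume, create_lv and
# create_volume are stand-ins, not the real vdsm storage code.
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("Storage.Volume")


class CannotCreateLogicalVolume(Exception):
    pass


def create_lv(vg_name, lv_name, size_mb):
    # Stand-in for the lvcreate wrapper; it fails here the same way the log
    # above shows: the VG has too few free extents for the requested size.
    raise CannotCreateLogicalVolume(
        'Volume group "%s" has insufficient free space' % vg_name)


def create_volume(vg_name, lv_name, size_mb, vol_path):
    try:
        create_lv(vg_name, lv_name, size_mb)
    except CannotCreateLogicalVolume:
        # Anti-pattern: a fixed message that guesses one cause, so the user
        # sees "volume already exists" even when the real problem is space.
        log.error("Failed to create volume: %s, volume already exists",
                  vol_path)
        raise
        # A clearer alternative preserves the real reason, e.g.:
        # log.exception("Failed to create volume %s", vol_path)


if __name__ == "__main__":
    try:
        create_volume("86ddf5b3", "4bc8d1bd", 25600, "/rhev/.../example-volume")
    except CannotCreateLogicalVolume as exc:
        print("real cause: %s" % exc)
```

Logging the caught exception itself (for example with `log.exception`) surfaces the actual cause; that appears to be what the later comment calls "bad logging" fixed by patch 33301, though the exact fix is not shown here.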
Looks like it's reproducible, see bug #1136199.

Sandro, I don't follow. Vdsm is pretty clear that volume creation (of size 25600m!) failed due to:

    Volume group "86ddf5b3-3e57-47e0-8823-e1561d32afc9" has insufficient free space (124 extents): 200 required.

Did Vdsm report success of LV creation?

Dan, what about:

    75e88a4c-6162-4d50-9274-06fcf6794f52::ERROR::2014-09-18 08:25:38,540::volume::438::Storage.Volume::(create) Failed to create volume: /rhev/data-center/b99bafd8-95a7-49c4-a6df-dd4b786aba07/86ddf5b3-3e57-47e0-8823-e1561d32afc9/images/8fc9adce-9225-4db8-907b-b637e989b647/4bc8d1bd-3a6e-4363-81a3-b15dc12cc304, volume already exists

?

That's from the other bug: these are two different modes of failure. It seems to me that Vered's issue (this one) is simply because of a VG that is too small, and because ovirt-hosted-engine-setup somehow ignored Vdsm's failure. Bug 1136199 seems more complex; could it be that ovirt-hosted-engine-setup generated an LV uuid and re-used it for a re-installation attempt? (It may be a good idea per se, provided that the admin knows what she's doing.) In any case, they don't seem to be duplicates.

(In reply to Dan Kenigsberg from comment #6)
> That's from the other bug: these are two different modes of failure. It
> seems to me that Vered's issue (this one) is simply because of a VG that is
> too small, and because ovirt-hosted-engine-setup somehow ignored Vdsm's
> failure.

Just in case you missed it from comment #0, I did use a 20G iscsi target. The prompt suggested that the minimum is 25G, but this is what I had available at the time, and I actually thought I'd be blocked when selecting this target, as it's too small. So it may have uncovered other issues, but the target really was too small. BTW, why not just disallow this attempt on the fly?

(In reply to Sandro Bonazzola from comment #5)
> Dan, what about:
> 75e88a4c-6162-4d50-9274-06fcf6794f52::ERROR::2014-09-18
> 08:25:38,540::volume::438::Storage.Volume::(create) Failed to create volume:
> /rhev/data-center/b99bafd8-95a7-49c4-a6df-dd4b786aba07/86ddf5b3-3e57-47e0-
> 8823-e1561d32afc9/images/8fc9adce-9225-4db8-907b-b637e989b647/4bc8d1bd-3a6e-
> 4363-81a3-b15dc12cc304, volume already exists ?

This is just bad logging, which patch 33301 solves.

(In reply to Vered Volansky from comment #7)
> Just in case you missed it from comment #0, I did use a 20G iscsi target.
> The prompt suggested that the minimum is 25G, but this is what I had
> available at the time, and I actually thought I'd be blocked when selecting
> this target, as it's too small. So it may have uncovered other issues, but
> the target really was too small. BTW, why not just disallow this attempt on
> the fly?

I just tried to reproduce this, and setup correctly refused to continue with only a 20G iscsi target:

    [ERROR] Error: device <device name> has capacity of only 20000Mb while a minumum of 20480Mb is required
    Please specify the target name <name of the targets>:

rpm version: ovirt-hosted-engine-setup-1.2.1-1.fc20.noarch (the original report is against 1.2.0, so I'd suggest retrying with the latest update)

This is an automated message: this bug should be fixed in oVirt 3.5.1 RC1, moving to QA.

oVirt 3.5.1 has been released. If problems still persist, please make note of it in this bug report.
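Regarding "why not just disallow this attempt on the fly?": one way to fail fast is to check the volume group's free space before issuing lvcreate. Below is a hedged sketch of that idea using the standard `vgs` tool; it is not the check ovirt-hosted-engine-setup actually performs, and the VG name and 25600 MB size in the usage comment are simply taken from the lvcreate command in the log above.

```python
# Sketch of an up-front free-space check before lvcreate; illustrative only.
import subprocess


def vg_free_mb(vg_name):
    """Return the volume group's free space in megabytes, as reported by vgs."""
    out = subprocess.check_output(
        ["vgs", "--noheadings", "--nosuffix", "--units", "m",
         "-o", "vg_free", vg_name])
    return float(out.decode("utf-8").strip())


def ensure_free_space(vg_name, required_mb):
    """Fail fast, before lvcreate, if the VG cannot hold the requested volume."""
    free = vg_free_mb(vg_name)
    if free < required_mb:
        raise RuntimeError(
            "VG %s has only %.0f MB free, %d MB required"
            % (vg_name, free, required_mb))


# Example usage (needs the LVM tools and sufficient privileges); values taken
# from the failing lvcreate in the attached log:
# ensure_free_space("86ddf5b3-3e57-47e0-8823-e1561d32afc9", 25600)
```

Per the comment above, ovirt-hosted-engine-setup 1.2.1 already enforces an equivalent minimum-capacity check at target selection time, which is why the reproduction attempt was refused.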