Bug 829645

Summary: 3.1 - [vdsm] VM moves to paused when trying to create live snapshot on iSCSI domain

Product: Red Hat Enterprise Linux 6
Component: vdsm
Status: CLOSED ERRATA
Severity: unspecified
Priority: high
Version: 6.3
Target Milestone: rc
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard: storage
Fixed In Version: vdsm-4.9.6-19.0

Reporter: Kiril Nesenko <knesenko>
Assignee: Federico Simoncelli <fsimonce>
QA Contact: vvyazmin <vvyazmin>
Docs Contact:
CC: abaron, aburden, bazulay, cpelland, dfediuck, ewarszaw, hateya, iheim, ilvovsky, jbiddle, sgrinber, ykaul
Doc Type: Bug Fix
Doc Text:
Previously, virtual machines would move to a paused state when a live snapshot was created on an iSCSI domain: VDSM changed the current leaf volume's permission to read-only while creating the new volume, and the running virtual machine then paused with an EPERM error. This update reintroduces the code that handles this case, so virtual machines no longer pause unexpectedly.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2012-12-04 18:58:59 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments: logs (attachment 590115)
Description
Kiril Nesenko
2012-06-07 08:14:05 UTC
Created attachment 590115 [details]
logs
(In reply to comment #0)
> Description of problem:
> VM moves to paused when trying to create live snapshot on iscsi domain
>
> Thread-546851::WARNING::2012-06-07 10:50:16,816::blockVolume::99::Storage.Volume::(getVSize) Could not get size for vol d9e5117e-6137-4296-9cf5-a291df7e735f/2d71b227-7bdb-4f43-9ca1-3c5ccab84b25 using optimized methods
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/blockVolume.py", line 92, in getVSize
>     return _getDeviceSize(lvm.lvPath(sdobj.sdUUID, volUUID)) / bs
>   File "/usr/share/vdsm/storage/blockVolume.py", line 63, in _getDeviceSize
>     with open(devPath, "rb") as f:
> IOError: [Errno 2] No such file or directory: '/dev/d9e5117e-6137-4296-9cf5-a291df7e735f/2d71b227-7bdb-4f43-9ca1-3c5ccab84b25'
> Thread-546851::WARNING::2012-06-07 10:50:16,817::blockVolume::99::Storage.Volume::(getVSize) Could not get size for vol d9e5117e-6137-4296-9cf5-a291df7e735f/2d71b227-7bdb-4f43-9ca1-3c5ccab84b25 using optimized methods
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/blockVolume.py", line 92, in getVSize
>     return _getDeviceSize(lvm.lvPath(sdobj.sdUUID, volUUID)) / bs
>   File "/usr/share/vdsm/storage/blockVolume.py", line 63, in _getDeviceSize
>     with open(devPath, "rb") as f:
> IOError: [Errno 2] No such file or directory: '/dev/d9e5117e-6137-4296-9cf5-a291df7e735f/2d71b227-7bdb-4f43-9ca1-3c5ccab84b25'

We already discussed these warnings; they are unrelated (and irrelevant in general). I'll send a separate patch to remove these tracebacks (useless anyway) to make the warnings less scary.

The real issue is:

Thread-546834::INFO::2012-06-07 10:50:02,439::logUtils::37::dispatcher::(wrapper) Run and protect: createVolume(sdUUID='d9e5117e-6137-4296-9cf5-a291df7e735f', spUUID='7a844083-ce09-4cdf-b5d9-c699657c4b8a', imgUUID='2fd141b4-cffd-47a6-9ea3-3522585bab5c', size='5368709120', volFormat=4, preallocate=2, diskType=2, volUUID='2d71b227-7bdb-4f43-9ca1-3c5ccab84b25', desc='', srcImgUUID='2fd141b4-cffd-47a6-9ea3-3522585bab5c', srcVolUUID='4bccf789-62b5-4926-a8f2-7eed0191c575')
[...]
Thread-546834::DEBUG::2012-06-07 10:50:02,899::taskManager::66::TaskManager::(scheduleJob) scheduled job createVolume for task eb995d60-d7d7-4369-b667-1a7941273c3c
[...]
eb995d60-d7d7-4369-b667-1a7941273c3c::DEBUG::2012-06-07 10:50:03,503::__init__::1164::Storage.Misc.excCmd::(_log) '/usr/bin/sudo -n /sbin/lvm lvcreate --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%3600144f051a2450000004dcc59a60003|3600144f051a2450000004dcc59b00004|3600144f051a2450000004dcc59ba0005|3600144f051a2450000004dcc59ca0006|3600144f051a2450000004dcc5a340007|3600144f051a2450000004dcc5a3d0008|3600144f051a2450000004dcc5a490009%\\", \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --autobackup n --contiguous n --size 1024m --name 2d71b227-7bdb-4f43-9ca1-3c5ccab84b25 d9e5117e-6137-4296-9cf5-a291df7e735f' (cwd None)
[...]
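As background for the quoted warnings: the "optimized method" in the traceback simply opens the LV device node and seeks to its end to learn its size. Below is a minimal sketch of that idea; it is illustrative only, and the function name and return unit are assumptions rather than VDSM's actual blockVolume code.

```python
import os


def get_device_size_bytes(dev_path):
    """Size of a block device, obtained by opening it and seeking to the end.

    If the device node does not exist yet (e.g. the LV is not active), open()
    raises IOError with ENOENT, which is exactly what the quoted traceback
    shows; the caller then logs the "Could not get size ... using optimized
    methods" warning seen above.
    """
    with open(dev_path, "rb") as f:
        f.seek(0, os.SEEK_END)
        return f.tell()
```

As the comment notes, hitting this path only produces a noisy warning; it is not what pauses the VM.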
eb995d60-d7d7-4369-b667-1a7941273c3c::DEBUG::2012-06-07 10:50:03,913::__init__::1164::Storage.Misc.excCmd::(_log) '/usr/bin/sudo -n /sbin/lvm lvchange --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%3600144f051a2450000004dcc59a60003|3600144f051a2450000004dcc59b00004|3600144f051a2450000004dcc59ba0005|3600144f051a2450000004dcc59ca0006|3600144f051a2450000004dcc5a340007|3600144f051a2450000004dcc5a3d0008|3600144f051a2450000004dcc5a490009%\\", \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --autobackup n --permission r d9e5117e-6137-4296-9cf5-a291df7e735f/4bccf789-62b5-4926-a8f2-7eed0191c575' (cwd None)
[...]
Thread-546843::DEBUG::2012-06-07 10:50:09,581::BindingXMLRPC::870::vds::(wrapper) return vmGetStats with {'status': {'message': 'Done', 'code': 0}, 'statsList': [{'status': 'Paused', 'username': 'Unknown', 'memUsage': '0', 'acpiEnable': 'true', 'pid': '3620', 'displayIp': '0', 'displayPort': u'5900', 'session': 'Unknown', 'displaySecurePort': u'5901', 'timeOffset': '0', 'hash': '5864640714274609002', 'pauseCode': 'EPERM', 'clientIp': '10.35.1.90', 'kvmEnable': 'true', 'network': {u'vnet0': {'macAddr': '00:1a:4a:23:11:0e', 'rxDropped': '0', 'rxErrors': '0', 'txDropped': '0', 'txRate': '0.0', 'rxRate': '0.0', 'txErrors': '0', 'state': 'unknown', 'speed': '1000', 'name': u'vnet0'}}, 'vmId': '38bde490-bf53-462b-a450-9f2b6aaab086', 'displayType': 'qxl', 'cpuUser': '6.47', 'disks': {u'vda': {'readLatency': '0', 'apparentsize': '3221225472', 'writeLatency': '726164', 'imageID': '2fd141b4-cffd-47a6-9ea3-3522585bab5c', 'flushLatency': '2922889', 'readRate': '0.00', 'truesize': '3221225472', 'writeRate': '266.28'}, u'hdc': {'flushLatency': '0', 'readLatency': '0', 'writeLatency': '0'}}, 'monitorResponse': '0', 'statsAge': '0.21', 'cpuIdle': '93.53', 'elapsedTime': '214', 'vmType': 'kvm', 'cpuSys': '0.00', 'appsList': [], 'guestIPs': '', 'nice': ''}]}

This issue has been known for a long time: VDSM changes the current leaf's permission to read-only when you create a new volume, and as a result the VM pauses with an EPERM error (see pauseCode: 'EPERM' above). For some time we included code to work around this problem, but it was removed in 7281bb2 (4.9.6-11). We are about to reintroduce the fix in:

http://gerrit.ovirt.org/#/c/2786/

Verified on RHEVM 3.1 - SI10
RHEV-M: rhevm-3.1.0-6.el6ev.noarch
VDSM: vdsm-4.9.6-21.0.el6_3.x86_64
LIBVIRT: libvirt-0.9.10-21.el6_3.1.x86_64
QEMU & KVM: qemu-kvm-rhev-0.12.1.2-2.295.el6.x86_64

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2012-1508.html
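For readers skimming the logs above, the failure boils down to the two LVM calls recorded during createVolume. The following simplified sketch shows that sequence; the function names are hypothetical, the --config argument from the log is omitted, and this is not VDSM's actual implementation.

```python
import subprocess


def run_lvm(*args):
    """Run an LVM command through sudo, mirroring the Storage.Misc.excCmd log lines."""
    subprocess.check_call(["/usr/bin/sudo", "-n", "/sbin/lvm"] + list(args))


def create_snapshot_volume(vg_uuid, new_vol_uuid, current_leaf_uuid):
    # Step 1 (the lvcreate in the log): allocate the new volume for the
    # live snapshot on top of the current leaf.
    run_lvm("lvcreate", "--autobackup", "n", "--contiguous", "n",
            "--size", "1024m", "--name", new_vol_uuid, vg_uuid)
    # Step 2 (the lvchange in the log): flip the current leaf to read-only.
    # The running VM still has this LV open for writing, so its writes fail
    # and the guest is paused, which is reported as pauseCode 'EPERM' in the
    # vmGetStats output above.
    run_lvm("lvchange", "--autobackup", "n", "--permission", "r",
            "%s/%s" % (vg_uuid, current_leaf_uuid))
```

The fix referenced in the comments (http://gerrit.ovirt.org/#/c/2786/) reintroduces the workaround that was removed in 7281bb2 so that this permission change no longer pauses the running VM; the exact mechanism is not detailed in this report.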