Bug 983481 - Live snapshot creation fails but snapshot is created in system
Status: CLOSED CURRENTRELEASE
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.3.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: 3.3.0
Assigned To: Michal Skrivanek
Whiteboard: virt
Keywords: Regression, TestBlocker, Triaged
Depends On:
Blocks:
Reported: 2013-07-11 06:02 EDT by Jakub Libosvar
Modified: 2015-09-22 09 EDT
CC: 8 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-07-15 06:46:43 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
logs (4.69 MB, application/x-bzip)
2013-07-11 06:02 EDT, Jakub Libosvar

Description Jakub Libosvar 2013-07-11 06:02:32 EDT
Created attachment 772125
logs

Description of problem:
Live snapshots cannot be created while the VM is up.

1626300f-5ddd-40a1-8983-97328b728452::DEBUG::2013-07-11 12:09:47,863::lvm::310::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvchange --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a%3514f0c542a0029c5|3514f0c542a0029d2|3514f0c542a0029d3|3514f0c542a0029de|3514f0c542a002a72|3514f0c542a002af7|3514f0c542a002afb|3514f0c542a002afc|360060160f4a03000a45adc4c87e9e211%\', \'r%.*%\' ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1 }  backup {  retain_min = 50  retain_days = 0 } " --autobackup n --available n 729d6a2b-28c6-4587-a958-f4e33051ffb5/1619183b-bee7-4e0a-aa37-3fdc5e7ff827' (cwd None)
1626300f-5ddd-40a1-8983-97328b728452::DEBUG::2013-07-11 12:09:52,992::lvm::310::Storage.Misc.excCmd::(cmd) FAILED: <err> = '  /dev/mapper/3514f0c542a002af7: read failed after 0 of 4096 at 21474770944: Input/output error\n  /dev/mapper/3514f0c542a002af7: read failed after 0 of 4096 at 21474828288: Input/output error\n  /dev/mapper/3514f0c542a002af7: read failed after 0 of 4096 at 0: Input/output error\n  WARNING: Error counts reached a limit of 3. Device /dev/mapper/3514f0c542a002af7 was disabled\n  /dev/mapper/3514f0c542a0029c5: read failed after 0 of 4096 at 21474770944: Input/output error\n  /dev/mapper/3514f0c542a0029c5: read failed after 0 of 4096 at 21474828288: Input/output error\n  /dev/mapper/3514f0c542a0029c5: read failed after 0 of 4096 at 0: Input/output error\n  WARNING: Error counts reached a limit of 3. Device /dev/mapper/3514f0c542a0029c5 was disabled\n  /dev/mapper/360060160f4a03000a45adc4c87e9e211: read failed after 0 of 4096 at 53687025664: Input/output error\n  /dev/mapper/360060160f4a03000a45adc4c87e9e211: read failed after 0 of 4096 at 53687083008: Input/output error\n  /dev/mapper/360060160f4a03000a45adc4c87e9e211: read failed after 0 of 4096 at 0: Input/output error\n  WARNING: Error counts reached a limit of 3. Device /dev/mapper/360060160f4a03000a45adc4c87e9e211 was disabled\n  /dev/mapper/3514f0c542a0029d2: read failed after 0 of 4096 at 53687025664: Input/output error\n  /dev/mapper/3514f0c542a0029d2: read failed after 0 of 4096 at 53687083008: Input/output error\n  /dev/mapper/3514f0c542a0029d2: read failed after 0 of 4096 at 0: Input/output error\n  WARNING: Error counts reached a limit of 3. Device /dev/mapper/3514f0c542a0029d2 was disabled\n  /dev/mapper/3514f0c542a0029d3: read failed after 0 of 4096 at 53687025664: Input/output error\n  /dev/mapper/3514f0c542a0029d3: read failed after 0 of 4096 at 53687083008: Input/output error\n  /dev/mapper/3514f0c542a0029d3: read failed after 0 of 4096 at 0: Input/output error\n  WARNING: Error counts reached a limit of 3. 
Device /dev/mapper/3514f0c542a0029d3 was disabled\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  Unable to deactivate 729d6a2b--28c6--4587--a958--f4e33051ffb5-1619183b--bee7--4e0a--aa37--3fdc5e7ff827 (253:43)\n'; <rc> = 5
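
The <rc> = 5 above is lvchange --available n failing: the snapshot LV's device-mapper node could not be deactivated because something still held it open ("Device or resource busy"). A minimal diagnostic sketch for checking what holds the node, assuming root shell access on the affected host; the dm name is copied from the log above:

import subprocess

# dm name of the LV that LVM could not deactivate (from the log above)
DM_NAME = ("729d6a2b--28c6--4587--a958--f4e33051ffb5-"
           "1619183b--bee7--4e0a--aa37--3fdc5e7ff827")

def run(cmd):
    # Run a command and return its stdout; raises on a non-zero exit.
    return subprocess.check_output(cmd)

# An 'Open' count > 0 here means a process or another dm table still
# holds the device, which matches the busy 'remove ioctl' errors above.
print(run(["dmsetup", "info", "-c", DM_NAME]))

# Shows whether the volume is still stacked under another device or
# opened by a guest (e.g. qemu keeping the snapshot volume open).
print(run(["lsblk", "/dev/mapper/" + DM_NAME]))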

2013-07-11 12:09:55,730 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (pool-5-thread-49) [2380b4c8] START, SnapshotVDSCommand(HostName = 10.35.160.63, HostId = 1f2592bf-a201-4532-a3d8-725e37133c55, vmId=ce1d44a1-4a9e-40ae-8f6c-988b1381241b), log id: 3d16c93a
2013-07-11 12:09:55,738 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (pool-5-thread-49) [2380b4c8] Failed in SnapshotVDS method
2013-07-11 12:09:55,739 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (pool-5-thread-49) [2380b4c8] Error code unexpected and error message VDSGenericException: VDSErrorException: Failed to SnapshotVDS, error = Unexpected exception
2013-07-11 12:09:55,740 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (pool-5-thread-49) [2380b4c8] Command org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand return value
 StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=16, mMessage=Unexpected exception]]
2013-07-11 12:09:55,740 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (pool-5-thread-49) [2380b4c8] HostName = 10.35.160.63
2013-07-11 12:09:55,740 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (pool-5-thread-49) [2380b4c8] Command SnapshotVDS execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to SnapshotVDS, error = Unexpected exception

When snapshot creation fails, the snapshot is nevertheless created in the system. It can even be previewed and restored, but the restore does not actually roll the VM back: changes made to the VM after the snapshot was created are preserved.
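
The phantom snapshot can be confirmed over the REST API: even after SnapshotVDS fails on the host, the engine still lists it. A minimal sketch, assuming the RHEV 3.3 /api/vms/{vm_id}/snapshots endpoint and python-requests; only the VM UUID is taken from the engine log above, the engine URL and credentials are placeholders:

import requests  # assumption: python-requests is installed

ENGINE = "https://rhevm.example.com"  # placeholder
VM_ID = "ce1d44a1-4a9e-40ae-8f6c-988b1381241b"  # vmId from the engine log

resp = requests.get(
    "%s/api/vms/%s/snapshots" % (ENGINE, VM_ID),
    auth=("admin@internal", "password"),  # placeholder credentials
    headers={"Accept": "application/xml"},
    verify=False,  # lab setup only
)
resp.raise_for_status()
# If the bug reproduces, the failed snapshot is still listed here
# instead of being rolled back or flagged as broken.
print(resp.text)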

Version-Release number of selected component (if applicable):
rhevm-3.3.0-0.7.master.el6ev.noarch
vdsm-4.11.0-121.git082925a.el6.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Try to create a live snapshot of a running VM (see the REST sketch below).
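
For reference, step 1 can also be driven through the REST API; a minimal sketch with a placeholder engine URL, credentials and VM UUID:

import requests  # assumption: python-requests is installed

ENGINE = "https://rhevm.example.com"  # placeholder
VM_ID = "ce1d44a1-4a9e-40ae-8f6c-988b1381241b"  # VM must be up

body = "<snapshot><description>live-snap-test</description></snapshot>"
resp = requests.post(
    "%s/api/vms/%s/snapshots" % (ENGINE, VM_ID),
    data=body,
    auth=("admin@internal", "password"),  # placeholder credentials
    headers={"Content-Type": "application/xml"},
    verify=False,  # lab setup only
)
# The engine accepts the request even though SnapshotVDS later fails
# on the host, so the HTTP status alone does not expose the failure.
print(resp.status_code, resp.text)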

Actual results:
Live snapshot creation fails on the host, but the failure is not reported to the user, so the snapshot appears to have been created successfully.

Expected results:
The live snapshot is created successfully.

Additional info:
This bug blocks the live snapshot sanity test.
