Bug 1779664
Summary: | MERGE_STATUS fails with 'Invalid UUID string: mapper' when Direct LUN that already exists is hot-plugged [RHV clone - 4.3.8] | ||
---|---|---|---|
Product: | Red Hat Enterprise Virtualization Manager | Reporter: | RHV bug bot <rhv-bugzilla-bot> |
Component: | ovirt-engine | Assignee: | shani <sleviim> |
Status: | CLOSED ERRATA | QA Contact: | Shir Fishbain <sfishbai> |
Severity: | high | Docs Contact: | |
Priority: | unspecified | ||
Version: | 4.3.5 | CC: | aefrat, dfediuck, eshenitz, frolland, gveitmic, michal.skrivanek, mkalinin, pelauter, rbarry, rdlugyhe, Rhev-m-bugs, sleviim, tnisan |
Target Milestone: | ovirt-4.3.8 | Keywords: | ZStream |
Target Release: | 4.3.8 | Flags: | lsvaty: testing_plan_complete-
Hardware: | x86_64 | ||
OS: | Linux | ||
Whiteboard: | |||
Fixed In Version: | ovirt-engine-4.3.8.2 | Doc Type: | Bug Fix |
Doc Text: |
Previously, when you deleted a snapshot of a VM with a LUN disk, the engine parsed the disk's image ID incorrectly, yielding "mapper" as its value, which caused a null pointer exception. The current release fixes this issue by skipping disks whose image ID parses as "mapper", so deleting the VM snapshot succeeds.
|
Story Points: | --- |
Clone Of: | 1750212 | Environment: | |
Last Closed: | 2020-02-13 15:24:42 UTC | Type: | --- |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | Storage | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | 1750212 | ||
Bug Blocks: |
Description
RHV bug bot
2019-12-04 13:23:35 UTC
Diego, could you please attach the engine logs from our labs?

(Originally by Germano Veit Michel)

Benny, this looks related to the change you made in live merge lately. We probably don't filter by disk type when we get the volume chain info; please have a look.

(Originally by Tal Nisan)

(In reply to Tal Nisan from comment #3)
My latest change was introduced only in 4.3.6. This looks like https://bugzilla.redhat.com/show_bug.cgi?id=1598594

(Originally by Benny Zlotnik)

(In reply to Benny Zlotnik from comment #4)
I agree, this looks like a virt issue (same as bug 1598594).

(Originally by Eyal Shenitzky)

(In reply to Eyal Shenitzky from comment #5)
On this bug, the problem seems to be that the Direct LUN has VmDeviceType Disk instead of LUN, so the engine tries to do a volume lookup instead of ignoring it.

(Originally by Germano Veit Michel)

Indeed, this seems like a Virt issue, most likely in the Domain XML part of hot-plugging the disk, which doesn't attach the device properly. Ryan, can someone have a look?

(Originally by Tal Nisan)

LUNs are always hotplugged as type=disk, and they have been for years. There haven't been any changes around that handling since 2017. Is there a reason why snapshot merging is even trying to touch unmanaged storage instead of filtering it out? That seems like a saner solution.

(Originally by Ryan Barry)

WARN: Bug status wasn't changed from MODIFIED to ON_QA due to the following reason: [Found non-acked flags: '{'rhevm-4.3.z': '?'}', ] For more info please contact: rhv-devops

Verified - the delete snapshot succeeds:

2019-12-15 11:19:02,212+02 INFO [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-9) [11345ff1-9bb2-4032-ba85-983cbdc07874] Successfully merged snapshot '5c9b489c-692a-43b1-b94a-8cff957863c1' images 'b9744599-23d6-4a95-837c-b9ea0db28ad0'..'ff90d51f-e098-47db-804a-293f49e5e999'
2019-12-15 11:19:02,232+02 INFO [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-9) [11345ff1-9bb2-4032-ba85-983cbdc07874] Ending command 'org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand' successfully.
2019-12-15 11:19:02,234+02 INFO [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-9) [11345ff1-9bb2-4032-ba85-983cbdc07874] Lock freed to object 'EngineLock:{exclusiveLocks='', sharedLocks='[90c757ea-a9d7-4599-bc75-06dcc6a4fe60=TEMPLATE]'}'
2019-12-15 11:19:03,278+02 INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-1) [11345ff1-9bb2-4032-ba85-983cbdc07874] Command 'RemoveSnapshot' id: '539618a2-0b13-479d-8c6c-90376ec8f808' child commands '[ef244aaa-3ac9-4850-b82c-5e4a98324906, 4e28b7fc-dd60-4e8b-92d9-dba950f6562d]' executions were completed, status 'SUCCEEDED'
2019-12-15 11:19:04,317+02 INFO [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-13) [11345ff1-9bb2-4032-ba85-983cbdc07874] Ending command 'org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand' successfully.

Verified on:
ovirt-engine-4.3.8.1-0.1.master.el7.noarch
vdsm-4.30.39-1.el7ev.x86_64

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:0498
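The Doc Text describes the failure mode: a direct-LUN device path has no image UUID in the slot where a managed block volume keeps one, so naive path parsing produces the literal string "mapper" and the UUID constructor throws. The sketch below is a minimal illustration of that validation idea in Python; the function name and path layouts are illustrative assumptions, not the actual fix, which lives in ovirt-engine's Java code.

```python
import uuid
from typing import Optional

def image_id_of(volume_path: str) -> Optional[uuid.UUID]:
    """Hypothetical helper: extract an image ID from a volume path.

    A managed block volume path ends in .../images/<image_id>/<volume_id>,
    so the image ID sits in the second-to-last path segment. A direct LUN
    path looks like /dev/mapper/<wwid>, where that same slot holds the
    literal string "mapper". Validating the segment as a UUID, and
    returning None on failure, skips such disks instead of raising
    'Invalid UUID string: mapper'.
    """
    segments = volume_path.strip("/").split("/")
    if len(segments) < 2:
        return None
    try:
        return uuid.UUID(segments[-2])
    except ValueError:
        # Not a UUID (e.g. "mapper"): treat the disk as unmanaged.
        return None

# Managed volume: the image UUID is recovered from the path.
managed = image_id_of(
    "/rhev/data-center/mnt/blockSD/sd/images/"
    "b9744599-23d6-4a95-837c-b9ea0db28ad0/ff90d51f-e098-47db-804a-293f49e5e999"
)
# managed == UUID('b9744599-23d6-4a95-837c-b9ea0db28ad0')

# Direct LUN: "mapper" is not a valid UUID, so the disk is skipped.
lun = image_id_of("/dev/mapper/360014055f1e7")
# lun is None
```

This mirrors Ryan Barry's point in the thread: rather than special-casing device types downstream, snapshot merging simply filters out anything whose path does not yield a valid image UUID.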