Bug 1057221
| Summary: | [engine] Files created on stateless vm are retained after powering off and back on | | |
|---|---|---|---|
| Product: | [Retired] oVirt | Reporter: | Gadi Ickowicz <gickowic> |
| Component: | ovirt-engine-core | Assignee: | Daniel Erez <derez> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | Leonid Natapov <lnatapov> |
| Severity: | urgent | Docs Contact: | |
| Priority: | urgent | | |
| Version: | 3.4 | CC: | acathrow, amureini, gklein, iheim, michal.skrivanek, nlevinki, yeylon |
| Target Milestone: | --- | Keywords: | Regression |
| Target Release: | 3.4.0 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | storage | | |
| Fixed In Version: | ovirt-3.4.0-beta2 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2014-03-31 12:28:06 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | Storage | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Attachments: | vdsm and engine logs (attachment 854468) | | |
The old stateless snapshots are never removed; two new ones are created each time:

```
[root@gold-vdsd ~]# lvs 26822e47-465f-4378-bd11-1be48261ad3f -o lv_name,lv_attr,lv_tags | wc -l
29
[root@gold-vdsd ~]# lvs 26822e47-465f-4378-bd11-1be48261ad3f -o lv_name,lv_attr,lv_tags | wc -l
31
[root@gold-vdsd ~]# lvs 26822e47-465f-4378-bd11-1be48261ad3f -o lv_name,lv_attr,lv_tags | wc -l
33
```

From the engine logs, it seems only the internal restore command is run; the snapshot removal is never sent to VDSM:

```
2014-01-23 18:30:36,033 INFO [org.ovirt.engine.core.vdsbroker.DestroyVmVDSCommand] (org.ovirt.thread.pool-6-thread-39) [6940f4da] START, DestroyVmVDSCommand(HostName = gold-vdsd.qa.lab.tlv.redhat.com, HostId = 3d39f9c6-ef80-4107-99f9-a2b370f6db5a, vmId=02153b19-6cb9-4971-b653-ccef5e043a84, force=false, secondsToWait=0, gracefully=false), log id: 77d621e9
2014-01-23 18:30:36,040 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (org.ovirt.thread.pool-6-thread-39) [6940f4da] START, DestroyVDSCommand(HostName = gold-vdsd.qa.lab.tlv.redhat.com, HostId = 3d39f9c6-ef80-4107-99f9-a2b370f6db5a, vmId=02153b19-6cb9-4971-b653-ccef5e043a84, force=false, secondsToWait=0, gracefully=false), log id: 5c3678fe
2014-01-23 18:30:38,242 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (org.ovirt.thread.pool-6-thread-39) [6940f4da] FINISH, DestroyVDSCommand, log id: 5c3678fe
2014-01-23 18:30:38,248 INFO [org.ovirt.engine.core.bll.VmPoolHandler] (org.ovirt.thread.pool-6-thread-39) [6940f4da] VdcBll.VmPoolHandler.processVmPoolOnStopVm - Deleting snapshot for stateless vm 02153b19-6cb9-4971-b653-ccef5e043a84
2014-01-23 18:30:38,253 INFO [org.ovirt.engine.core.bll.RestoreStatelessVmCommand] (org.ovirt.thread.pool-6-thread-39) [13c4cdd] Running command: RestoreStatelessVmCommand internal: true. Entities affected : ID: 02153b19-6cb9-4971-b653-ccef5e043a84 Type: VM
2014-01-23 18:30:38,266 INFO [org.ovirt.engine.core.bll.RestoreAllSnapshotsCommand] (org.ovirt.thread.pool-6-thread-39) [59b16880] Lock Acquired to object EngineLock [exclusiveLocks= key: 02153b19-6cb9-4971-b653-ccef5e043a84 value: VM , sharedLocks= ]
2014-01-23 18:30:38,293 INFO [org.ovirt.engine.core.bll.RestoreAllSnapshotsCommand] (org.ovirt.thread.pool-6-thread-39) [59b16880] Running command: RestoreAllSnapshotsCommand internal: true. Entities affected : ID: 02153b19-6cb9-4971-b653-ccef5e043a84 Type: VM
2014-01-23 18:30:38,294 INFO [org.ovirt.engine.core.bll.RestoreAllSnapshotsCommand] (org.ovirt.thread.pool-6-thread-39) [59b16880] Locking VM(id = 02153b19-6cb9-4971-b653-ccef5e043a84) without compensation.
2014-01-23 18:30:38,301 INFO [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] (org.ovirt.thread.pool-6-thread-39) [59b16880] START, SetVmStatusVDSCommand( vmId = 02153b19-6cb9-4971-b653-ccef5e043a84, status = ImageLocked), log id: 6ca1af2
2014-01-23 18:30:38,304 INFO [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] (org.ovirt.thread.pool-6-thread-39) [59b16880] FINISH, SetVmStatusVDSCommand, log id: 6ca1af2
2014-01-23 18:30:38,637 INFO [org.ovirt.engine.core.bll.RestoreAllSnapshotsCommand] (org.ovirt.thread.pool-6-thread-39) [59b16880] Lock freed to object EngineLock [exclusiveLocks= key: 02153b19-6cb9-4971-b653-ccef5e043a84 value: VM , sharedLocks= ]
2014-01-23 18:30:38,644 INFO [org.ovirt.engine.core.vdsbroker.DestroyVmVDSCommand] (org.ovirt.thread.pool-6-thread-39) [59b16880] FINISH, DestroyVmVDSCommand, return: Down, log id: 77d621e9
2014-01-23 18:30:38,664 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-39) [59b16880] Correlation ID: 6940f4da, Job ID: ad0a096e-9c41-4ee2-bcef-323ee7057ed3, Call Stack: null, Custom Event ID: -1, Message: VM 33vm powered off by admin (Host: gold-vdsd.qa.lab.tlv.redhat.com).
```
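To confirm this from the engine side, one could check whether any image-removal VDS call follows the restore. A minimal sketch, assuming the default engine log location; the `DeleteImage` pattern is an assumption about how a snapshot-image removal would appear in the log, not a string taken from this report:

```bash
# Sketch: inspect engine.log around a stateless power-off and check whether
# any image-deletion call to VDSM follows RestoreStatelessVmCommand.
# The log path is the default engine location; the 'DeleteImage' pattern is
# an assumption about what an image-removal VDS command would log.
LOG=/var/log/ovirt-engine/engine.log

# The internal restore runs...
grep 'RestoreStatelessVmCommand' "$LOG"

# ...but on an affected engine no matching image-removal command appears
# afterwards, so the old stateless snapshot LVs are never deleted:
grep -i 'DeleteImage' "$LOG" || echo "no image removal was sent to VDSM"
```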
Setting target release to current version for consideration and review. Please do not push non-RFE bugs to an undefined target release, to make sure bugs are reviewed for relevancy, fix, closure, etc.

3.4.0-0.7.beta2.el6. Fixed.

This is an automated message: moving to CLOSED CURRENTRELEASE since oVirt 3.4.0 has been released.
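When verifying the fix, the same `lvs` count used in the report above makes a quick check. A sketch, reusing the VG name (the storage domain UUID) from this report; substitute your own:

```bash
# Sketch: on a fixed engine (ovirt-3.4.0-beta2 or later), the LV count on
# the storage-domain VG should stay constant across stateless power cycles.
# VG name taken from the report above; substitute your own storage domain UUID.
SD_VG=26822e47-465f-4378-bd11-1be48261ad3f

before=$(lvs "$SD_VG" -o lv_name --noheadings | wc -l)
# ...power the stateless VM off and back on, then recount:
after=$(lvs "$SD_VG" -o lv_name --noheadings | wc -l)

# On an affected engine 'after' is 'before' + 2; on a fixed one they match.
echo "before=$before after=$after"
```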
Created attachment 854468 [details]: vdsm and engine logs

Description of problem:
New files created inside the guest OS *after* a vm has been set to stateless are preserved after powering the vm off and back on. When powering the vm back on, the disks remain locked for a few seconds and new active leaf volumes appear to be created.

Version-Release number of selected component (if applicable):
ovirt-engine-3.4.0-0.5.beta1.el6.noarch

How reproducible:
100%

Steps to Reproduce:
1. Create a vm and install an OS on it
2. Power the vm off and set it to stateless
3. Power it on and create a new file
4. Power the vm off and back on

Actual results:
The new file is still there after the power cycle of the vm.

Expected results:
The new file should not persist after powering the vm off and back on, due to the stateless setting.

Additional info:
vdsm + engine logs attached (a scripted sketch of the power-cycle step follows below)
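For reference, a scripted sketch of the power-cycle portion of the reproduction, against the 3.x-era REST API. The engine URL and password are placeholders, and the in-guest marker file is hypothetical; the VM id is taken from the logs above:

```bash
# Sketch of the reproduction's power cycle via the 3.x-era REST API.
# ENGINE and PASSWORD are placeholders; VM_ID is the vm from the logs above.
ENGINE=https://engine.example.com
VM_ID=02153b19-6cb9-4971-b653-ccef5e043a84

# Inside the guest beforehand:  touch /root/stateless-marker

# Power the stateless VM off and back on:
curl -k -u admin@internal:PASSWORD -H 'Content-Type: application/xml' \
     -d '<action/>' "$ENGINE/api/vms/$VM_ID/stop"
curl -k -u admin@internal:PASSWORD -H 'Content-Type: application/xml' \
     -d '<action/>' "$ENGINE/api/vms/$VM_ID/start"

# Back inside the guest the marker should be gone; with this bug it is
# still present:
#   test -f /root/stateless-marker && echo "BUG: file survived power cycle"
```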