Bug 911209 - vdsm: vm's sent with wipe after delete in NFS storage will not be removed from domain
Status: CLOSED ERRATA
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
Version: unspecified
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 3.2.0
Assigned To: Eduardo Warszawski
QA Contact: Elad
Whiteboard: storage
Keywords: Regression
Depends On:
Blocks:
Reported: 2013-02-14 10:28 EST by Dafna Ron
Modified: 2016-02-10 12:17 EST
CC List: 14 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
--no tech note required
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-06-10 16:40:34 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
scohen: Triaged+


Attachments
logs (610.91 KB, application/x-gzip), 2013-02-14 10:28 EST, Dafna Ron


External Trackers
Tracker ID                             Priority  Status        Summary                                                 Last Updated
oVirt gerrit 12404                     None      None          None                                                    Never
Red Hat Product Errata RHSA-2013:0886  normal    SHIPPED_LIVE  Moderate: rhev 3.2 - vdsm security and bug fix update   2013-06-10 20:25:02 EDT

Description Dafna Ron 2013-02-14 10:28:45 EST
Created attachment 697257 [details]
logs

Description of problem:

I imported a VM that was created on iSCSI storage with wipe=true into an NFS domain.
When I tried to remove the VM from the NFS domain, vdsm reported an error and the image was not removed from the domain.

Version-Release number of selected component (if applicable):

vdsm-4.10.2-1.4.el6.x86_64

How reproducible:

100%

Steps to Reproduce:
1. create a VM in an iSCSI DC with wipe=true
2. export the VM
3. import the VM into an NFS DC
4. remove the VM from the setup (a hypothetical SDK sketch of these steps follows below)
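
A rough script of these steps against the 3.x oVirt Python SDK. This is a sketch only: the engine URL, credentials, and every entity name are invented, the SDK calls are recalled rather than verified, and the waits needed between the asynchronous steps are omitted.

# Hypothetical reproduction script; all names and SDK calls are assumptions.
from ovirtsdk.api import API
from ovirtsdk.xml import params

api = API(url='https://engine.example.com/api',        # hypothetical engine
          username='admin@internal', password='secret', insecure=True)

# 1. Create a VM in the iSCSI DC with a wipe-after-delete disk.
vm = api.vms.add(params.VM(name='wipe_vm',
                           cluster=api.clusters.get('iscsi-cluster'),
                           template=api.templates.get('Blank')))
vm.disks.add(params.Disk(
    size=1 * 1024 ** 3, interface='virtio', format='cow',
    wipe_after_delete=True,
    storage_domains=params.StorageDomains(
        storage_domain=[api.storagedomains.get('iscsi-data')])))

# 2. Export the VM to the export domain.
vm.export(params.Action(storage_domain=api.storagedomains.get('export')))

# 3. Import it into the NFS DC from the export domain's VM listing.
exported = api.storagedomains.get('export').vms.get('wipe_vm')
exported.import_vm(params.Action(
    cluster=api.clusters.get('nfs-cluster'),
    storage_domain=api.storagedomains.get('nfs-data')))

# 4. Remove the imported VM; before the fix this left the image behind.
api.vms.get('wipe_vm').delete()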
  
Actual results:

we are failing to remove the image

Expected results:

we should remove the image

Additional info: logs

ageDomain.zeroImage of <storage.nfsSD.NfsStorageDomain instance at 0x7fceb025b290>> (args: ('36c9b553-5da3-483c-8f29-7b60880c1548', 'dd474627-6829-4375-bd39-3c7d06389789', ['6354af8a-20b1-4f80-8890-d5cf9c689b90']) kwargs: {}) callback None
dac9423a-4cdf-40fa-88f4-8559fbd64da9::ERROR::2013-02-14 14:53:37,801::task::833::TaskManager.Task::(_setError) Task=`dac9423a-4cdf-40fa-88f4-8559fbd64da9`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 840, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/storage/task.py", line 307, in run
    return self.cmd(*self.argslist, **self.argsdict)
  File "/usr/share/vdsm/storage/fileSD.py", line 354, in zeroImage
    "fileSD %s should not be zeroed." % (imgUUID, sdUUID))
SourceImageActionError: Error during source image manipulation: 'image=dd474627-6829-4375-bd39-3c7d06389789, source domain=36c9b553-5da3-483c-8f29-7b60880c1548: image dd474627-6829-4375-bd39-3c7d06389789 on a fileSD 36c9b553-5da3-483c-8f29-7b60880c1548 should not be zeroed.'
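
For context, the guard that raises here lives in fileSD.py's zeroImage. A reconstruction from the traceback above (paraphrased, not the verbatim vdsm source; the exception import path is assumed):

# Reconstructed from the traceback; not the verbatim vdsm source.
from storage import storage_exception as se  # assumed vdsm import path

def zeroImage(self, sdUUID, imgUUID, volsImgs):
    # File domains refuse the zero operation outright; this raise is what
    # aborts the whole deleteImage flow and leaves the image behind.
    raise se.SourceImageActionError(
        imgUUID, sdUUID,
        "image %s on a fileSD %s should not be zeroed." % (imgUUID, sdUUID))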


engine: 

2013-02-14 16:45:04,728 INFO  [org.ovirt.engine.core.bll.RemoveImageCommand] (pool-3-thread-46) [66f7bf50] Running command: RemoveImageCommand internal: true. Entities affected :  ID: 00000000-0000-0000-0000-000000000000 Type: Storage
2013-02-14 16:45:04,752 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (pool-3-thread-46) [66f7bf50] START, DeleteImageGroupVDSCommand( storagePoolId = 851d32be-c533-4655-bd23-4157fcd9e548, ignoreFailoverLimit = false, compatabilityVersion = 3.1, storageDomainId = 36c9b553-5da3-483c-8f29-7b60880c1548, imageGroupId = dd474627-6829-4375-bd39-3c7d06389789, postZeros = true, forceDelete = false), log id: 7bb3205b

2013-02-14 16:45:13,167 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (QuartzScheduler_Worker-4) [1468c4bc] Error code SourceImageActionError and error message VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = Error during source image manipulation
2013-02-14 16:45:13,167 INFO  [org.ovirt.engine.core.bll.SPMAsyncTask] (QuartzScheduler_Worker-4) [1468c4bc] SPMAsyncTask::PollTask: Polling task dac9423a-4cdf-40fa-88f4-8559fbd64da9 (Parent Command RemoveVm, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) returned status finished, result 'cleanSuccess'.
2013-02-14 16:45:13,385 ERROR [org.ovirt.engine.core.bll.SPMAsyncTask] (QuartzScheduler_Worker-4) [1468c4bc] BaseAsyncTask::LogEndTaskFailure: Task dac9423a-4cdf-40fa-88f4-8559fbd64da9 (Parent Command RemoveVm, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) ended with failure:
-- Result: cleanSuccess
-- Message: VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = Error during source image manipulation,
-- Exception: VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = Error during source image manipulation
2013-02-14 16:45:13,386 INFO  [org.ovirt.engine.core.bll.EntityAsyncTask] (QuartzScheduler_Worker-4) [1468c4bc] EntityAsyncTask::EndActionIfNecessary: All tasks of entity 15ad8b97-5b6b-4f41-bf94-693625b065cb has ended -> executing EndAction


image on the domain after the remove: 

[root@orion images]# ls -l
total 4
drwxr-xr-x 2 vdsm kvm 4096 Feb 14 16:42 dd474627-6829-4375-bd39-3c7d06389789
[root@orion images]# pwd
/export/Dafna/data/36c9b553-5da3-483c-8f29-7b60880c1548/images
[root@orion images]#
Comment 3 Eduardo Warszawski 2013-02-20 11:40:52 EST
The engine should reset the wipe-after-delete flag when creating (copying, exporting, or importing) a disk on an NFS domain.
IMHO, if the disk is imported into a block domain, the flag should be set by the user.

In any case, the engine should not send the zero-image flag when deleting images on fileSDs.

Workaround:
Edit the disk on the fileSD and reset the wipe-after-delete flag before deleting the disk/VM (a sketch follows below).
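
A minimal sketch of that workaround with the 3.x Python SDK (the engine URL, credentials, disk name, and the generated setter are assumptions, not verified calls):

from ovirtsdk.api import API

api = API(url='https://engine.example.com/api',   # hypothetical engine
          username='admin@internal', password='secret', insecure=True)

# Clear wipe-after-delete on the disk before deleting the disk/VM.
disk = api.disks.get(name='imported_disk')        # hypothetical disk name
disk.set_wipe_after_delete(False)                 # setter assumed from the 3.x SDK
disk.update()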
Comment 4 Allon Mureinik 2013-02-21 03:04:57 EST
The flag should be reset when importing; moving to engine-backend.
Comment 5 Eduardo Warszawski 2013-02-21 03:39:34 EST
If the flag is going to be reset on import, it makes no sense to set it on export.
This assumes that export domains are fileSDs only.
Comment 6 Daniel Erez 2013-02-25 06:26:17 EST
As discussed, this should be fixed on the VDSM side (by ignoring the value instead of throwing an exception).
Comment 9 Eduardo Warszawski 2013-02-25 07:28:07 EST
(In reply to comment #6)
> As discussed, this should be fixed on the VDSM side (by ignoring the value
> instead of throwing an exception).

Remark: ignoring a parameter is very bad practice, but we will continue to do so, because fixing the engine properly is very hard; this stands until the day of a real fix.
Comment 10 Eduardo Warszawski 2013-02-25 08:59:24 EST
http://gerrit.ovirt.org/12404
Comment 13 Eduardo Warszawski 2013-02-28 01:40:46 EST
Shu Ming asked:
> Can you update the bugzilla to explain the postZero flag? Is it a flag passed from engine to HSM? or a native property in the image or volume?

The engine's disk wipe property is translated into the postZero flag when the engine invokes deleteImage on vdsm.

On block domains, postZero means that all the volumes that compose the image are overwritten with zeros once before the volume is removed.
The point is that a new VM that reuses the same blocks cannot directly read the previous contents.
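
A minimal illustration (not vdsm's actual implementation) of what that zeroing amounts to for one volume of the image:

# Sketch only: overwrite a block volume with zeros once before removal,
# so a new VM reusing the same blocks cannot read the old contents.
import os

CHUNK = 1024 * 1024  # write 1 MiB of zeros at a time

def zero_volume(path, size):
    """Overwrite `size` bytes of the volume at `path` with zeros."""
    buf = b'\0' * CHUNK
    with open(path, 'wb') as dev:
        written = 0
        while written < size:
            n = min(CHUNK, size - written)
            dev.write(buf[:n])
            written += n
        dev.flush()
        os.fsync(dev.fileno())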

On NFS domains, the server itself prevents the old content from being exposed to the user.
It was therefore decided that the zero operation would not be supported by vdsm on fileSDs.

In order to avoid scenarios like the one described in this bug, vdsm was required to ignore this input flag during a deleteImage operation on fileSDs, silently removing the files without zeroing them first.
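
In sketch form (an illustration of the agreed behavior, not the actual gerrit 12404 patch), the fileSD deleteImage path now looks roughly like:

# Illustration only; the directory layout matches the listing shown earlier
# in this bug (<domain>/images/<imgUUID>), but the code itself is assumed.
import logging
import os
import shutil

log = logging.getLogger('Storage.fileSD')

def deleteImage(sdRoot, sdUUID, imgUUID, postZero=False):
    if postZero:
        # The NFS server already keeps old content from reaching a new
        # consumer, so the flag is logged and ignored instead of raising.
        log.warning("postZero ignored for image %s on fileSD %s",
                    imgUUID, sdUUID)
    imgPath = os.path.join(sdRoot, sdUUID, 'images', imgUUID)
    shutil.rmtree(imgPath)  # silently remove the files without zeroing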
Comment 18 Cheryn Tan 2013-04-03 03:01:33 EDT
This bug is currently attached to errata RHBA-2012:14332. If this change is not to be documented in the text for this errata, please either remove it from the errata, set the requires_doc_text flag to minus (-), or leave a "Doc Text" value of "--no tech note required" if you do not have permission to alter the flag.

Otherwise to aid in the development of relevant and accurate release documentation, please fill out the "Doc Text" field above with these four (4) pieces of information:

* Cause: What actions or circumstances cause this bug to present.

* Consequence: What happens when the bug presents.

* Fix: What was done to fix the bug.

* Result: What now happens when the actions or circumstances above occur. (NB: this is not the same as 'the bug doesn't present anymore')

Once filled out, please set the "Doc Type" field to the appropriate value for the type of change made and submit your edits to the bug.

For further details on the Cause, Consequence, Fix, Result format please refer to:

https://bugzilla.redhat.com/page.cgi?id=fields.html#cf_release_notes

Thanks in advance.
Comment 19 Elad 2013-04-04 04:34:06 EDT
Checked on RHEVM-3.2 - SF12:
rhevm-3.2.0-10.17.master.el6ev.noarch
vdsm-4.10.2-13.0.el6ev.x86_64
libvirt-0.10.2-18.el6_4.2.x86_64
qemu-kvm-rhev-0.12.1.2-2.348.el6.x86_64

The image with wipe=true was removed from the setup after the VM was imported into the NFS DC.
Comment 20 Eduardo Warszawski 2013-04-08 06:52:21 EDT
--no tech note required
Comment 22 errata-xmlrpc 2013-06-10 16:40:34 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2013-0886.html
