Bug 1454425 - No meaningful vdsm logging for 'wipe-after-delete' in RHV 4.1
Status: CLOSED ERRATA
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
Version: 4.1.1
Hardware: Unspecified
OS: Linux
Priority: medium
Severity: high
Target Milestone: ovirt-4.2.0
Target Release: ---
Assigned To: Nir Soffer
QA Contact: Avihai
Keywords: ZStream
Depends On:
Blocks: 1425219 1457739
 
Reported: 2017-05-22 13:20 EDT by Gordon Watson
Modified: 2018-05-15 13:53 EDT
CC List: 8 users

See Also:
Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Story Points: ---
Clone Of:
: 1457739 (view as bug list)
Environment:
Last Closed: 2018-05-15 13:51:57 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments


External Trackers
Tracker ID Priority Status Summary Last Updated
oVirt gerrit 69782 master MERGED blockdev: Introduce the blockdev module 2017-05-29 17:36 EDT
Red Hat Product Errata RHEA-2018:1489 None None None 2018-05-15 13:53 EDT

Description Gordon Watson 2017-05-22 13:20:41 EDT
Description of problem:

Both 'execCmd()' and 'watchCmd()' in '/usr/lib/python2.7/site-packages/vdsm/commands.py' use debug logging to display the command line.  

We've seen that it can be important to determine if the 'dd' command that performs a 'wipe-after-delete' completes or not, e.g. in BZ 1451423. However, as 'debug' is not the default logging option in RHV 4.1, we cannot tell if/when a 'wipe-after-delete' started and/or completed.
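To illustrate the gap: with vdsm's default root logger at INFO, a DEBUG record of the dd command line is simply dropped, so nothing about the wipe appears in vdsm.log. A minimal standalone Python sketch of that behaviour (the logger name and dd command line below are illustrative only, not vdsm code):

import logging

logging.basicConfig(level=logging.INFO)   # vdsm defaults to INFO, not DEBUG
log = logging.getLogger("storage.Misc")   # illustrative logger name

# Example dd command line only; the real one is built by vdsm at runtime.
cmd = ["/usr/bin/dd", "if=/dev/zero", "of=/dev/vg/lv", "bs=1048576", "oflag=direct"]

log.debug("Running command %s", cmd)  # dropped at the default INFO level (current behaviour)
log.info("Running command %s", cmd)   # visible at the default INFO level (what this bug asks for)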


Version-Release number of selected component (if applicable):

RHV 4.1


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:
Comment 3 Allon Mureinik 2017-05-25 10:27:13 EDT
Offhand, I'm fine with logging this at INFO level.
Nir - your two cents?
Comment 5 Idan Shaby 2017-05-28 00:40:50 EDT
Nir is already working on it in this patch:
https://gerrit.ovirt.org/#/c/69782/18
Comment 6 Nir Soffer 2017-05-28 05:09:27 EDT
(In reply to Allon Mureinik from comment #3)
> Offhand, I'm fine with logging this at INFO level.
> Nir - your two cents?

I agree, zeroing volumes must be logged using INFO level.

Patch https://gerrit.ovirt.org/69782/ is too big for 4.1; I'll post a minimal patch
adding logging.
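For illustration only, a rough sketch of what minimal INFO-level logging around the zeroing step could look like. The function name zero_device and the dd invocation are hypothetical, not the actual patch; see https://gerrit.ovirt.org/69782 for the real change.

import logging
import subprocess
import time

log = logging.getLogger("storage.blockdev")  # matches the logger name in the example logs below

def zero_device(path, size):
    # Log start and elapsed time at INFO so the wipe is visible with default
    # logging, then zero the device with dd, as wipe-after-delete does.
    log.info("Zeroing device %s (size=%d)", path, size)
    start = time.time()
    subprocess.check_call([
        "/usr/bin/dd", "if=/dev/zero", "of=" + path,
        "bs=1048576", "count=%d" % (size // 1048576), "oflag=direct",
    ])
    log.info("Zero device %s: %.2f seconds", path, time.time() - start)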
Comment 7 Nir Soffer 2017-05-28 07:10:35 EDT
Example logs with https://gerrit.ovirt.org/69782:


Delete volume after live merge:

1. Starting delete volume task

2017-05-28 13:08:35,910+0300 INFO  (jsonrpc/5) [vdsm.api] START deleteVolume(sdUUID=u'aed577ea-d1ca-4ebe-af80-f852c7ce59bb', spUUID=u'6c99f4e5-8588-46f5-a818-e11151c1d19c', imgUUID=u'472dc23e-b2eb-45ef-967b-5749b8c0ca5c', volumes=[u'c21d1695-8075-4ccb-87c2-37d1920d5990'], postZero=u'true', force=u'false', discard=False) from=::ffff:10.35.0.111,45886 (api:46)
2017-05-28 13:08:35,938+0300 INFO  (jsonrpc/5) [vdsm.api] FINISH deleteVolume return=None from=::ffff:10.35.0.111,45886 (api:52)

2. Actual delete volume task

2017-05-28 13:08:35,959+0300 INFO  (tasks/2) [storage.ThreadPool.WorkerThread] START task 1e5f86cc-a483-439b-92fd-704fe800abdb (cmd=<bound method Task.commit of <vdsm.storage.task.Task instance at 0x2372a70>>, args=None) (threadPool:208)

2017-05-28 13:08:36,516+0300 INFO  (tasks/2) [storage.blockdev] Zeroing device /rhev/data-center/mnt/blockSD/aed577ea-d1ca-4ebe-af80-f852c7ce59bb/images/472dc23e-b2eb-45ef-967b-5749b8c0ca5c/c21d1695-8075-4ccb-87c2-37d1920d5990 (size=1073741824) (blockdev:81)

2017-05-28 13:08:52,629+0300 INFO  (tasks/2) [storage.blockdev] Zero device /rhev/data-center/mnt/blockSD/aed577ea-d1ca-4ebe-af80-f852c7ce59bb/images/472dc23e-b2eb-45ef-967b-5749b8c0ca5c/c21d1695-8075-4ccb-87c2-37d1920d5990: 16.12 seconds (utils:705)

2017-05-28 13:08:52,945+0300 INFO  (tasks/2) [storage.LVM] Removing LVs (vg=aed577ea-d1ca-4ebe-af80-f852c7ce59bb, lvs=[u'c21d1695-8075-4ccb-87c2-37d1920d5990']) (lvm:1185)

2017-05-28 13:08:53,014+0300 INFO  (tasks/2) [storage.ThreadPool.WorkerThread] FINISH task 1e5f86cc-a483-439b-92fd-704fe800abdb (threadPool:210)


Remove disk:

1. Starting task

2017-05-28 13:13:37,806+0300 INFO  (jsonrpc/3) [vdsm.api] START deleteImage(sdUUID=u'aed577ea-d1ca-4ebe-af80-f852c7ce59bb', spUUID=u'6c99f4e5-8588-46f5-a818-e11151c1d19c', imgUUID=u'9f479802-e8a0-4fec-a626-9cfd48e7efaa', postZero=u'true', force=u'false', discard=False) from=::ffff:10.35.0.111,45886 (api:46)

2017-05-28 13:13:37,900+0300 INFO  (jsonrpc/3) [vdsm.api] FINISH deleteImage return=None from=::ffff:10.35.0.111,45886 (api:52)

2. Actual delete task

2017-05-28 13:13:37,918+0300 INFO  (tasks/2) [storage.ThreadPool.WorkerThread] START task 89af01ef-c2d9-41c4-803a-e3ee3e658a82 (cmd=<bound method Task.commit of <vdsm.storage.task.Task instance at 0x37f4290>>, args=None) (threadPool:208)

2017-05-28 13:13:38,229+0300 INFO  (tmap/0) [storage.blockdev] Zeroing device /dev/aed577ea-d1ca-4ebe-af80-f852c7ce59bb/df894953-7510-4f55-bce5-06dae39bc448 (size=1073741824) (blockdev:81)

2017-05-28 13:13:55,749+0300 INFO  (tmap/0) [storage.blockdev] Zero device /dev/aed577ea-d1ca-4ebe-af80-f852c7ce59bb/df894953-7510-4f55-bce5-06dae39bc448: 17.52 seconds (utils:705)

2017-05-28 13:13:55,749+0300 INFO  (tmap/0) [storage.LVM] Removing LVs (vg=aed577ea-d1ca-4ebe-af80-f852c7ce59bb, lvs=['df894953-7510-4f55-bce5-06dae39bc448']) (lvm:1185)

2017-05-28 13:13:55,749+0300 WARN  (tmap/0) [storage.LVM] Removing active volume aed577ea-d1ca-4ebe-af80-f852c7ce59bb/df894953-7510-4f55-bce5-06dae39bc448 (lvm:1192)

2017-05-28 13:13:55,853+0300 INFO  (tasks/2) [storage.ThreadPool.WorkerThread] FINISH task 89af01ef-c2d9-41c4-803a-e3ee3e658a82 (threadPool:210)
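As a convenience, a small standalone sketch (not part of vdsm) that pulls only the storage.blockdev zeroing lines out of a vdsm.log given on the command line, assuming the INFO-level messages shown above are present:

import re
import sys

# Print only the storage.blockdev zeroing lines so wipe-after-delete activity
# (start and elapsed time) is easy to spot in a large vdsm.log.
pattern = re.compile(r"\[storage\.blockdev\] Zero")
with open(sys.argv[1]) as logfile:
    for line in logfile:
        if pattern.search(line):
            print(line.rstrip())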
Comment 8 Nir Soffer 2017-05-29 19:01:19 EDT
Moving back to POST since we need to backport this to 4.1.3.
Comment 10 Nir Soffer 2017-06-01 05:52:44 EDT
This is already fixed in 4.2.
Comment 11 Avihai 2017-07-05 05:06:34 EDT
Hi Allon,

The issue is verified on the latest 4.2 build, which is currently:

Engine:
4.2.0-0.0.master.20170702100738.git46a9f67.el7.centos

VDSM:
4.20.1-120.git28558d7

The NEEDINFO is about 4.1; should this also be merged to 4.1?
If so, I'll wait until it is merged and recheck on 4.1; if not, I'll move this to VERIFIED.
Comment 12 Nir Soffer 2017-07-05 07:41:26 EDT
(In reply to Avihai from comment #11)
> The NEEDINFO is about 4.1; should this also be merged to 4.1?
> If so, I'll wait until it is merged and recheck on 4.1; if not, I'll move this
> to VERIFIED.

The fix was backported to 4.1, so you can verify the 4.1 bug.
Comment 17 errata-xmlrpc 2018-05-15 13:51:57 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:1489
