+++ This bug is a downstream clone. The original bug is: +++
+++   bug 1454425 +++
======================================================================

Description of problem:

Both 'execCmd()' and 'watchCmd()' in '/usr/lib/python2.7/site-packages/vdsm/commands.py' use debug logging to display the command line.

We've seen that it can be important to determine whether the 'dd' command that performs a 'wipe-after-delete' completes or not, e.g. in BZ 1451423. However, as 'debug' is not the default logging level in RHV 4.1, we cannot tell if/when a 'wipe-after-delete' started and/or completed.

Version-Release number of selected component (if applicable):
RHV 4.1

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:

(Originally by Gordon Watson)
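To make the reported behaviour concrete, here is a minimal, self-contained sketch (not vdsm code; the logger name and helper function are illustrative) of why a command logged only at DEBUG leaves no trace when the handlers are configured at the default INFO level:

import logging
import subprocess

logging.basicConfig(level=logging.INFO)   # default level in RHV 4.1 is INFO, not DEBUG
log = logging.getLogger("storage.Misc")   # hypothetical logger name

def exec_cmd(command):
    # With the default INFO level both DEBUG records below are filtered out,
    # so the log shows neither the start nor the completion of the command.
    log.debug("Executing %s", command)
    rc = subprocess.call(command)
    log.debug("Command finished rc=%s", rc)
    return rc

# A wipe-after-delete implemented via a 'dd' command run through such a
# helper therefore leaves no start/finish record in vdsm.log.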
Offhand, I'm fine with logging this at INFO level. Nir - your two cents? (Originally by Allon Mureinik)
Nir is already working on it in this patch: https://gerrit.ovirt.org/#/c/69782/18 (Originally by Idan Shaby)
(In reply to Allon Mureinik from comment #3)
> Offhand, I'm fine with logging this at INFO level.
> Nir - your two cents?

I agree, zeroing volumes must be logged at INFO level.

Patch https://gerrit.ovirt.org/69782/ is too big for 4.1; I'll post a minimal patch adding logging.

(Originally by Nir Soffer)
Example logs with https://gerrit.ovirt.org/69782:

Delete volume after live merge:

1. Starting delete volume task

2017-05-28 13:08:35,910+0300 INFO (jsonrpc/5) [vdsm.api] START deleteVolume(sdUUID=u'aed577ea-d1ca-4ebe-af80-f852c7ce59bb', spUUID=u'6c99f4e5-8588-46f5-a818-e11151c1d19c', imgUUID=u'472dc23e-b2eb-45ef-967b-5749b8c0ca5c', volumes=[u'c21d1695-8075-4ccb-87c2-37d1920d5990'], postZero=u'true', force=u'false', discard=False) from=::ffff:10.35.0.111,45886 (api:46)
2017-05-28 13:08:35,938+0300 INFO (jsonrpc/5) [vdsm.api] FINISH deleteVolume return=None from=::ffff:10.35.0.111,45886 (api:52)

2. Actual delete volume task

2017-05-28 13:08:35,959+0300 INFO (tasks/2) [storage.ThreadPool.WorkerThread] START task 1e5f86cc-a483-439b-92fd-704fe800abdb (cmd=<bound method Task.commit of <vdsm.storage.task.Task instance at 0x2372a70>>, args=None) (threadPool:208)
2017-05-28 13:08:36,516+0300 INFO (tasks/2) [storage.blockdev] Zeroing device /rhev/data-center/mnt/blockSD/aed577ea-d1ca-4ebe-af80-f852c7ce59bb/images/472dc23e-b2eb-45ef-967b-5749b8c0ca5c/c21d1695-8075-4ccb-87c2-37d1920d5990 (size=1073741824) (blockdev:81)
2017-05-28 13:08:52,629+0300 INFO (tasks/2) [storage.blockdev] Zero device /rhev/data-center/mnt/blockSD/aed577ea-d1ca-4ebe-af80-f852c7ce59bb/images/472dc23e-b2eb-45ef-967b-5749b8c0ca5c/c21d1695-8075-4ccb-87c2-37d1920d5990: 16.12 seconds (utils:705)
2017-05-28 13:08:52,945+0300 INFO (tasks/2) [storage.LVM] Removing LVs (vg=aed577ea-d1ca-4ebe-af80-f852c7ce59bb, lvs=[u'c21d1695-8075-4ccb-87c2-37d1920d5990']) (lvm:1185)
2017-05-28 13:08:53,014+0300 INFO (tasks/2) [storage.ThreadPool.WorkerThread] FINISH task 1e5f86cc-a483-439b-92fd-704fe800abdb (threadPool:210)

Remove disk:

1. Starting task

2017-05-28 13:13:37,806+0300 INFO (jsonrpc/3) [vdsm.api] START deleteImage(sdUUID=u'aed577ea-d1ca-4ebe-af80-f852c7ce59bb', spUUID=u'6c99f4e5-8588-46f5-a818-e11151c1d19c', imgUUID=u'9f479802-e8a0-4fec-a626-9cfd48e7efaa', postZero=u'true', force=u'false', discard=False) from=::ffff:10.35.0.111,45886 (api:46)
2017-05-28 13:13:37,900+0300 INFO (jsonrpc/3) [vdsm.api] FINISH deleteImage return=None from=::ffff:10.35.0.111,45886 (api:52)

2. Actual delete task

2017-05-28 13:13:37,918+0300 INFO (tasks/2) [storage.ThreadPool.WorkerThread] START task 89af01ef-c2d9-41c4-803a-e3ee3e658a82 (cmd=<bound method Task.commit of <vdsm.storage.task.Task instance at 0x37f4290>>, args=None) (threadPool:208)
2017-05-28 13:13:38,229+0300 INFO (tmap/0) [storage.blockdev] Zeroing device /dev/aed577ea-d1ca-4ebe-af80-f852c7ce59bb/df894953-7510-4f55-bce5-06dae39bc448 (size=1073741824) (blockdev:81)
2017-05-28 13:13:55,749+0300 INFO (tmap/0) [storage.blockdev] Zero device /dev/aed577ea-d1ca-4ebe-af80-f852c7ce59bb/df894953-7510-4f55-bce5-06dae39bc448: 17.52 seconds (utils:705)
2017-05-28 13:13:55,749+0300 INFO (tmap/0) [storage.LVM] Removing LVs (vg=aed577ea-d1ca-4ebe-af80-f852c7ce59bb, lvs=['df894953-7510-4f55-bce5-06dae39bc448']) (lvm:1185)
2017-05-28 13:13:55,749+0300 WARN (tmap/0) [storage.LVM] Removing active volume aed577ea-d1ca-4ebe-af80-f852c7ce59bb/df894953-7510-4f55-bce5-06dae39bc448 (lvm:1192)
2017-05-28 13:13:55,853+0300 INFO (tasks/2) [storage.ThreadPool.WorkerThread] FINISH task 89af01ef-c2d9-41c4-803a-e3ee3e658a82 (threadPool:210)

(Originally by Nir Soffer)
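The 'Zeroing device ... (size=...)' and 'Zero device ...: N.NN seconds' lines above are the new INFO-level messages emitted around the zero operation. A rough sketch of the idea (the function and helper names are illustrative, not the actual patch):

import logging
import time

log = logging.getLogger("storage.blockdev")   # logger name as seen in the log lines above

def _zero_data(device_path, size):
    # Placeholder for the real work (e.g. running a 'dd' command); no-op here.
    pass

def zero(device_path, size):
    # Log at INFO before and after, so the default log level shows when the
    # wipe started, when it finished, and how long it took.
    log.info("Zeroing device %s (size=%s)", device_path, size)
    start = time.time()
    _zero_data(device_path, size)
    log.info("Zero device %s: %.2f seconds", device_path, time.time() - start)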
Moving back to POST since we need to backport this to 4.1.3. (Originally by Nir Soffer)
Backporting the fix from 4.2 is not possible; it depends on too many new modules. I will post a 4.1-only patch adding logging instead.
Backporting https://gerrit.ovirt.org/77631 could be a good idea, since we would like to have this infrastructure in 4.1 anyway. So I would add only the logging part from https://gerrit.ovirt.org/77633 to the old code, in 3 places (since the old code is duplicated).
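Because the same zeroing code exists in three places in the 4.1 tree, the timing part could be factored into a small reusable helper; a hypothetical sketch (not the content of the gerrit changes above):

import logging
import time
from contextlib import contextmanager

log = logging.getLogger("storage.blockdev")

@contextmanager
def stopwatch(message, level=logging.INFO):
    # Log how long the wrapped block took, at INFO so it is visible with the
    # default configuration; the same few lines can wrap each duplicated site.
    start = time.time()
    try:
        yield
    finally:
        log.log(level, "%s: %.2f seconds", message, time.time() - start)

# Usage at each duplicated zeroing site (run_dd is a hypothetical wrapper):
#
#     with stopwatch("Zero device %s" % path):
#         run_dd(path)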
INFO: Bug status wasn't changed from MODIFIED to ON_QA due to the following reason:

[Tag 'v4.19.21' doesn't contain patch 'https://gerrit.ovirt.org/78074']

gitweb: https://gerrit.ovirt.org/gitweb?p=vdsm.git;a=shortlog;h=refs/tags/v4.19.21

For more info please contact: rhv-devops
This patch was included in 4.19.20:

amureini@mureinik ~/src/git/vdsm [master] $ git log -n1 60a1d10b8128ec2b0c71fa586874328a82f5081d
commit 60a1d10b8128ec2b0c71fa586874328a82f5081d
Author: Idan Shaby <ishaby>
Date:   Mon Jun 12 11:13:17 2017 +0300

    storage: volume zeroing logs

    This patch adds info logs before and after zeroing volumes.

    Change-Id: Ia73cd48399efbcb00bdc6637936db80d299cedb2
    Bug-Url: https://bugzilla.redhat.com/1457739
    Signed-off-by: Idan Shaby <ishaby>

amureini@mureinik ~/src/git/vdsm [master] $ git tag --contains 60a1d10b8128ec2b0c71fa586874328a82f5081d
v4.19.20
v4.19.21
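For completeness, the same tag check can be scripted; a small sketch that only wraps the git commands already shown above:

import subprocess

COMMIT = "60a1d10b8128ec2b0c71fa586874328a82f5081d"

# List the tags that already contain the fix (equivalent to the manual
# 'git tag --contains' check above).
tags = subprocess.check_output(
    ["git", "tag", "--contains", COMMIT]).decode().split()
print("v4.19.20" in tags)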
I've added it to the ET; it should move automatically to ON_QA soon. If not, we'll move it to ON_QA manually shortly.
Verified.

Engine: 4.1.4-0.2.el7
VDSM: 4.19.20-1
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2017:1815