Bug 1518676 - Entire vdsm process hangs when formatting xlease volume on NFS storage domain
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
Version: 4.1.6
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ovirt-4.2.1
Assignee: Nir Soffer
QA Contact: Kevin Alon Goldblatt
URL:
Whiteboard:
Depends On:
Blocks: 1527827
 
Reported: 2017-11-29 12:57 UTC by Marian Jankular
Modified: 2021-03-11 18:48 UTC
CC: 12 users

Fixed In Version: vdsm v4.20.13
Doc Type: Bug Fix
Doc Text:
Previously, VDSM was accessing NFS storage directly when performing lease operations. A bug in Python could cause the entire VDSM process to hang if the NFS storage was not responsive. VDSM could become unkillable (D state) for many hours, until the host was rebooted. Now, VDSM uses an external process to access NFS storage, so it can continue to function correctly even if the NFS storage becomes non-responsive, and can be restarted if needed.
Clone Of:
: 1527827 (view as bug list)
Environment:
Last Closed: 2018-05-15 17:52:46 UTC
oVirt Team: Storage
Target Upstream Version:
Embargoed:
lsvaty: testing_plan_complete-


Attachments
Script reproducing this issue (2.27 KB, text/plain)
2017-11-30 17:35 UTC, Nir Soffer


Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1520068 1 None None None 2021-01-20 06:05:38 UTC
Red Hat Bugzilla 1530072 0 unspecified CLOSED Vdsm can get into D state when checking disk type on non-responsive NFS server 2021-02-22 00:41:40 UTC
Red Hat Product Errata RHEA-2018:1489 0 None None None 2018-05-15 17:53:52 UTC
oVirt gerrit 85026 0 'None' MERGED xlease: Simplify DirectFile interface 2020-10-30 07:35:09 UTC
oVirt gerrit 85027 0 'None' MERGED xlease: Simplify DirectFile interface 2020-10-30 07:35:09 UTC
oVirt gerrit 85081 0 'None' MERGED tests: Add tests for xlease.DirectFile 2020-10-30 07:35:09 UTC
oVirt gerrit 85082 0 'None' MERGED xlease: Add InterruptibleDirectFile class 2020-10-30 07:35:09 UTC
oVirt gerrit 85121 0 'None' MERGED sd: Make external_leases_volume an instance method 2020-10-30 07:35:09 UTC
oVirt gerrit 85122 0 'None' MERGED fileSD: Use InterruptibleDirectFile backend 2020-10-30 07:35:26 UTC
oVirt gerrit 85124 0 'None' MERGED sd: Extract external_leases_backend class method 2020-10-30 07:35:11 UTC
oVirt gerrit 85220 0 'None' MERGED spec: Require python fix for hanging all threads 2020-10-30 07:35:11 UTC

Internal Links: 1520068 1530072

Description Marian Jankular 2017-11-29 12:57:53 UTC
Description of problem:
The SPM host gets "heartbeat exceeded" after an attempt to add an NFS storage domain with the wrong permissions

Version-Release number of selected component (if applicable):
rhv 4.1.7

How reproducible:


Steps to Reproduce:
1. Create an NFS share; do not set 36:36 (vdsm:kvm) as the owner
2. Try to add it to RHV as a storage domain


Actual results:
A "heartbeat exceeded" message from the SPM host

Expected results:
A warning about the permissions, while the SPM continues working

Additional info:

Comment 2 Allon Mureinik 2017-11-29 13:03:20 UTC
Marian - can we get the log from the SPM attached here too please?

Comment 3 Allon Mureinik 2017-11-29 13:03:50 UTC
Piotr/Nir, does this make any sense to you?

Comment 5 Marian Jankular 2017-11-29 13:14:54 UTC
added

Comment 6 Piotr Kliczewski 2017-11-29 14:47:43 UTC
Based on the logs I see that the last verb received by vdsm was:

2017-11-27 14:36:58,956+0300 INFO  (jsonrpc/3) [vdsm.api] START createStorageDomain(storageType=1, sdUUID=u'b129f735-3ca4-443d-b921-d875d214a79f', domainName=u'ExportData', typeSpecificArg=u'10.201.3.5:/volrhevexport', domClass=1, domVersion=u'4', options=None) from=::ffff:10.201.3.10,41412, flow_id=52ede2ae, task_id=3e1c51ef-0c93-4d29-bac9-45dab967121d (api:46)

At 2017-11-27 14:36:59,939+0300 there was the last log entry before vdsm was killed by sanlock.

Heartbeat exceeded occurred at 2017-11-27 14:37:23,391+03.

Interestingly, when vdsm started again the port was already in use:

error: [Errno 98] Address already in use

In such a situation heartbeat exceeded is expected with the current code base. The question is why vdsm had to be killed.

Comment 7 Nir Soffer 2017-11-29 15:41:34 UTC
(In reply to Piotr Kliczewski from comment #6)
> error: [Errno 98] Address already in use

Strange, we see this error in OST, when trying to start ovirt-imageio-proxy.
Maybe platform changed the behavior when processes are terminated/killed?

> In following situation hearbeat exceeded is expected with current code base.
> The question is why vdsm had to be killed.

If you look in sanlock log you will find the answer.

Basically, if you hold a storage lease (the spm holds the spm lease), and sanlock
failed to update the lease for 80 seconds, it will terminate you. If you fail 
to terminate after 40 seconds, it will kill you.

Comment 8 Yaniv Kaul 2017-11-29 18:52:08 UTC
(In reply to Nir Soffer from comment #7)
> (In reply to Piotr Kliczewski from comment #6)
> > error: [Errno 98] Address already in use
> 
> Strange, we see this error in OST, when trying to start ovirt-imageio-proxy.
> Maybe platform changed the behavior when processes are terminated/killed?
> 
> > In following situation hearbeat exceeded is expected with current code base.
> > The question is why vdsm had to be killed.
> 
> If you look in sanlock log you will find the answer.
> 
> Basically, if you hold a storage lease (the spm holds the spm lease), and
> sanlock
> failed to update the lease for 80 seconds, it will terminate you. If you
> fail 
> to terminate after 40 seconds, it will kill you.

Do we have any idea why it'd fail to update the lease? The NFS issue seems unrelated?

Comment 9 Nir Soffer 2017-11-29 19:23:52 UTC
Marian, can you add the sanlock log to the bug?

Comment 10 Nir Soffer 2017-11-29 21:03:58 UTC
(In reply to Yaniv Kaul from comment #8)
> Do we have any idea why it'd fail to update the lease? The NFS issue seems
> unrelateD?

We don't have the sanlock log, so we don't have any info yet. Sanlock killing vdsm
was only one possible cause.

Looking at the vdsm log, we don't see evidence that sanlock killed vdsm; we would expect to
see a message about signal 15 and a clean shutdown.

It looks like the entire vdsm process hung while formatting
the xlease volume:

1. Connecting to NFS server

2017-11-27 14:36:57,941+0300 INFO  (jsonrpc/5) [vdsm.api] START connectStorageServer(domType=1, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'id': u'591adb1c-1e9f-4da6-afa2-c35
9345468b1', u'connection': u'10.201.3.5:/volrhevexport', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'protocol_version': u'auto', u'password': '********', u'port': u''}], options=None) from=:
:ffff:10.201.3.10,41412, flow_id=52ede2ae, task_id=f06fa853-eb54-4b99-89c1-37e547e631da (api:46)
2017-11-27 14:36:57,947+0300 INFO  (jsonrpc/5) [vdsm.api] FINISH connectStorageServer return={'statuslist': [{'status': 0, 'id': u'591adb1c-1e9f-4da6-afa2-c359345468b1'}]} from=::ffff:10.201
.3.10,41412, flow_id=52ede2ae, task_id=f06fa853-eb54-4b99-89c1-37e547e631da (api:52)
2017-11-27 14:36:57,948+0300 INFO  (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call StoragePool.connectStorageServer succeeded in 0.01 seconds (__init__:539)

2. repoStats reporting that all domains are ok

2017-11-27 14:36:58,336+0300 INFO  (periodic/4) [vdsm.api] FINISH repoStats return={u'59bc3ccb-835a-41de-9ce7-479c88bc1d14': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000395123', 'lastCheck': '2.0', 'valid': True}, u'3bc91989-cb00-4f4f-81e6-3bf05ad29e60': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000381345', 'lastCheck': '6.6', 'valid': True}, u'7ddec480-92a0-405a-a549-f9065f8f81ef': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000202717', 'lastCheck': '6.0', 'valid': True}} from=internal, task_id=89f8f55d-0e5e-4fbd-818c-62e7fcdf4e96 (api:52)

3. Starting to create storage domain

2017-11-27 14:36:58,956+0300 INFO  (jsonrpc/3) [vdsm.api] START createStorageDomain(storageType=1, sdUUID=u'b129f735-3ca4-443d-b921-d875d214a79f', domainName=u'ExportData', typeSpecificArg=u'10.201.3.5:/volrhevexport', domClass=1, domVersion=u'4', options=None) from=::ffff:10.201.3.10,41412, flow_id=52ede2ae, task_id=3e1c51ef-0c93-4d29-bac9-45dab967121d (api:46)

4. Formatting the special xlease volume

2017-11-27 14:36:59,939+0300 INFO  (jsonrpc/3) [storage.xlease] Formatting index for lockspace u'b129f735-3ca4-443d-b921-d875d214a79f' (version=1) (xlease:647)


    [No logs after this call for 5 minutes and 42 seconds] 


5. Vdsm starts

2017-11-27 14:42:42,235+0300 INFO  (MainThread) [vds] (PID: 173575) I am the actual vdsm 4.19.36-1.el7ev nimbus02.int.teb.com.tr (3.10.0-693.5.2.el7.x86_64) (vdsm:145)


So it looks like the entire vdsm process was hung while writing the xlease volume index.

Formatting the index includes a few seek, write and fsync calls.
If the storage is not accessible during these operations, these
calls can hang, but they cannot cause the entire process to hang
unless there is a bug in the underlying packages.

Looking at upstream python, both io.FileIO() (used for writing
and seeking) and os.fsync() look correct: they release the GIL
during the blocking call, so they cannot cause the python process
to hang.
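
For reference, the sequence in question is roughly of this shape. This is a
simplified sketch; the index size, the offset handling and the use of mmap
for buffer alignment here are assumptions for illustration, not the actual
xlease code:

    import io
    import mmap
    import os

    INDEX_SIZE = 256 * 1024   # illustrative size only

    def format_index(path, offset=0):
        # Open the volume for direct I/O. Opening with io.FileIO() is itself
        # a blocking call on a non-responsive NFS server.
        fd = os.open(path, os.O_RDWR | os.O_DIRECT)
        f = io.FileIO(fd, "r+", closefd=True)
        try:
            # Direct I/O needs an aligned buffer; mmap returns page-aligned memory.
            buf = mmap.mmap(-1, INDEX_SIZE)
            try:
                f.seek(offset)        # blocking call, releases the GIL
                f.write(buf)          # blocking call, releases the GIL
                os.fsync(f.fileno())  # blocking call, releases the GIL
            finally:
                buf.close()
        finally:
            f.close()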

We need to check the RHEL python code; we still need to find where that
code lives.

The related vdsm code has not changed since 4.1.0. The same code
is used in ovirt-imageio-daemon and in vdsm's kvm2ovirt.

However, the chance that something else caused this hang exactly
when we started formatting the xlease volume is very low, so we
need to investigate this.

Raz, can you reproduce such hangs with NFS storage?

To reproduce, I think we need to use the API/SDK:
1. connect to NFS storage
2. block access to storage using iptables
3. create storage domain version 4 on the NFS server
4. unblock access to storage after a minute.

We expect the operation to succeed after unblocking
access to storage.

Repeat this many times, and check whether there are holes in the vdsm
log after this message:

    [storage.xlease] Formatting index for lockspace ...

Comment 11 Nir Soffer 2017-11-29 23:20:42 UTC
Raz, when you try to reproduce this, please set the "root" and "storage"
loggers' log level to DEBUG.

In DEBUG mode we have periodic logs from the monitoring system, and it is very
easy to see if vdsm is hung: if 10 seconds pass with no new log entry, vdsm
was hung.
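
A minimal sketch of such a check, assuming the vdsm.log timestamp format
shown in the excerpts above and the 10-second threshold suggested here:

    import datetime
    import re

    # Matches the leading timestamp of vdsm.log lines, e.g.
    # "2017-11-27 14:36:59,939+0300 INFO  (jsonrpc/3) ..."
    TIMESTAMP = re.compile(r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}),(\d{3})")

    def find_gaps(path, limit=10.0):
        # Print any place where consecutive log lines are more than `limit`
        # seconds apart - a likely sign that vdsm was hung in between.
        prev = None
        with open(path) as f:
            for line in f:
                m = TIMESTAMP.match(line)
                if not m:
                    continue
                ts = datetime.datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S")
                ts += datetime.timedelta(milliseconds=int(m.group(2)))
                if prev is not None and (ts - prev).total_seconds() > limit:
                    print("gap of %.1f seconds before: %s"
                          % ((ts - prev).total_seconds(), line.rstrip()))
                prev = ts

    find_gaps("/var/log/vdsm/vdsm.log")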

Comment 14 Marian Jankular 2017-11-30 09:37:46 UTC
The customer replied that the NFS share was prepared as stated in the documentation.

Comment 15 Nir Soffer 2017-11-30 16:54:33 UTC
Marian, can you also attach /var/log/messages? If the issue is indeed a blocked
python process, we should see a traceback about it in the system log.

Comment 16 Nir Soffer 2017-11-30 17:34:27 UTC
I reproduced the entire-process hang when using io.FileIO (what we use in vdsm
and ovirt-imageio for doing direct I/O).

Here is the log showing this issue:

# python fileio_nfs_test.py mnt/fileio.out dumbo.tlv.redhat.com
2017-11-30 18:41:49,159 - (MainThread) - pid=3436
2017-11-30 18:41:49,159 - (MainThread) - Opening mnt/fileio.out
2017-11-30 18:41:49,160 - (MainThread) - OK fd=3
2017-11-30 18:41:49,161 - (MainThread) - Starting canary thread
2017-11-30 18:41:49,161 - (Canary) - Blocking access to storage
2017-11-30 18:41:49,169 - (Canary) - If this test is hang, please run: iptables -D OUTPUT -p tcp -d dumbo.tlv.redhat.com --dport 2049 -j DROP
2017-11-30 18:41:49,169 - (Canary) - check 0
2017-11-30 18:41:49,169 - (MainThread) - Waiting until storage is blocked...
2017-11-30 18:41:50,170 - (Canary) - check 1
2017-11-30 18:41:51,170 - (Canary) - check 2
2017-11-30 18:41:52,171 - (Canary) - check 3
2017-11-30 18:41:53,171 - (Canary) - check 4
2017-11-30 18:41:54,172 - (Canary) - check 5
2017-11-30 18:41:55,172 - (Canary) - check 6
2017-11-30 18:41:56,172 - (Canary) - check 7
2017-11-30 18:41:57,173 - (Canary) - check 8
2017-11-30 18:41:58,173 - (Canary) - check 9
2017-11-30 18:41:59,174 - (MainThread) - Opening io.FileIO

Everything is hung now!

After some time I ran this from another shell:
 iptables -D OUTPUT -p tcp -d dumbo.tlv.redhat.com --dport 2049 -j DROP

And now the script is unblocked and finishes.

2017-11-30 18:45:29,683 - (MainThread) - OK
2017-11-30 18:45:29,684 - (MainThread) - Creating mmap
2017-11-30 18:45:29,684 - (Canary) - check 10
2017-11-30 18:45:29,684 - (MainThread) - OK
2017-11-30 18:45:29,685 - (MainThread) - Filling mmap
2017-11-30 18:45:29,685 - (MainThread) - OK
2017-11-30 18:45:29,685 - (MainThread) - Writing mmap to storage
2017-11-30 18:45:29,719 - (MainThread) - OK
2017-11-30 18:45:29,719 - (MainThread) - Syncing
2017-11-30 18:45:29,719 - (MainThread) - OK
2017-11-30 18:45:29,720 - (MainThread) - Done

We have a canary thread logging every second. Once we tried to open
the FileIO, the canary thread stopped - this is possible only if the io
extension module was holding the GIL during a blocking call.
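
The attached script (comment 17) is the full reproducer; its core pattern is
roughly the following (the mount path and server are placeholders here):

    import io
    import logging
    import os
    import threading
    import time

    logging.basicConfig(
        level=logging.INFO,
        format="%(asctime)s - (%(threadName)s) - %(message)s")
    log = logging.getLogger()

    def canary():
        # Logs once per second from another thread. If io.FileIO() releases
        # the GIL while blocked on NFS, these messages keep appearing; if it
        # holds the GIL, the whole process goes silent.
        n = 0
        while True:
            log.info("check %d", n)
            n += 1
            time.sleep(1)

    # Open a file on the NFS mount first, while the server is reachable.
    fd = os.open("<nfs-mount>/fileio.out", os.O_RDWR | os.O_CREAT | os.O_DIRECT)

    t = threading.Thread(name="Canary", target=canary)
    t.daemon = True
    t.start()

    # Block access to the NFS server now, e.g.:
    #   iptables -A OUTPUT -p tcp -d <nfs-server> --dport 2049 -j DROP
    time.sleep(10)

    log.info("Opening io.FileIO")
    f = io.FileIO(fd, "r+", closefd=True)   # the fstat() inside blocks while holding the GIL
    log.info("OK")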

And here is the backtrace of the hung process in the kernel:

# cat /proc/3436/stack
[<ffffffffc03228e4>] rpc_wait_bit_killable+0x24/0xb0 [sunrpc]
[<ffffffffc03240d4>] __rpc_execute+0x154/0x410 [sunrpc]
[<ffffffffc03257b8>] rpc_execute+0x68/0xb0 [sunrpc]
[<ffffffffc0315776>] rpc_run_task+0xf6/0x150 [sunrpc]
[<ffffffffc07eb8c3>] nfs4_call_sync_sequence+0x63/0xa0 [nfsv4]
[<ffffffffc07ec94c>] _nfs4_proc_getattr+0xcc/0xf0 [nfsv4]
[<ffffffffc07f88f2>] nfs4_proc_getattr+0x72/0xf0 [nfsv4]
[<ffffffffc07b543f>] __nfs_revalidate_inode+0xbf/0x310 [nfs]
[<ffffffffc07b5a55>] nfs_getattr+0x95/0x250 [nfs]
[<ffffffff81206146>] vfs_getattr+0x46/0x80
[<ffffffff812061c5>] vfs_fstat+0x45/0x80
[<ffffffff812068e4>] SYSC_newfstat+0x24/0x60
[<ffffffff81206ade>] SyS_newfstat+0xe/0x10
[<ffffffff816b5089>] system_call_fastpath+0x16/0x1b
[<ffffffffffffffff>] 0xffffffffffffffff

You cannot attach to the process with gdb, since it is in D state, but once
the process is unblocked, gdb takes control, and we see:

Thread 2 (Thread 0x7f97a2ea5700 (LWP 4799)):
#0  0x00007f97ab925a0b in do_futex_wait.constprop.1 () from /lib64/libpthread.so.0
#1  0x00007f97ab925a9f in __new_sem_wait_slow.constprop.0 () from /lib64/libpthread.so.0
#2  0x00007f97ab925b3b in sem_wait@@GLIBC_2.2.5 () from /lib64/libpthread.so.0
#3  0x00007f97abc455f5 in PyThread_acquire_lock () from /lib64/libpython2.7.so.1.0
#4  0x00007f97abc11156 in PyEval_RestoreThread () from /lib64/libpython2.7.so.1.0
#5  0x00007f97a44f9086 in time_sleep () from /usr/lib64/python2.7/lib-dynload/timemodule.so
#6  0x00007f97abc18bb0 in PyEval_EvalFrameEx () from /lib64/libpython2.7.so.1.0
#7  0x00007f97abc1aefd in PyEval_EvalCodeEx () from /lib64/libpython2.7.so.1.0
#8  0x00007f97abc183fc in PyEval_EvalFrameEx () from /lib64/libpython2.7.so.1.0
#9  0x00007f97abc1aefd in PyEval_EvalCodeEx () from /lib64/libpython2.7.so.1.0
#10 0x00007f97abc183fc in PyEval_EvalFrameEx () from /lib64/libpython2.7.so.1.0
#11 0x00007f97abc1aefd in PyEval_EvalCodeEx () from /lib64/libpython2.7.so.1.0
#12 0x00007f97abba494d in function_call () from /lib64/libpython2.7.so.1.0
#13 0x00007f97abb7f9a3 in PyObject_Call () from /lib64/libpython2.7.so.1.0
#14 0x00007f97abc135bd in PyEval_EvalFrameEx () from /lib64/libpython2.7.so.1.0
#15 0x00007f97abc1857d in PyEval_EvalFrameEx () from /lib64/libpython2.7.so.1.0
#16 0x00007f97abc1857d in PyEval_EvalFrameEx () from /lib64/libpython2.7.so.1.0
#17 0x00007f97abc1aefd in PyEval_EvalCodeEx () from /lib64/libpython2.7.so.1.0
#18 0x00007f97abba4858 in function_call () from /lib64/libpython2.7.so.1.0
#19 0x00007f97abb7f9a3 in PyObject_Call () from /lib64/libpython2.7.so.1.0
#20 0x00007f97abb8e995 in instancemethod_call () from /lib64/libpython2.7.so.1.0
#21 0x00007f97abb7f9a3 in PyObject_Call () from /lib64/libpython2.7.so.1.0
#22 0x00007f97abc117b7 in PyEval_CallObjectWithKeywords () from /lib64/libpython2.7.so.1.0
#23 0x00007f97abc496e2 in t_bootstrap () from /lib64/libpython2.7.so.1.0
#24 0x00007f97ab91fe25 in start_thread () from /lib64/libpthread.so.0
#25 0x00007f97aaf4434d in clone () from /lib64/libc.so.6

Thread 1 (Thread 0x7f97ac10d740 (LWP 4798)):
#0  0x00007f97aaf35154 in _fxstat () from /lib64/libc.so.6
#1  0x00007f97a470ad32 in fileio_init () from /usr/lib64/python2.7/lib-dynload/_io.so
#2  0x00007f97abbd565f in type_call () from /lib64/libpython2.7.so.1.0
#3  0x00007f97abb7f9a3 in PyObject_Call () from /lib64/libpython2.7.so.1.0
#4  0x00007f97abc140f6 in PyEval_EvalFrameEx () from /lib64/libpython2.7.so.1.0
#5  0x00007f97abc1aefd in PyEval_EvalCodeEx () from /lib64/libpython2.7.so.1.0
#6  0x00007f97abc1b002 in PyEval_EvalCode () from /lib64/libpython2.7.so.1.0
#7  0x00007f97abc3443f in run_mod () from /lib64/libpython2.7.so.1.0
#8  0x00007f97abc355fe in PyRun_FileExFlags () from /lib64/libpython2.7.so.1.0
#9  0x00007f97abc36889 in PyRun_SimpleFileExFlags () from /lib64/libpython2.7.so.1.0
#10 0x00007f97abc47a3f in Py_Main () from /lib64/libpython2.7.so.1.0
#11 0x00007f97aae6dc05 in __libc_start_main () from /lib64/libc.so.6
#12 0x000000000040071e in _start ()

Looking at python code - there are two helpers in fileio.c that call fstat
without releasing the GIL during the call:

- https://github.com/python/cpython/blob/2.7/Modules/_io/fileio.c#L145
- https://github.com/python/cpython/blob/2.7/Modules/_io/fileio.c#L161

And both helpers are called from fileio_init (the implementation of io.FileIO())

- https://github.com/python/cpython/blob/2.7/Modules/_io/fileio.c#L332
- https://github.com/python/cpython/blob/2.7/Modules/_io/fileio.c#L366

Basically this is a variant of bug 1496517, but this time in core python.

I posted these PRs for upstream python:

- https://github.com/python/cpython/pull/4651
- https://github.com/python/cpython/pull/4652

Next step is filing a python upstream bug and Fedora/RHEL bugs.

Comment 17 Nir Soffer 2017-11-30 17:35:26 UTC
Created attachment 1361069 [details]
Script reproducing this issue

Comment 18 Nir Soffer 2017-11-30 18:43:57 UTC
Upstream python bug: https://bugs.python.org/issue32186

Comment 21 Nir Soffer 2017-12-01 15:29:39 UTC
Marian, you sent something like the output of journalctl -b; this is not the log we need.

When I reproduced this locally, I saw this in /var/log/messages:

Nov 30 17:58:29 voodoo6 kernel: nfs: server dumbo.tlv.redhat.com not responding, still trying
Nov 30 17:58:51 voodoo6 kernel: INFO: task python:32449 blocked for more than 120 seconds.
Nov 30 17:58:51 voodoo6 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 30 17:58:51 voodoo6 kernel: python          D 0000000000000000     0 32449  21552 0x00000082
Nov 30 17:58:51 voodoo6 kernel: ffff880090b7bb50 0000000000000046 ffff88012fc03f40 ffff880090b7bfd8
Nov 30 17:58:51 voodoo6 kernel: ffff880090b7bfd8 ffff880090b7bfd8 ffff88012fc03f40 ffff880126f4cf90
Nov 30 17:58:51 voodoo6 kernel: ffff880139f40e08 ffff880090b7bb88 ffff880126f4d038 0000000000000000
Nov 30 17:58:51 voodoo6 kernel: Call Trace:
Nov 30 17:58:51 voodoo6 kernel: [<ffffffff816a9599>] schedule+0x29/0x70
Nov 30 17:58:51 voodoo6 kernel: [<ffffffff8121cb29>] inode_dio_wait+0xd9/0x100
Nov 30 17:58:51 voodoo6 kernel: [<ffffffff810b19e0>] ? wake_bit_function+0x40/0x40
Nov 30 17:58:51 voodoo6 kernel: [<ffffffffc0829d9b>] nfs4_file_fsync+0x3b/0x150 [nfsv4]
Nov 30 17:58:51 voodoo6 kernel: [<ffffffff81348d51>] ? __percpu_counter_add+0x51/0x70
Nov 30 17:58:51 voodoo6 kernel: [<ffffffff8123324b>] vfs_fsync+0x2b/0x40
Nov 30 17:58:51 voodoo6 kernel: [<ffffffffc0829fee>] nfs4_file_flush+0x5e/0x90 [nfsv4]
Nov 30 17:58:51 voodoo6 kernel: [<ffffffff811fe114>] filp_close+0x34/0x80
Nov 30 17:58:51 voodoo6 kernel: [<ffffffff8121fe88>] put_files_struct+0x88/0xe0
Nov 30 17:58:51 voodoo6 kernel: [<ffffffff8121ff89>] exit_files+0x49/0x50
Nov 30 17:58:51 voodoo6 kernel: [<ffffffff8108d9ef>] do_exit+0x2af/0xa40
Nov 30 17:58:51 voodoo6 kernel: [<ffffffff816aa181>] ? wait_for_completion_killable+0x151/0x1a0
Nov 30 17:58:51 voodoo6 kernel: [<ffffffff810c4820>] ? wake_up_state+0x20/0x20
Nov 30 17:58:51 voodoo6 kernel: [<ffffffff8108e1ff>] do_group_exit+0x3f/0xa0
Nov 30 17:58:51 voodoo6 kernel: [<ffffffff8109e32e>] get_signal_to_deliver+0x1ce/0x5e0
Nov 30 17:58:51 voodoo6 kernel: [<ffffffff8102a467>] do_signal+0x57/0x6c0
Nov 30 17:58:51 voodoo6 kernel: [<ffffffff812039d1>] ? __sb_end_write+0x31/0x60
Nov 30 17:58:51 voodoo6 kernel: [<ffffffff8102ab2f>] do_notify_resume+0x5f/0xb0
Nov 30 17:58:51 voodoo6 kernel: [<ffffffff816b533d>] int_signal+0x12/0x17

What we need is /var/log/messages, or the equivalent output of journalctl showing
the same messages.

Comment 22 Nir Soffer 2017-12-01 16:23:56 UTC
Marian, we also expect these messages if vdsm was hung while trying to access the
NFS server:

Dec  1 18:14:53 voodoo6 kernel: nfs: server dumbo.tlv.redhat.com not responding, still trying
Dec  1 18:15:43 voodoo6 kernel: nfs: server dumbo.tlv.redhat.com not responding, timed out
Dec  1 18:18:43 voodoo6 kernel: nfs: server dumbo.tlv.redhat.com not responding, timed out
Dec  1 18:18:48 voodoo6 kernel: nfs: server dumbo.tlv.redhat.com not responding, timed out
...
Dec  1 18:19:34 voodoo6 kernel: nfs: server dumbo.tlv.redhat.com OK

Comment 23 Nir Soffer 2017-12-01 16:36:35 UTC
The message about a blocked process in comment 21 is expected when:
1. the NFS server is not responding
2. vdsm is hung trying to open the xlease volume
3. vdsm is killed by systemd during the soft-fencing flow

Comment 24 Nir Soffer 2017-12-02 17:42:40 UTC
Marian, sorry, I missed attachment 1361489 [details], checking it.

Comment 25 Nir Soffer 2017-12-04 07:36:27 UTC
Looking at /var/log/messages from attachment 1361489 [details], we see:


0. Host boot (10:42:54,000+0300)

    Nov 27 13:42:54 nimbus02 kernel: Command line: BOOT_IMAGE=/rhvh-4.1-0.20171101.0+1/vmlinuz-3.10.0-693.5.2.el7.x86_64 root=/dev/rhvh_nimbus02/rhvh-4.1-0.20171101.0+1 ro crashkernel=auto rd.lvm.lv=rhvh_nimbus02/rhvh-4.1-0.20171101.0+1 rd.lvm.lv=rhvh_nimbus02/swap rhgb quiet LANG=en_US.UTF-8 img.bootid=rhvh-4.1-0.20171101.0+1


1. vdsm thread jsonrpc/3 starting the xlease formatting operation

    2017-11-27 14:36:59,939+0300 INFO  (jsonrpc/3) [storage.xlease] Formatting index for lockspace u'b129f735-3ca4-443d-b921-d875d214a79f' (version=1) (xlease:647)


2. Kernel report about a hung vdsm monitor thread.

   This thread is a block storage monitor thread:

        2017-11-27 14:35:39,658+0300 INFO  (monitor/7ddec48) [storage.LVM] Refreshing LVs (vg=7ddec480-92a0-405a-a549-f9065f8f81ef, lvs=['metadata', 'leases', 'ids', 'inbox', 'outbox', 'xleases', 'master']) (lvm:1319)

   I don't know how it can be stuck closing a file descriptor open on NFS.
   Maybe it inherited the fd from the parent thread?

    Nov 27 14:40:52 nimbus02 kernel: INFO: task monitor/7ddec48:171835 blocked for more than 120 seconds.
    Nov 27 14:40:52 nimbus02 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    Nov 27 14:40:52 nimbus02 kernel: monitor/7ddec48 D 0000000000000000     0 171835  78542 0x00000080
    Nov 27 14:40:52 nimbus02 kernel: ffff88434a257e20 0000000000000082 ffff8866ae7ddee0 ffff88434a257fd8
    Nov 27 14:40:52 nimbus02 kernel: ffff88434a257fd8 ffff88434a257fd8 ffff8866ae7ddee0 ffff88391c138e10
    Nov 27 14:40:52 nimbus02 kernel: ffff88407ff80a08 ffff88434a257e58 ffff88391c138eb8 0000000000000000
    Nov 27 14:40:52 nimbus02 kernel: Call Trace:
    Nov 27 14:40:52 nimbus02 kernel: [<ffffffff816a9589>] schedule+0x29/0x70
    Nov 27 14:40:52 nimbus02 kernel: [<ffffffff8121cca9>] inode_dio_wait+0xd9/0x100
    Nov 27 14:40:52 nimbus02 kernel: [<ffffffff810b19e0>] ? wake_bit_function+0x40/0x40
    Nov 27 14:40:52 nimbus02 kernel: [<ffffffffc0960d9b>] nfs4_file_fsync+0x3b/0x150 [nfsv4]
    Nov 27 14:40:52 nimbus02 kernel: [<ffffffff812333cb>] vfs_fsync+0x2b/0x40
    Nov 27 14:40:52 nimbus02 kernel: [<ffffffffc0960fee>] nfs4_file_flush+0x5e/0x90 [nfsv4]
    Nov 27 14:40:52 nimbus02 kernel: [<ffffffff811fe294>] filp_close+0x34/0x80
    Nov 27 14:40:52 nimbus02 kernel: [<ffffffff81220388>] __close_fd+0x78/0xa0
    Nov 27 14:40:52 nimbus02 kernel: [<ffffffff811ffd03>] SyS_close+0x23/0x50
    Nov 27 14:40:52 nimbus02 kernel: [<ffffffff816b5089>] system_call_fastpath+0x16/0x1b


3. soft-fencing attempt?

    Nov 27 14:41:32 nimbus02 systemd: Created slice User Slice of root.
    Nov 27 14:41:32 nimbus02 systemd: Starting User Slice of root.
    Nov 27 14:41:32 nimbus02 systemd-logind: New session 30 of user root.
    Nov 27 14:41:32 nimbus02 systemd: Started Session 30 of user root.
    Nov 27 14:41:32 nimbus02 systemd: Starting Session 30 of user root.
    Nov 27 14:41:36 nimbus02 systemd: Cannot add dependency job for unit lvm2-lvmetad.socket, ignoring: Unit is masked.
    Nov 27 14:41:36 nimbus02 systemd: Stopping MOM instance configured for VDSM purposes...
    Nov 27 14:41:46 nimbus02 systemd: mom-vdsm.service stop-sigterm timed out. Killing.
    Nov 27 14:41:46 nimbus02 systemd: mom-vdsm.service: main process exited, code=killed, status=9/KILL
    Nov 27 14:41:46 nimbus02 systemd: Unit mom-vdsm.service entered failed state.
    Nov 27 14:41:46 nimbus02 systemd: mom-vdsm.service failed.
    Nov 27 14:41:46 nimbus02 systemd: Stopping Virtual Desktop Server Manager...
    Nov 27 14:41:57 nimbus02 systemd: vdsmd.service stop-sigterm timed out. Killing.


4. Kernel warning about an NFS error - not sure what the meaning of this error is.
   The thread hitting this error is the same thread that is formatting the lease.
   The error is in write(), which does release the GIL in python.

    Nov 27 14:41:57 nimbus02 kernel: ------------[ cut here ]------------
    Nov 27 14:41:57 nimbus02 kernel: WARNING: CPU: 1 PID: 30694 at fs/nfs/pnfs_nfs.c:789 nfs4_pnfs_ds_connect+0x300/0x3c0 [nfsv4]
    Nov 27 14:41:57 nimbus02 kernel: Modules linked in: nfs_layout_nfsv41_files rpcsec_gss_krb5 nfsv4 dns_resolver vhost_net vhost macvtap macvlan ebt_arp ebtable_nat tun nfsv3 nfs fscache ebtable_filter ebtables ip6table_filter ip6_tables bnx2i libiscsi bnx2fc cnic uio fcoe libfcoe libfc xt_physdev br_netfilter ipt_REJECT nf_reject_ipv4 nf_conntrack_ipv4 nf_defrag_ipv4 xt_multiport xt_conntrack nf_conntrack iptable_filter vfat fat iTCO_wdt iTCO_vendor_support dcdbas sb_edac edac_core intel_powerclamp coretemp intel_rapl iosf_mbi kvm_intel kvm irqbypass mei_me lpc_ich mei pcspkr sg ipmi_si ipmi_devintf ipmi_msghandler shpchp acpi_power_meter tpm_crb bridge nfsd auth_rpcgss nfs_acl bonding lockd grace ip_tables ext4 mbcache jbd2 dm_thin_pool dm_persistent_data dm_bio_prison dm_bufio sr_mod cdrom dm_service_time sd_mod 8021q
    Nov 27 14:41:57 nimbus02 kernel: garp stp llc mrp mxm_wmi mgag200 i2c_algo_bit drm_kms_helper scsi_transport_iscsi syscopyarea sysfillrect sysimgblt fb_sys_fops ttm crc32_pclmul lpfc crc32c_intel drm ghash_clmulni_intel aesni_intel dm_multipath lrw gf128mul bnx2x glue_helper ablk_helper cryptd ahci libahci libata crc_t10dif crct10dif_generic crct10dif_pclmul i2c_core megaraid_sas scsi_transport_fc ptp scsi_tgt pps_core mdio crct10dif_common libcrc32c wmi sunrpc dm_mirror dm_region_hash dm_log dm_mod
    Nov 27 14:41:57 nimbus02 kernel: CPU: 1 PID: 30694 Comm: jsonrpc/3 Not tainted 3.10.0-693.5.2.el7.x86_64 #1
    Nov 27 14:41:57 nimbus02 kernel: Hardware name: Dell Inc. PowerEdge R730/0WCJNT, BIOS 2.4.3 01/17/2017
    Nov 27 14:41:57 nimbus02 kernel: 0000000000000000 00000000eed296d3 ffff887f5b9a3960 ffffffff816a3e51
    Nov 27 14:41:57 nimbus02 kernel: ffff887f5b9a39a0 ffffffff810879d8 00000315c7659e78 ffff8859c7659e78
    Nov 27 14:41:57 nimbus02 kernel: ffff8844ebffd380 0000000000000000 ffff8859c7659e40 ffff885b77bf8000
    Nov 27 14:41:57 nimbus02 kernel: Call Trace:
    Nov 27 14:41:57 nimbus02 kernel: [<ffffffff816a3e51>] dump_stack+0x19/0x1b
    Nov 27 14:41:57 nimbus02 kernel: [<ffffffff810879d8>] __warn+0xd8/0x100
    Nov 27 14:41:57 nimbus02 kernel: [<ffffffff81087b1d>] warn_slowpath_null+0x1d/0x20
    Nov 27 14:41:57 nimbus02 kernel: [<ffffffffc0977220>] nfs4_pnfs_ds_connect+0x300/0x3c0 [nfsv4]
    Nov 27 14:41:57 nimbus02 kernel: [<ffffffff811df623>] ? kmem_cache_alloc+0x193/0x1e0
    Nov 27 14:41:57 nimbus02 kernel: [<ffffffffc08e4424>] nfs4_fl_prepare_ds+0xa4/0xcc [nfs_layout_nfsv41_files]
    Nov 27 14:41:57 nimbus02 kernel: [<ffffffffc08e2d2c>] filelayout_write_pagelist+0x4c/0x180 [nfs_layout_nfsv41_files]
    Nov 27 14:41:57 nimbus02 kernel: [<ffffffffc097233c>] pnfs_generic_pg_writepages+0xac/0x1b0 [nfsv4]
    Nov 27 14:41:57 nimbus02 kernel: [<ffffffffc08799b7>] nfs_pageio_doio+0x27/0x60 [nfs]
    Nov 27 14:41:57 nimbus02 kernel: [<ffffffffc087a467>] __nfs_pageio_add_request+0xb7/0x440 [nfs]
    Nov 27 14:41:57 nimbus02 kernel: [<ffffffffc086fd71>] ? nfs_get_lock_context+0x51/0x100 [nfs]
    Nov 27 14:41:57 nimbus02 kernel: [<ffffffffc087aea2>] nfs_pageio_add_request+0xc2/0x2a0 [nfs]
    Nov 27 14:41:57 nimbus02 kernel: [<ffffffffc0877e65>] nfs_direct_write_schedule_segment+0x1f5/0x380 [nfs]
    Nov 27 14:41:57 nimbus02 kernel: [<ffffffffc0878a65>] nfs_direct_write_schedule_iovec+0x105/0x240 [nfs]
    Nov 27 14:41:57 nimbus02 kernel: [<ffffffffc0879218>] nfs_file_direct_write+0x298/0x470 [nfs]
    Nov 27 14:41:57 nimbus02 kernel: [<ffffffffc086e869>] nfs_file_write+0x139/0x1e0 [nfs]
    Nov 27 14:41:57 nimbus02 kernel: [<ffffffff812001ed>] do_sync_write+0x8d/0xd0
    Nov 27 14:41:57 nimbus02 kernel: [<ffffffff81200cad>] vfs_write+0xbd/0x1e0
    Nov 27 14:41:57 nimbus02 kernel: [<ffffffff81201abf>] SyS_write+0x7f/0xe0
    Nov 27 14:41:57 nimbus02 kernel: [<ffffffff816b5089>] system_call_fastpath+0x16/0x1b
    Nov 27 14:41:57 nimbus02 kernel: ---[ end trace e3f627837d37deda ]---


6. soft-fence attempt completed, vdsm still running (D state?)

    Nov 27 14:42:07 nimbus02 systemd: vdsmd.service still around after SIGKILL. Ignoring.
    Nov 27 14:42:38 nimbus02 systemd: vdsmd.service failed.


7. Vdsm start attempt failed because vdsm could not be killed in 6.

    Nov 27 14:42:41 nimbus02 systemd: Started Virtual Desktop Server Manager.
    Nov 27 14:42:42 nimbus02 journal: vdsm vds ERROR Exception ... #012error: [Errno 98] Address already in use


8. Another warning about vdsm stuck monitor thread

    Nov 27 14:42:52 nimbus02 kernel: INFO: task monitor/7ddec48:171835 blocked for more than 120 seconds.
    Nov 27 14:42:52 nimbus02 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    Nov 27 14:42:52 nimbus02 kernel: monitor/7ddec48 D 0000000000000000     0 171835 171829 0x00000084
    Nov 27 14:42:52 nimbus02 kernel: ffff88434a257e20 0000000000000082 ffff8866ae7ddee0 ffff88434a257fd8
    Nov 27 14:42:52 nimbus02 kernel: ffff88434a257fd8 ffff88434a257fd8 ffff8866ae7ddee0 ffff88391c138e10
    Nov 27 14:42:52 nimbus02 kernel: ffff88407ff80a08 ffff88434a257e58 ffff88391c138eb8 0000000000000000
    Nov 27 14:42:52 nimbus02 kernel: Call Trace:
    Nov 27 14:42:52 nimbus02 kernel: [<ffffffff816a9589>] schedule+0x29/0x70
    Nov 27 14:42:52 nimbus02 kernel: [<ffffffff8121cca9>] inode_dio_wait+0xd9/0x100
    Nov 27 14:42:52 nimbus02 kernel: [<ffffffff810b19e0>] ? wake_bit_function+0x40/0x40
    Nov 27 14:42:52 nimbus02 kernel: [<ffffffffc0960d9b>] nfs4_file_fsync+0x3b/0x150 [nfsv4]
    Nov 27 14:42:52 nimbus02 kernel: [<ffffffff812333cb>] vfs_fsync+0x2b/0x40
    Nov 27 14:42:52 nimbus02 kernel: [<ffffffffc0960fee>] nfs4_file_flush+0x5e/0x90 [nfsv4]
    Nov 27 14:42:52 nimbus02 kernel: [<ffffffff811fe294>] filp_close+0x34/0x80
    Nov 27 14:42:52 nimbus02 kernel: [<ffffffff81220388>] __close_fd+0x78/0xa0
    Nov 27 14:42:52 nimbus02 kernel: [<ffffffff811ffd03>] SyS_close+0x23/0x50
    Nov 27 14:42:52 nimbus02 kernel: [<ffffffff816b5089>] system_call_fastpath+0x16/0x1b


9. Another warning about jsonrpc/3 stuck when trying to close a file descriptor
   when shutting down vdsm.

    Nov 27 14:44:52 nimbus02 kernel: INFO: task jsonrpc/3:171829 blocked for more than 120 seconds.
    Nov 27 14:44:52 nimbus02 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    Nov 27 14:44:52 nimbus02 kernel: jsonrpc/3       D 0000000000000000     0 171829      1 0x00000086
    Nov 27 14:44:52 nimbus02 kernel: ffff887f59757b50 0000000000000046 ffff8866ae7daf70 ffff887f59757fd8
    Nov 27 14:44:52 nimbus02 kernel: ffff887f59757fd8 ffff887f59757fd8 ffff8866ae7daf70 ffff88391c138e10
    Nov 27 14:44:52 nimbus02 kernel: ffff88407ff80a08 ffff887f59757b88 ffff88391c138eb8 0000000000000000
    Nov 27 14:44:52 nimbus02 kernel: Call Trace:
    Nov 27 14:44:52 nimbus02 kernel: [<ffffffff816a9589>] schedule+0x29/0x70
    Nov 27 14:44:52 nimbus02 kernel: [<ffffffff8121cca9>] inode_dio_wait+0xd9/0x100
    Nov 27 14:44:52 nimbus02 kernel: [<ffffffff810b19e0>] ? wake_bit_function+0x40/0x40
    Nov 27 14:44:52 nimbus02 kernel: [<ffffffffc0960d9b>] nfs4_file_fsync+0x3b/0x150 [nfsv4]
    Nov 27 14:44:52 nimbus02 kernel: [<ffffffff81348ed1>] ? __percpu_counter_add+0x51/0x70
    Nov 27 14:44:52 nimbus02 kernel: [<ffffffff812333cb>] vfs_fsync+0x2b/0x40
    Nov 27 14:44:52 nimbus02 kernel: [<ffffffffc0960fee>] nfs4_file_flush+0x5e/0x90 [nfsv4]
    Nov 27 14:44:52 nimbus02 kernel: [<ffffffff811fe294>] filp_close+0x34/0x80
    Nov 27 14:44:52 nimbus02 kernel: [<ffffffff81220008>] put_files_struct+0x88/0xe0
    Nov 27 14:44:52 nimbus02 kernel: [<ffffffff81220109>] exit_files+0x49/0x50
    Nov 27 14:44:52 nimbus02 kernel: [<ffffffff8108d9ef>] do_exit+0x2af/0xa40
    Nov 27 14:44:52 nimbus02 kernel: [<ffffffff81327ae6>] ? plist_del+0x46/0x70
    Nov 27 14:44:52 nimbus02 kernel: [<ffffffff810f4a0c>] ? __unqueue_futex+0x2c/0x60
    Nov 27 14:44:52 nimbus02 kernel: [<ffffffff810f5d9f>] ? futex_wait+0x11f/0x280
    Nov 27 14:44:52 nimbus02 kernel: [<ffffffff811adfed>] ? wp_page_copy.isra.58+0x2dd/0x470
    Nov 27 14:44:52 nimbus02 kernel: [<ffffffff8108e1ff>] do_group_exit+0x3f/0xa0
    Nov 27 14:44:52 nimbus02 kernel: [<ffffffff8109e32e>] get_signal_to_deliver+0x1ce/0x5e0
    Nov 27 14:44:52 nimbus02 kernel: [<ffffffff8102a467>] do_signal+0x57/0x6c0
    Nov 27 14:44:52 nimbus02 kernel: [<ffffffff816b0091>] ? __do_page_fault+0x171/0x450
    Nov 27 14:44:52 nimbus02 kernel: [<ffffffff8102ab2f>] do_notify_resume+0x5f/0xb0
    Nov 27 14:44:52 nimbus02 kernel: [<ffffffff816b533d>] int_signal+0x12/0x17


10. Many failed attempts to start vdsm, since vdsm is stuck on NFS in the kernel


11. Someone logs in as root and reboots the host

    Nov 27 16:53:36 nimbus02 systemd-logind: New session 51 of user root.
    Nov 27 16:53:36 nimbus02 systemd: Started Session 51 of user root.
    Nov 27 16:53:36 nimbus02 systemd: Starting Session 51 of user root.
    Nov 27 16:53:40 nimbus02 systemd: Stopping Virtual Machine qemu-22-lap01mwauto.
    Nov 27 16:53:40 nimbus02 systemd: Stopping Virtual Machine qemu-20-lat02openvas.int.teb.com.tr.

12. Boot (16:59:12,000+0300)

    Nov 27 19:59:12 nimbus02 kernel: Command line: BOOT_IMAGE=/rhvh-4.1-0.20171101.0+1/vmlinuz-3.10.0-693.5.2.el7.x86_64 root=/dev/rhvh_nimbus02/rhvh-4.1-0.20171101.0+1 ro crashkernel=auto rd.lvm.lv=rhvh_nimbus02/rhvh-4.1-0.20171101.0+1 rd.lvm.lv=rhvh_nimbus02/swap rhgb quiet LANG=en_US.UTF-8 img.bootid=rhvh-4.1-0.20171101.0+1

13. Vdsm successful restart

    2017-11-27 17:00:25,310+0300 INFO  (MainThread) [vds] (PID: 29066) I am the actual vdsm 4.19.36-1.el7ev nimbus02.int.teb.com.tr (3.10.0-693.5.2.el7.x86_64) (vdsm:145)


So we have several issues:

1. Vdsm hung for hours accessing the NFS server, until the host was rebooted.

2. Vdsm block storage domain thread blocked on NFS, trying to close a file
   descriptor. I suspect that this is just a side effect of doing I/O to the NFS
   server in Vdsm, since monitor threads never do any I/O to NFS directly, and
   block storage domains never access NFS, even indirectly.

3. systemd ignores a kill failure when stopping a service and tries to start it
   again - needs a systemd bug.

4. Kernel issue in NFS, which may be the reason for 1 - may need a kernel bug.

Issue 1 is caused by python bug 1520068. If only the thread accessing the
non-responsive NFS server was blocked, vdsm could continue to function normally.

However, even if only one thread were blocked, if there was a need to restart
vdsm because of another issue, restarting vdsm would not be possible, since you
cannot kill a process with one thread in D state.
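
A quick way to confirm this state is to list the process's threads that are
in D state from /proc; here is a small sketch (a diagnostic aid, not part of
vdsm):

    import glob

    def d_state_threads(pid):
        # Return thread ids of `pid` that are in uninterruptible sleep (D),
        # read from the state field of /proc/<pid>/task/<tid>/stat.
        blocked = []
        for path in glob.glob("/proc/%d/task/*/stat" % pid):
            with open(path) as f:
                # Format: "tid (comm) state ..."; split after the last ")"
                # so a comm containing spaces does not break the parsing.
                state = f.read().rsplit(")", 1)[1].split()[0]
            if state == "D":
                blocked.append(path.split("/")[4])
        return blocked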

We avoid these issues in vdsm by performing the I/O in a child process, but this
was not yet implemented for xlease volume operations. So regardless of the
python bug, we need to do the xlease I/O outside of vdsm. This will probably also
eliminate issue 2.
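
The general approach is sketched below: a child process performs the blocking
write, so that only the child can end up stuck in D state. This is only an
illustration of the idea (using dd), not the InterruptibleDirectFile code that
was eventually merged:

    import subprocess

    def write_block(path, data, offset=0):
        # Write `data` at `offset` with O_DIRECT using dd in a child process.
        # If the server stops responding, only the dd child ends up in D
        # state; vdsm itself stays killable and restartable. Offset and data
        # length must be block-aligned; the flags shown here are illustrative.
        cmd = ["dd", "of=" + path, "bs=%d" % len(data), "count=1",
               "seek=%d" % offset, "oflag=direct,seek_bytes",
               "iflag=fullblock", "conv=notrunc,nocreat,fsync"]
        p = subprocess.Popen(cmd, stdin=subprocess.PIPE,
                             stderr=subprocess.PIPE)
        _, err = p.communicate(data)
        if p.returncode != 0:
            raise RuntimeError("dd write to %s failed: %r" % (path, err))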

Comment 26 Allon Mureinik 2017-12-06 11:55:10 UTC
Nir, any reason not to backport these fixes to 4.1.z once they're ready?

Comment 27 Nir Soffer 2017-12-06 12:08:29 UTC
(In reply to Allon Mureinik from comment #26)
> Nir, any reason not to backport these fixes to 4.1.z once they're ready?

I think we can and should backport it.

Comment 28 Nir Soffer 2017-12-10 14:36:57 UTC
Removed unrelated patches with wrong Bug-Url link.

Comment 30 Allon Mureinik 2017-12-20 16:42:18 UTC
Bug 1520068 fixes the underlying issue in python.
Regardless, we can solve this BZ by changing our code not to directly perform I/O in python, which is just a bad idea.

Resetting target to 4.2.1.
Once 1520068 is ready we'll require it, but we definitely won't wait for it.

Comment 31 Nir Soffer 2017-12-20 17:36:34 UTC
Removing the patch requiring the fixed python; we will have another bug to track this.

Current storage code does not access NFS directly, so we are not affected now by
the python bug.

Comment 32 Kevin Alon Goldblatt 2018-01-28 19:20:46 UTC
Verified with the following code:
---------------------------------------
ovirt-engine-4.2.1.3-0.1.el7.noarch
vdsm-4.20.17-11.gite2d6775.el7.centos.x86_64


Verified with the following scenario:
---------------------------------------
Steps to Reproduce:
1. create nfs share, do not set 36:36 as owners
2. try to add it to rhev


Actual results:
A warning about the permissions, and the SPM continues working


Moving to VERIFIED!

Comment 37 errata-xmlrpc 2018-05-15 17:52:46 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:1489

Comment 38 Franta Kust 2019-05-16 13:03:00 UTC
BZ<2>Jira Resync

Comment 39 Daniel Gur 2019-08-28 13:11:29 UTC
sync2jira

Comment 40 Daniel Gur 2019-08-28 13:15:40 UTC
sync2jira

