Bug 1443654 - VDSM: too many tasks error (after network outage)
Summary: VDSM: too many tasks error (after network outage)
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: vdsm
Classification: oVirt
Component: Core
Version: ---
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: urgent
Target Milestone: ovirt-4.1.3
Target Release: 4.19.16
Assignee: Francesco Romani
QA Contact: Michael Burman
URL:
Whiteboard:
Duplicates: 1475971 (view as bug list)
Depends On:
Blocks: 1470526
 
Reported: 2017-04-19 15:59 UTC by gshinar
Modified: 2020-09-10 10:29 UTC
CC: 17 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1470526 (view as bug list)
Environment:
Last Closed: 2017-07-06 13:40:41 UTC
oVirt Team: Virt
Embargoed:
rule-engine: ovirt-4.1+


Attachments
vdsm log (532.89 KB, application/x-xz), 2017-04-20 06:40 UTC, gshinar
new vdsm log - testing fix (268.27 KB, application/x-xz), 2017-06-06 08:45 UTC, Michael Burman
vdsm in debug (680.60 KB, application/x-gzip), 2017-06-06 14:31 UTC, Michael Burman


Links
System ID Branch Status Summary Last Updated
oVirt gerrit 75767 master MERGED vm: blockIoTune: add and use cache 2021-01-01 17:02:33 UTC
oVirt gerrit 75768 master ABANDONED vm: blockIoTune: guard potentially blocking calls 2021-01-01 17:02:33 UTC
oVirt gerrit 75944 master MERGED vm: use response.success() in setIoTune 2021-01-01 17:02:33 UTC
oVirt gerrit 76493 master MERGED vm: iotune: simplify the interleaved get/set test 2021-01-01 17:02:33 UTC
oVirt gerrit 76631 ovirt-4.1 MERGED vm: blockIoTune: add and use cache 2021-01-01 17:02:30 UTC
oVirt gerrit 76632 ovirt-4.1 MERGED vm: iotune: simplify the interleaved get/set test 2021-01-01 17:02:30 UTC

Description gshinar 2017-04-19 15:59:25 UTC
Description of problem:
It all began with the networking going down badly:
Apr 19 08:36:29 cinteg05 kernel: igb 0000:01:00.0 eno1: igb: eno1 NIC Link is Down
Apr 19 08:36:29 cinteg05 kernel: igb 0000:01:00.0 eno1: speed changed to 0 for port eno1
Apr 19 08:36:29 cinteg05 kernel: bond0: link status definitely down for interface eno1, disabling it
Apr 19 08:36:29 cinteg05 kernel: bond0: now running without any active interface!
Apr 19 08:36:30 cinteg05 kernel: rhevm: port 1(bond0) entered disabled state


Version-Release number of selected component (if applicable):
4.19.2-2


How reproducible:
It is currently occurring on our integration engine


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 Yaniv Kaul 2017-04-19 17:07:15 UTC
The bug description is incorrect. The network may go down - I don't know why it did; check with IT. The issue is the "too many tasks" error. Please provide the VDSM log from that time.

Comment 2 gshinar 2017-04-20 06:40:22 UTC
Created attachment 1272867 [details]
vdsm log

Comment 3 gshinar 2017-04-20 06:42:11 UTC
When creating the bug, I attached the same log I have just attached now. No idea why it was missing.

Comment 4 Francesco Romani 2017-04-21 13:03:25 UTC
(In reply to gshinar from comment #0)
> Description of problem:
> It all began with the networking going down badly:
> Apr 19 08:36:29 cinteg05 kernel: igb 0000:01:00.0 eno1: igb: eno1 NIC Link
> is Down
> Apr 19 08:36:29 cinteg05 kernel: igb 0000:01:00.0 eno1: speed changed to 0
> for port eno1
> Apr 19 08:36:29 cinteg05 kernel: bond0: link status definitely down for
> interface eno1, disabling it
> Apr 19 08:36:29 cinteg05 kernel: bond0: now running without any active
> interface!
> Apr 19 08:36:30 cinteg05 kernel: rhevm: port 1(bond0) entered disabled state
> 
> 
> Version-Release number of selected component (if applicable):
> 4.19.2-2
> 
> 
> How reproducible:
> It is currently occurring on our integration engine

So, to reproduce myself:
1. start a few VMs (which storage domain are you using?)
2. let them run for a while
3. unplug the network cable for a while (for how long?)
4. reconnect the network
Anything else?

Does it recover once you reconnect or does it stay like this forever?

The issue here is that our flooding prevention failed. We have code that tries to assess the health/responsiveness of libvirt and *does not* submit more tasks if some disruption is detected, exactly to prevent TooManyTasks.

This failed, so we kept submitting tasks that could not be executed because libvirtd was stuck. Eventually the task queue became full, hence the TooManyTasks.

In turn, TooManyTasks is not deadly - "just" a degraded state, if the system recovers afterwards.
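
For illustration, a minimal sketch of this flooding-prevention idea (hypothetical names, not VDSM's actual code): each periodic submission consults a responsiveness flag first, so a stuck libvirtd cannot fill the executor queue with doomed work.

import threading
import time


class DomainHealth(object):
    """Track whether libvirt answered a probe call recently (illustrative)."""

    def __init__(self, timeout=30.0):
        self._timeout = timeout
        self._last_ok = time.time()
        self._lock = threading.Lock()

    def mark_responsive(self):
        # Called whenever a libvirt probe call returns in time.
        with self._lock:
            self._last_ok = time.time()

    def is_responsive(self):
        with self._lock:
            return (time.time() - self._last_ok) < self._timeout


def submit_periodic_op(executor, health, operation):
    # The gate: if libvirt looks stuck, skip this round instead of
    # queueing work that cannot run, preventing TooManyTasks.
    if not health.is_responsive():
        return False
    executor.dispatch(operation)
    return True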

Comment 5 Francesco Romani 2017-04-21 13:50:51 UTC
Two issues here, because two executor pools are depleted.

1. the periodic pool is exhausted, but the code is handling it:

2017-04-19 09:00:49,162 WARN  (vdsm.Scheduler) [virt.periodic.Operation] could not run <VmDispatcher operation=<class 'vdsm.virt.periodic.DriveWatermarkMonitor'> at 0x350ee10>, executor queue full (periodic:218)
2017-04-19 09:00:51,162 WARN  (vdsm.Scheduler) [virt.periodic.Operation] could not run <VmDispatcher operation=<class 'vdsm.virt.periodic.DriveWatermarkMonitor'> at 0x350ee10>, executor queue full (periodic:218)

We need to learn what caused it to run out; the prime candidate is "isDomainReadyForCommands", whose calls could pile up in known extreme circumstances, such as when the libvirt executors run out.

So, what could make the libvirt worker pool go out of service?

2. the "TooManyTasks" actually comes from the jsonrpc executor, which has much weaker protection than the periodic pool.

We see

2017-04-19 09:00:36,917 ERROR (jsonrpc/1) [virt.vm] (vmId='6803c129-6cad-4160-988e-be2acf331a6a') getVmIoTune failed (vm:2773)
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 2758, in getIoTuneResponse
    libvirt.VIR_DOMAIN_AFFECT_LIVE)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 77, in f
    raise toe
TimeoutError: Timed out during operation: cannot acquire state change lock (held by remoteDispatchDomainGetBlockIoTune)

Unfortunately, virDomainGetBlockIoTune ultimately needs to enter the QEMU monitor, so this call could be our culprit.
One possible sequence of events:

1. MOM queries periodically for the BlockIoTune values
2. the JSONRPC subsystem performs those risky calls, unprotected
3. libvirtd get stuck
4. the GetBlockIoTune calls start to get stuck in libvirtd
5. the libvirtd worker pool is depleted
6. the periodic code tries to assess libvirt's health, but those calls... need to access libvirtd, which has no workers available, so they get stuck as well.

Root cause: unprotected access from the jsonrpc pool

One possible simple fix is to add a "busy" flag to each libvirt domain inside Vdsm.
The last time we ran benchmarks, this completely destroyed the performance of the periodic operations (the steady-state load was too high just for monitoring VMs).

This is the main reason why we used the "health assessment" protocol. Unfortunately, this protocol requires the cooperation of all the parties, and that seems to be the cause of this bug.
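
The merged fix (gerrit 75767, "vm: blockIoTune: add and use cache") takes the caching route instead. A rough sketch of that idea, with hypothetical names rather than the actual patch: serve the frequent MOM polls from a cache and invalidate it on writes, so steady-state monitoring does not have to enter the QEMU monitor.

import threading


class IoTuneCache(object):
    """Illustrative cache for block I/O tune values (not the real patch)."""

    def __init__(self):
        self._values = None
        self._lock = threading.Lock()

    def get(self, fetch):
        # Serve cached values when present; only fall back to the
        # blocking fetch (which may enter the QEMU monitor) on a miss.
        with self._lock:
            if self._values is not None:
                return self._values
        values = fetch()  # blocking libvirt call, done outside the lock
        with self._lock:
            self._values = values
        return values

    def invalidate(self):
        # Called after a successful setIoTune, so stale values are
        # never served back.
        with self._lock:
            self._values = None

In this scheme MOM's periodic getIoTune queries normally hit the cache, and only a setIoTune forces the next query back into libvirt.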

Comment 6 gshinar 2017-04-23 12:09:41 UTC
We are using NetApp. The domain name is vserver-ci.
It was a network outage. No idea how long it was, but probably not less than an hour.

Comment 7 Tomas Jelinek 2017-04-24 07:14:30 UTC
Could you please also answer the questions from comment #4?

Comment 8 gshinar 2017-04-24 08:39:01 UTC
It stayed like this forever. Only a host and engine restart recovered the issue.

Comment 9 Meni Yakove 2017-06-01 12:55:27 UTC
Please add 'Steps to Reproduce'.

Comment 10 gshinar 2017-06-04 07:54:15 UTC
I didn't actually reproduce it back then, so I can't be sure how to reproduce it now.
You can try imitating a network outage for a few hours and then, after bringing the network back, checking whether all host services came up properly.

Comment 11 Michael Burman 2017-06-04 12:59:33 UTC
I don't think this should be ON_QA.

I still see this:

2017-06-04 15:54:10,694+0300 ERROR (JsonRpcServer) [jsonrpc.JsonRpcServer] could not allocate request thread (__init__:655)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 652, in _runRequest
    JsonRpcTask(self._serveRequest, ctx, request)
  File "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 154, in dispatch
    self._tasks.put(Task(callable, timeout, discard))
  File "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 408, in put
    raise TooManyTasks()
TooManyTasks
2017-06-04 15:54:10,695+0300 ERROR (JsonRpcServer) [jsonrpc.JsonRpcServer] could not allocate request thread (__init__:655)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 652, in _runRequest
    JsonRpcTask(self._serveRequest, ctx, request)
  File "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 154, in dispatch
    self._tasks.put(Task(callable, timeout, discard))
  File "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 408, in put
    raise TooManyTasks()
TooManyTasks
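
For reference, the TooManyTasks in these tracebacks is raised by a bounded dispatch queue that rejects new work once full instead of blocking the caller. A minimal sketch of that behavior (illustrative, not the actual vdsm/executor.py):

import Queue  # Python 2 stdlib, matching the traceback above


class TooManyTasks(Exception):
    pass


class BoundedTaskQueue(object):
    def __init__(self, max_tasks=500):
        self._queue = Queue.Queue(maxsize=max_tasks)

    def put(self, task):
        try:
            # Fail fast rather than block: when all workers are stuck,
            # the queue fills up and new requests are rejected.
            self._queue.put_nowait(task)
        except Queue.Full:
            raise TooManyTasks()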


systemctl status vdsmd
● vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2017-06-02 14:14:48 IDT; 2 days ago
  Process: 4858 ExecStopPost=/usr/libexec/vdsm/vdsmd_init_common.sh --post-stop (code=exited, status=0/SUCCESS)
  Process: 4912 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start (code=exited, status=0/SUCCESS)
 Main PID: 4979 (vdsm)
   CGroup: /system.slice/vdsmd.service
           ├─4979 /usr/bin/python2 /usr/share/vdsm/vdsm
           ├─8906 /usr/libexec/ioprocess --read-pipe-fd 46 --write-pipe-fd 45 --max-threads 10 --max-queued-requests 10
           ├─8922 /usr/libexec/ioprocess --read-pipe-fd 88 --write-pipe-fd 87 --max-threads 10 --max-queued-requests 10
           ├─8937 /usr/libexec/ioprocess --read-pipe-fd 96 --write-pipe-fd 95 --max-threads 10 --max-queued-requests 10
           ├─8951 /usr/libexec/ioprocess --read-pipe-fd 104 --write-pipe-fd 103 --max-threads 10 --max-queued-requests 10
           ├─9280 /usr/libexec/ioprocess --read-pipe-fd 56 --write-pipe-fd 55 --max-threads 10 --max-queued-requests 10
           ├─9288 /usr/libexec/ioprocess --read-pipe-fd 63 --write-pipe-fd 61 --max-threads 10 --max-queued-requests 10
           ├─9295 /usr/libexec/ioprocess --read-pipe-fd 71 --write-pipe-fd 70 --max-threads 10 --max-queued-requests 10
           └─9302 /usr/libexec/ioprocess --read-pipe-fd 79 --write-pipe-fd 78 --max-threads 10 --max-queued-requests 10

Jun 04 15:54:10 orchid-vds2.qa.lab.tlv.redhat.com vdsm[4979]: vdsm jsonrpc.JsonRpcServer ERROR could not allocate request thread
                                                              Traceback (most recent call last):
                                                                File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 652, in _runRequest...
Jun 04 15:54:10 orchid-vds2.qa.lab.tlv.redhat.com vdsm[4979]: vdsm jsonrpc.JsonRpcServer ERROR could not allocate request thread
                                                              Traceback (most recent call last):
                                                                File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 652, in _runRequest...
Jun 04 15:54:10 orchid-vds2.qa.lab.tlv.redhat.com vdsm[4979]: vdsm jsonrpc.JsonRpcServer ERROR could not allocate request thread
                                                              Traceback (most recent call last):
                                                                File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 652, in _runRequest...
Jun 04 15:54:10 orchid-vds2.qa.lab.tlv.redhat.com vdsm[4979]: vdsm jsonrpc.JsonRpcServer ERROR could not allocate request thread
                                                              Traceback (most recent call last):
                                                                File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 652, in _runRequest...
Jun 04 15:54:10 orchid-vds2.qa.lab.tlv.redhat.com vdsm[4979]: vdsm jsonrpc.JsonRpcServer ERROR could not allocate request thread
                                                              Traceback (most recent call last):
                                                                File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 652, in _runRequest...
Jun 04 15:54:10 orchid-vds2.qa.lab.tlv.redhat.com vdsm[4979]: vdsm jsonrpc.JsonRpcServer ERROR could not allocate request thread
                                                              Traceback (most recent call last):
                                                                File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 652, in _runRequest...
Jun 04 15:54:10 orchid-vds2.qa.lab.tlv.redhat.com vdsm[4979]: vdsm jsonrpc.JsonRpcServer ERROR could not allocate request thread
                                                              Traceback (most recent call last):
                                                                File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 652, in _runRequest...
Jun 04 15:54:10 orchid-vds2.qa.lab.tlv.redhat.com vdsm[4979]: vdsm jsonrpc.JsonRpcServer ERROR could not allocate request thread
                                                              Traceback (most recent call last):
                                                                File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 652, in _runRequest...
Jun 04 15:54:10 orchid-vds2.qa.lab.tlv.redhat.com vdsm[4979]: vdsm jsonrpc.JsonRpcServer ERROR could not allocate request thread
                                                              Traceback (most recent call last):
                                                                File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 652, in _runRequest...
Jun 04 15:54:10 orchid-vds2.qa.lab.tlv.redhat.com vdsm[4979]: vdsm jsonrpc.JsonRpcServer ERROR could not allocate request thread
                                                              Traceback (most recent call last):
                                                                File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 652, in _runRequest...

vdsm-4.19.17-1.el7ev.x86_64

Comment 12 Francesco Romani 2017-06-06 08:36:27 UTC
(In reply to Michael Burman from comment #11)
> I don't think this should be ON_QA.
> 
> I still see this:
> 
> 2017-06-04 15:54:10,694+0300 ERROR (JsonRpcServer) [jsonrpc.JsonRpcServer]
> could not allocate request thread (__init__:655)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 652,
> in _runRequest
>     JsonRpcTask(self._serveRequest, ctx, request)
>   File "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 154, in
> dispatch
>     self._tasks.put(Task(callable, timeout, discard))
>   File "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 408, in put
>     raise TooManyTasks()
> TooManyTasks
> 2017-06-04 15:54:10,695+0300 ERROR (JsonRpcServer) [jsonrpc.JsonRpcServer]
> could not allocate request thread (__init__:655)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 652,
> in _runRequest
>     JsonRpcTask(self._serveRequest, ctx, request)
>   File "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 154, in
> dispatch
>     self._tasks.put(Task(callable, timeout, discard))
>   File "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 408, in put
>     raise TooManyTasks()
> TooManyTasks

There could be more causes for this issue; we fixed the most evident one so far.
Under which circumstances do you see this error? Could you share the relevant Vdsm logs?

Comment 13 Michael Burman 2017-06-06 08:44:50 UTC
I saw it during and after the steps to reproduce this report.

I followed comment #10:
"You can try imitating a network outage for a few hours and then, after bringing the network back, checking whether all host services came up properly."

Comment 14 Michael Burman 2017-06-06 08:45:47 UTC
Created attachment 1285289 [details]
new vdsm log - testing fix

Comment 15 Michal Skrivanek 2017-06-06 09:13:01 UTC
From the log I can see that a problem was already ongoing at the start, and that it all got fixed at 15:54.
Please clarify the exact sequence of actions and add the missing log, including the start of the issues. If that is not available, please enable DEBUG level in vdsm and reproduce, noting exactly when you disconnected and reconnected the network/storage.
Thanks,
michal

Comment 16 Michael Burman 2017-06-06 11:29:23 UTC
Hi Michal,

I'm sorry, but I don't understand. What exactly should be verified here?
During the outage, should I see those vdsm "too many tasks" errors or not?
What exactly do I need to verify?
What do you mean by the start of the issue? What is the issue? Is it the errors during the network outage, or the errors after the outage?

I was told to bring the network down for some hours and bring it up, to verify that vdsm is running and the host is running in RHV.
I verified that vdsm was running, but I saw a lot of those "too many tasks" errors in vdsm. It seems to me exactly like the original report by Gil.

Please tell:
1) What exactly should be tested?
2) What should I see in the log during the network outage?
3) What should I see after the network is up?
4) Should I see the "too many tasks" errors in the log or not?
5) After the network is up, do I need to see 'jsonrpc.JsonRpcServer ERROR could not allocate request thread' in the vdsmd status?

Thanks,

Comment 17 Michal Skrivanek 2017-06-06 11:45:14 UTC
Hi Michael,
There is a big distinction between the system recovering from the degraded state after the network condition is fixed and the permanent failure described in comment #8.
AFAIU it did recover for you?
In addition, the fix from Francesco was about the tasks that check the libvirt state, whereas your logs look different - a storage operation is the cause of TooManyTasks here.

I believe first and foremost we should verify that the system recovers from the outage without any other intervention. Any other errors not affecting functionality can be followed up separately with lower severity.
I would suggest opening a followup bug on storage monitoring. Regarding the JSON-RPC error, I'm not sure whether it actually affected functionality, but at a minimum it should be handled cleanly, without raising an exception, so it deserves a bug too.

Comment 18 Michael Burman 2017-06-06 14:30:48 UTC
I have tested it again in debug mode - 

Start of network outage - 15:05
End of network outage - 17:21:30

Attaching the vdsm.log

Please let me know if this bug can be verified.
There are a lot of errors in the vdsm log during the network outage; I have no idea if that is expected.

Bottom line: the host is operational in RHV and vdsmd is running.

Comment 19 Michael Burman 2017-06-06 14:31:44 UTC
Created attachment 1285428 [details]
vdsm in debug

Comment 20 Michal Skrivanek 2017-06-07 07:10:12 UTC
(In reply to Michael Burman from comment #18)
> I have tested it again in debug mode - 
> 
> Start of network outage - 15:05
> End of network outage - 17:21:30
> 
> Attaching the vdsm.log
> 
> Please let me know if this bug can be verified.
> There are a lot of errors in the vdsm log during the network outage; I have
> no idea if that is expected.

Well, those should be followed up by storage to clean up the messages. Adding Nir to check that

> Bottom line: the host is operational in RHV and vdsmd is running.

I'm afraid it's not really reproduced the same way as before; I do not see any TooManyTasks exceptions. It would be interesting to see those JSON-RPC-related errors with DEBUG enabled.

Other than that, as long as the system is functioning, I think it's good enough.

Comment 21 Michael Burman 2017-06-11 14:18:48 UTC
According to the last comments, the host is up and operational after a network outage (I can't reproduce the TooManyTasks exceptions and JSON-RPC-related errors seen on the first attempt).
Based on this, setting as verified. For any other issues/bugs, moving back to virt/storage to investigate and report bugs.

Verified on - vdsm-4.19.17

Comment 22 Nir Soffer 2017-06-13 21:09:49 UTC
(In reply to Michal Skrivanek from comment #20)
> (In reply to Michael Burman from comment #18)
> > There are a lot of errors on vdsm log during network outage, i have no idea
> > if that expected.
> 
> Well, those should be followed up by storage to clean up the messages.
> Adding Nir to check that

If you think the errors are a storage issue, open a bug so someone from storage
can take a look.

Comment 25 nijin ashok 2017-07-28 08:31:27 UTC
*** Bug 1475971 has been marked as a duplicate of this bug. ***

Comment 28 Miro Halas 2018-02-06 04:15:45 UTC
I believe this bug might still be present, as we just experienced a network outage which seems to have led to an outage of vdsm. The hypervisor was showing a nonresponsive state and vdsm was showing the too many tasks error. We are using:

[root@lmorlct0113brain1 ~]# yum info vdsm
Loaded plugins: imgbased-persist, product-id, search-disabled-repos, subscription-manager
Installed Packages
Name        : vdsm
Arch        : x86_64
Version     : 4.19.43
Release     : 3.el7ev
Size        : 2.6 M
Repo        : installed
Summary     : Virtual Desktop Server Manager
URL         : http://www.ovirt.org/develop/developer-guide/vdsm/vdsm/
License     : GPLv2+
Description : The VDSM service is required by a Virtualization Manager to manage the
            : Linux hosts. VDSM manages and monitors the host's storage, memory and
            : networks as well as virtual machine creation, other host administration
            : tasks, statistics gathering, and log collection.

and the vdsm service was failing with the following errors:

[root@lmorlct0113brain1 ovirt-hosted-engine-ha]# systemctl status vdsmd
● vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2018-02-05 10:56:00 EST; 8s ago
  Process: 7301 ExecStopPost=/usr/libexec/vdsm/vdsmd_init_common.sh --post-stop (code=exited, status=0/SUCCESS)
  Process: 7304 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start (code=exited, status=0/SUCCESS)
Main PID: 7492 (vdsm)
   CGroup: /system.slice/vdsmd.service
           ├─7492 /usr/bin/python2 /usr/share/vdsm/vdsm
           ├─7557 /usr/libexec/ioprocess --read-pipe-fd 42 --write-pipe-fd 41 --max-threads 10 --max-queued-requests ...
           ├─7567 /usr/libexec/ioprocess --read-pipe-fd 50 --write-pipe-fd 49 --max-threads 10 --max-queued-requests ...
           ├─7575 /usr/libexec/ioprocess --read-pipe-fd 57 --write-pipe-fd 55 --max-threads 10 --max-queued-requests ...
           ├─7593 /usr/libexec/ioprocess --read-pipe-fd 65 --write-pipe-fd 64 --max-threads 10 --max-queued-requests ...
           └─7607 /usr/libexec/ioprocess --read-pipe-fd 72 --write-pipe-fd 70 --max-threads 10 --max-queued-requests ...

Feb 05 10:56:08 lmorlct0113brain1.labs.lenovo.com vdsm[7492]: vdsm jsonrpc.JsonRpcServer ERROR could not allocate ...ead
                                                              Traceback (most recent call last):
                                                                File "/usr/lib/python2.7/site-packages/yajsonrpc/__in...
Feb 05 10:56:08 lmorlct0113brain1.labs.lenovo.com vdsm[7492]: vdsm jsonrpc.JsonRpcServer ERROR could not allocate ...ead
                                                              Traceback (most recent call last):
                                                                File "/usr/lib/python2.7/site-packages/yajsonrpc/__in...
Feb 05 10:56:08 lmorlct0113brain1.labs.lenovo.com vdsm[7492]: vdsm jsonrpc.JsonRpcServer ERROR could not allocate ...ead
                                                              Traceback (most recent call last):
                                                                File "/usr/lib/python2.7/site-packages/yajsonrpc/__in...
Feb 05 10:56:08 lmorlct0113brain1.labs.lenovo.com vdsm[7492]: vdsm jsonrpc.JsonRpcServer ERROR could not allocate ...ead
                                                              Traceback (most recent call last):
                                                                File "/usr/lib/python2.7/site-packages/yajsonrpc/__in...
Feb 05 10:56:08 lmorlct0113brain1.labs.lenovo.com vdsm[7492]: vdsm jsonrpc.JsonRpcServer ERROR could not allocate ...ead
                                                              Traceback (most recent call last):
                                                                File "/usr/lib/python2.7/site-packages/yajsonrpc/__in...
Feb 05 10:56:08 lmorlct0113brain1.labs.lenovo.com vdsm[7492]: vdsm jsonrpc.JsonRpcServer ERROR could not allocate ...ead
                                                              Traceback (most recent call last):
                                                                File "/usr/lib/python2.7/site-packages/yajsonrpc/__in...
Feb 05 10:56:08 lmorlct0113brain1.labs.lenovo.com vdsm[7492]: vdsm jsonrpc.JsonRpcServer ERROR could not allocate ...ead
                                                              Traceback (most recent call last):
                                                                File "/usr/lib/python2.7/site-packages/yajsonrpc/__in...
Feb 05 10:56:08 lmorlct0113brain1.labs.lenovo.com vdsm[7492]: vdsm jsonrpc.JsonRpcServer ERROR could not allocate ...ead
                                                              Traceback (most recent call last):
                                                                File "/usr/lib/python2.7/site-packages/yajsonrpc/__in...
Feb 05 10:56:08 lmorlct0113brain1.labs.lenovo.com vdsm[7492]: vdsm jsonrpc.JsonRpcServer ERROR could not allocate ...ead
                                                              Traceback (most recent call last):
                                                                File "/usr/lib/python2.7/site-packages/yajsonrpc/__in...
Feb 05 10:56:08 lmorlct0113brain1.labs.lenovo.com vdsm[7492]: vdsm jsonrpc.JsonRpcServer ERROR could not allocate ...ead
                                                              Traceback (most recent call last):
                                                                File "/usr/lib/python2.7/site-packages/yajsonrpc/__in...
Hint: Some lines were ellipsized, use -l to show in full.


The libvirt service had the following errors:

[root@lmorlct0113brain1 vdsm]# systemctl status libvirtd
● libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/libvirtd.service.d
           └─unlimited-core.conf
   Active: active (running) since Tue 2018-01-30 22:05:51 EST; 5 days ago
     Docs: man:libvirtd(8)
           http://libvirt.org
 Main PID: 13955 (libvirtd)
   CGroup: /system.slice/libvirtd.service
           └─13955 /usr/sbin/libvirtd --listen

Feb 04 00:11:35 lmorlct0113brain1.labs.lenovo.com libvirtd[13955]: 2018-02-04 05:11:35.936+0000: 13958: error : vi...ps)
Feb 04 00:11:35 lmorlct0113brain1.labs.lenovo.com libvirtd[13955]: 2018-02-04 05:11:35.937+0000: 13958: error : re...led
Feb 04 00:11:35 lmorlct0113brain1.labs.lenovo.com libvirtd[13955]: 2018-02-04 05:11:35.937+0000: 13955: error : vi...ror
Feb 04 00:12:16 lmorlct0113brain1.labs.lenovo.com libvirtd[13955]: 2018-02-04 05:12:16.017+0000: 13959: error : vi...ps)
Feb 04 00:12:16 lmorlct0113brain1.labs.lenovo.com libvirtd[13955]: 2018-02-04 05:12:16.017+0000: 13959: error : re...led
Feb 04 00:12:16 lmorlct0113brain1.labs.lenovo.com libvirtd[13955]: 2018-02-04 05:12:16.018+0000: 13955: error : vi...ror
Feb 04 00:12:38 lmorlct0113brain1.labs.lenovo.com libvirtd[13955]: 2018-02-04 05:12:38.148+0000: 13958: error : vi...ps)
Feb 04 00:12:38 lmorlct0113brain1.labs.lenovo.com libvirtd[13955]: 2018-02-04 05:12:38.148+0000: 13958: error : re...led
Feb 04 00:12:38 lmorlct0113brain1.labs.lenovo.com libvirtd[13955]: 2018-02-04 05:12:38.148+0000: 13955: error : vi...ror
Feb 05 10:55:48 lmorlct0113brain1.labs.lenovo.com libvirtd[13955]: 2018-02-05 15:55:48.523+0000: 13955: error : vi...ror
Hint: Some lines were ellipsized, use -l to show in full.
[root@lmorlct0113brain1 vdsm]# systemctl status libvirtd -l
● libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/libvirtd.service.d
           └─unlimited-core.conf
   Active: active (running) since Tue 2018-01-30 22:05:51 EST; 5 days ago
     Docs: man:libvirtd(8)
           http://libvirt.org
 Main PID: 13955 (libvirtd)
   CGroup: /system.slice/libvirtd.service
           └─13955 /usr/sbin/libvirtd --listen

Feb 04 00:11:35 lmorlct0113brain1.labs.lenovo.com libvirtd[13955]: 2018-02-04 05:11:35.936+0000: 13958: error : virNetSASLSessionServerStart:541 : authentication failed: Failed to start SASL negotiation: -20 (SASL(-13): user not found: unable to canonify user and get auxprops)
Feb 04 00:11:35 lmorlct0113brain1.labs.lenovo.com libvirtd[13955]: 2018-02-04 05:11:35.937+0000: 13958: error : remoteDispatchAuthSaslStart:3568 : authentication failed: authentication failed
Feb 04 00:11:35 lmorlct0113brain1.labs.lenovo.com libvirtd[13955]: 2018-02-04 05:11:35.937+0000: 13955: error : virNetSocketReadWire:1808 : End of file while reading data: Input/output error
Feb 04 00:12:16 lmorlct0113brain1.labs.lenovo.com libvirtd[13955]: 2018-02-04 05:12:16.017+0000: 13959: error : virNetSASLSessionServerStart:541 : authentication failed: Failed to start SASL negotiation: -20 (SASL(-13): user not found: unable to canonify user and get auxprops)
Feb 04 00:12:16 lmorlct0113brain1.labs.lenovo.com libvirtd[13955]: 2018-02-04 05:12:16.017+0000: 13959: error : remoteDispatchAuthSaslStart:3568 : authentication failed: authentication failed
Feb 04 00:12:16 lmorlct0113brain1.labs.lenovo.com libvirtd[13955]: 2018-02-04 05:12:16.018+0000: 13955: error : virNetSocketReadWire:1808 : End of file while reading data: Input/output error
Feb 04 00:12:38 lmorlct0113brain1.labs.lenovo.com libvirtd[13955]: 2018-02-04 05:12:38.148+0000: 13958: error : virNetSASLSessionServerStart:541 : authentication failed: Failed to start SASL negotiation: -20 (SASL(-13): user not found: unable to canonify user and get auxprops)
Feb 04 00:12:38 lmorlct0113brain1.labs.lenovo.com libvirtd[13955]: 2018-02-04 05:12:38.148+0000: 13958: error : remoteDispatchAuthSaslStart:3568 : authentication failed: authentication failed
Feb 04 00:12:38 lmorlct0113brain1.labs.lenovo.com libvirtd[13955]: 2018-02-04 05:12:38.148+0000: 13955: error : virNetSocketReadWire:1808 : End of file while reading data: Input/output error
Feb 05 10:55:48 lmorlct0113brain1.labs.lenovo.com libvirtd[13955]: 2018-02-05 15:55:48.523+0000: 13955: error : virNetSocketReadWire:1808 : End of file while reading data: Input/output error

with the following in the logs

[root@lmorlct0113brain1 log]# grep -i libvirt messages
Feb  5 10:55:48 lmorlct0113brain1 libvirtd: 2018-02-05 15:55:48.523+0000: 13955: error : virNetSocketReadWire:1808 : End of file while reading data: Input/output error
Feb  5 10:55:59 lmorlct0113brain1 vdsmd_init_common.sh: libvirt is already configured for vdsm
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-is-bridged --physdev-out vnet2 -g FP-vnet2' failed: iptables v1.4.21: goto 'FP-vnet2' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-out vnet2 -g FP-vnet2' failed: iptables v1.4.21: goto 'FP-vnet2' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-in -m physdev --physdev-in vnet2 -g FJ-vnet2' failed: iptables v1.4.21: goto 'FJ-vnet2' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-host-in -m physdev --physdev-in vnet2 -g HJ-vnet2' failed: iptables v1.4.21: goto 'HJ-vnet2' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m physdev --physdev-is-bridged --physdev-out vnet2 -g FP-vnet2' failed: ip6tables v1.4.21: goto 'FP-vnet2' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m physdev --physdev-out vnet2 -g FP-vnet2' failed: ip6tables v1.4.21: goto 'FP-vnet2' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-in -m physdev --physdev-in vnet2 -g FJ-vnet2' failed: ip6tables v1.4.21: goto 'FJ-vnet2' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev --physdev-in vnet2 -g HJ-vnet2' failed: ip6tables v1.4.21: goto 'HJ-vnet2' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet2 -j libvirt-J-vnet2' failed: Illegal target name 'libvirt-J-vnet2'.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet2 -j libvirt-P-vnet2' failed: Illegal target name 'libvirt-P-vnet2'.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -L libvirt-J-vnet2' failed: Chain 'libvirt-J-vnet2' doesn't exist.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -L libvirt-P-vnet2' failed: Chain 'libvirt-P-vnet2' doesn't exist.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -F libvirt-J-vnet2' failed: Chain 'libvirt-J-vnet2' doesn't exist.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -X libvirt-J-vnet2' failed: Chain 'libvirt-J-vnet2' doesn't exist.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -F libvirt-P-vnet2' failed: Chain 'libvirt-P-vnet2' doesn't exist.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -X libvirt-P-vnet2' failed: Chain 'libvirt-P-vnet2' doesn't exist.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-is-bridged --physdev-out vnet4 -g FP-vnet4' failed: iptables v1.4.21: goto 'FP-vnet4' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-out vnet4 -g FP-vnet4' failed: iptables v1.4.21: goto 'FP-vnet4' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-in -m physdev --physdev-in vnet4 -g FJ-vnet4' failed: iptables v1.4.21: goto 'FJ-vnet4' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-host-in -m physdev --physdev-in vnet4 -g HJ-vnet4' failed: iptables v1.4.21: goto 'HJ-vnet4' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m physdev --physdev-is-bridged --physdev-out vnet4 -g FP-vnet4' failed: ip6tables v1.4.21: goto 'FP-vnet4' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m physdev --physdev-out vnet4 -g FP-vnet4' failed: ip6tables v1.4.21: goto 'FP-vnet4' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-in -m physdev --physdev-in vnet4 -g FJ-vnet4' failed: ip6tables v1.4.21: goto 'FJ-vnet4' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev --physdev-in vnet4 -g HJ-vnet4' failed: ip6tables v1.4.21: goto 'HJ-vnet4' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet4 -j libvirt-J-vnet4' failed: Illegal target name 'libvirt-J-vnet4'.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet4 -j libvirt-P-vnet4' failed: Illegal target name 'libvirt-P-vnet4'.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -L libvirt-J-vnet4' failed: Chain 'libvirt-J-vnet4' doesn't exist.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -L libvirt-P-vnet4' failed: Chain 'libvirt-P-vnet4' doesn't exist.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -F libvirt-J-vnet4' failed: Chain 'libvirt-J-vnet4' doesn't exist.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -X libvirt-J-vnet4' failed: Chain 'libvirt-J-vnet4' doesn't exist.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -F libvirt-P-vnet4' failed: Chain 'libvirt-P-vnet4' doesn't exist.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -X libvirt-P-vnet4' failed: Chain 'libvirt-P-vnet4' doesn't exist.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-is-bridged --physdev-out vnet0 -g FP-vnet0' failed: iptables v1.4.21: goto 'FP-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-out vnet0 -g FP-vnet0' failed: iptables v1.4.21: goto 'FP-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-in -m physdev --physdev-in vnet0 -g FJ-vnet0' failed: iptables v1.4.21: goto 'FJ-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-host-in -m physdev --physdev-in vnet0 -g HJ-vnet0' failed: iptables v1.4.21: goto 'HJ-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m physdev --physdev-is-bridged --physdev-out vnet0 -g FP-vnet0' failed: ip6tables v1.4.21: goto 'FP-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m physdev --physdev-out vnet0 -g FP-vnet0' failed: ip6tables v1.4.21: goto 'FP-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-in -m physdev --physdev-in vnet0 -g FJ-vnet0' failed: ip6tables v1.4.21: goto 'FJ-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev --physdev-in vnet0 -g HJ-vnet0' failed: ip6tables v1.4.21: goto 'HJ-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet0 -j libvirt-J-vnet0' failed: Illegal target name 'libvirt-J-vnet0'.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j libvirt-P-vnet0' failed: Illegal target name 'libvirt-P-vnet0'.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -L libvirt-J-vnet0' failed: Chain 'libvirt-J-vnet0' doesn't exist.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -L libvirt-P-vnet0' failed: Chain 'libvirt-P-vnet0' doesn't exist.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -F libvirt-J-vnet0' failed: Chain 'libvirt-J-vnet0' doesn't exist.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -X libvirt-J-vnet0' failed: Chain 'libvirt-J-vnet0' doesn't exist.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -F libvirt-P-vnet0' failed: Chain 'libvirt-P-vnet0' doesn't exist.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -X libvirt-P-vnet0' failed: Chain 'libvirt-P-vnet0' doesn't exist.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-is-bridged --physdev-out vnet1 -g FP-vnet1' failed: iptables v1.4.21: goto 'FP-vnet1' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-out vnet1 -g FP-vnet1' failed: iptables v1.4.21: goto 'FP-vnet1' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-in -m physdev --physdev-in vnet1 -g FJ-vnet1' failed: iptables v1.4.21: goto 'FJ-vnet1' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-host-in -m physdev --physdev-in vnet1 -g HJ-vnet1' failed: iptables v1.4.21: goto 'HJ-vnet1' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m physdev --physdev-is-bridged --physdev-out vnet1 -g FP-vnet1' failed: ip6tables v1.4.21: goto 'FP-vnet1' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m physdev --physdev-out vnet1 -g FP-vnet1' failed: ip6tables v1.4.21: goto 'FP-vnet1' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-in -m physdev --physdev-in vnet1 -g FJ-vnet1' failed: ip6tables v1.4.21: goto 'FJ-vnet1' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev --physdev-in vnet1 -g HJ-vnet1' failed: ip6tables v1.4.21: goto 'HJ-vnet1' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet1 -j libvirt-J-vnet1' failed: Illegal target name 'libvirt-J-vnet1'.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet1 -j libvirt-P-vnet1' failed: Illegal target name 'libvirt-P-vnet1'.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -L libvirt-J-vnet1' failed: Chain 'libvirt-J-vnet1' doesn't exist.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -L libvirt-P-vnet1' failed: Chain 'libvirt-P-vnet1' doesn't exist.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -F libvirt-J-vnet1' failed: Chain 'libvirt-J-vnet1' doesn't exist.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -X libvirt-J-vnet1' failed: Chain 'libvirt-J-vnet1' doesn't exist.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -F libvirt-P-vnet1' failed: Chain 'libvirt-P-vnet1' doesn't exist.
Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -X libvirt-P-vnet1' failed: Chain 'libvirt-P-vnet1' doesn't exist.
Feb  5 10:56:00 lmorlct0113brain1 vdsmd_init_common.sh: libvirt: Network Filter Driver error : Requested operation is not valid: nwfilter is in use
Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet3 -j libvirt-J-vnet3' failed: Illegal target name 'libvirt-J-vnet3'.
Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet3 -j libvirt-P-vnet3' failed: Illegal target name 'libvirt-P-vnet3'.
Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -L libvirt-J-vnet3' failed: Chain 'libvirt-J-vnet3' doesn't exist.
Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -L libvirt-P-vnet3' failed: Chain 'libvirt-P-vnet3' doesn't exist.
Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -F libvirt-J-vnet3' failed: Chain 'libvirt-J-vnet3' doesn't exist.
Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -X libvirt-J-vnet3' failed: Chain 'libvirt-J-vnet3' doesn't exist.
Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -F libvirt-P-vnet3' failed: Chain 'libvirt-P-vnet3' doesn't exist.
Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -X libvirt-P-vnet3' failed: Chain 'libvirt-P-vnet3' doesn't exist.
Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-is-bridged --physdev-out vnet3 -g FO-vnet3' failed: iptables v1.4.21: goto 'FO-vnet3' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-out vnet3 -g FO-vnet3' failed: iptables v1.4.21: goto 'FO-vnet3' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-in -m physdev --physdev-in vnet3 -g FI-vnet3' failed: iptables v1.4.21: goto 'FI-vnet3' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-host-in -m physdev --physdev-in vnet3 -g HI-vnet3' failed: iptables v1.4.21: goto 'HI-vnet3' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m physdev --physdev-is-bridged --physdev-out vnet3 -g FO-vnet3' failed: ip6tables v1.4.21: goto 'FO-vnet3' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m physdev --physdev-out vnet3 -g FO-vnet3' failed: ip6tables v1.4.21: goto 'FO-vnet3' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-in -m physdev --physdev-in vnet3 -g FI-vnet3' failed: ip6tables v1.4.21: goto 'FI-vnet3' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev --physdev-in vnet3 -g HI-vnet3' failed: ip6tables v1.4.21: goto 'HI-vnet3' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet3 -j libvirt-I-vnet3' failed: Illegal target name 'libvirt-I-vnet3'.
Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet3 -j libvirt-O-vnet3' failed: Illegal target name 'libvirt-O-vnet3'.
Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -L libvirt-I-vnet3' failed: Chain 'libvirt-I-vnet3' doesn't exist.
Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -L libvirt-O-vnet3' failed: Chain 'libvirt-O-vnet3' doesn't exist.
Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -F libvirt-I-vnet3' failed: Chain 'libvirt-I-vnet3' doesn't exist.
Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -X libvirt-I-vnet3' failed: Chain 'libvirt-I-vnet3' doesn't exist.
Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -F libvirt-O-vnet3' failed: Chain 'libvirt-O-vnet3' doesn't exist.
Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -X libvirt-O-vnet3' failed: Chain 'libvirt-O-vnet3' doesn't exist.
Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -L libvirt-P-vnet3' failed: Chain 'libvirt-P-vnet3' doesn't exist.
Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -E libvirt-P-vnet3 libvirt-O-vnet3' failed: Chain 'libvirt-P-vnet3' doesn't exist.


After restarting libvirt manually:

[root@lmorlct0113brain1 log]# systemctl restart libvirtd
[root@lmorlct0113brain1 log]#
[root@lmorlct0113brain1 log]#
[root@lmorlct0113brain1 log]# systemctl status libvirtd
● libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/libvirtd.service.d
           └─unlimited-core.conf
   Active: active (running) since Mon 2018-02-05 11:10:30 EST; 2s ago
     Docs: man:libvirtd(8)
           http://libvirt.org
 Main PID: 19147 (libvirtd)
   CGroup: /system.slice/libvirtd.service
           └─19147 /usr/sbin/libvirtd --listen

Feb 05 11:10:29 lmorlct0113brain1.labs.lenovo.com systemd[1]: Starting Virtualization daemon...
Feb 05 11:10:30 lmorlct0113brain1.labs.lenovo.com systemd[1]: Started Virtualization daemon.

vdsm seemed to stabilize as well

[root@lmorlct0113brain1 log]# systemctl status vdsmd
● vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2018-02-05 11:11:03 EST; 4s ago
  Process: 19141 ExecStopPost=/usr/libexec/vdsm/vdsmd_init_common.sh --post-stop (code=exited, status=0/SUCCESS)
  Process: 19339 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start (code=exited, status=0/SUCCESS)
 Main PID: 19988 (vdsm)
   CGroup: /system.slice/vdsmd.service
           ├─19988 /usr/bin/python2 /usr/share/vdsm/vdsm
           ├─20051 /usr/libexec/ioprocess --read-pipe-fd 39 --write-pipe-fd 38 --max-threads 10 --max-queued-requests...
           ├─20061 /usr/libexec/ioprocess --read-pipe-fd 46 --write-pipe-fd 44 --max-threads 10 --max-queued-requests...
           ├─20069 /usr/libexec/ioprocess --read-pipe-fd 56 --write-pipe-fd 55 --max-threads 10 --max-queued-requests...
           ├─20077 /usr/libexec/ioprocess --read-pipe-fd 64 --write-pipe-fd 63 --max-threads 10 --max-queued-requests...
           ├─20113 /usr/libexec/ioprocess --read-pipe-fd 75 --write-pipe-fd 74 --max-threads 10 --max-queued-requests...
           ├─20239 /usr/libexec/ioprocess --read-pipe-fd 89 --write-pipe-fd 87 --max-threads 10 --max-queued-requests...
           └─20264 /usr/bin/dd of=/rhev/data-center/5a708dff-0234-0203-023a-000000000090/mastersd/dom_md/inbox iflag=...

Feb 05 11:11:03 lmorlct0113brain1.labs.lenovo.com systemd[1]: Started Virtual Desktop Server Manager.
Feb 05 11:11:04 lmorlct0113brain1.labs.lenovo.com vdsm[19988]: vdsm vds WARN Not ready yet, ignoring event u'|virt...337
Feb 05 11:11:04 lmorlct0113brain1.labs.lenovo.com vdsm[19988]: vdsm vds WARN Not ready yet, ignoring event u'|virt...49'
Feb 05 11:11:04 lmorlct0113brain1.labs.lenovo.com vdsm[19988]: vdsm vds WARN Not ready yet, ignoring event u'|virt...103
Feb 05 11:11:04 lmorlct0113brain1.labs.lenovo.com vdsm[19988]: vdsm vds WARN Not ready yet, ignoring event u'|virt...077
Feb 05 11:11:04 lmorlct0113brain1.labs.lenovo.com vdsm[19988]: vdsm throttled WARN MOM not available.
Feb 05 11:11:04 lmorlct0113brain1.labs.lenovo.com vdsm[19988]: vdsm throttled WARN MOM not available, KSM stats wi...ng.
Feb 05 11:11:04 lmorlct0113brain1.labs.lenovo.com vdsm[19988]: vdsm vds WARN Not ready yet, ignoring event u'|virt...637
Feb 05 11:11:04 lmorlct0113brain1.labs.lenovo.com vdsm[19988]: vdsm vds WARN Not ready yet, ignoring event u'|virt... u'
Feb 05 11:11:04 lmorlct0113brain1.labs.lenovo.com vdsm[19988]: vdsm vds WARN Not ready yet, ignoring event u'|virt..., u
Hint: Some lines were ellipsized, use -l to show in full.

Comment 29 Michael Burman 2018-02-06 06:50:06 UTC
(In reply to Miro Halas from comment #28)
> I believe this bug might still be present, as we just experienced a network
> outage which seems to have led to an outage of vdsm. The hypervisor was
> showing a nonresponsive state and vdsm was showing the too many tasks error.
> We are using:
> 
> [root@lmorlct0113brain1 ~]# yum info vdsm
> Loaded plugins: imgbased-persist, product-id, search-disabled-repos,
> subscription-manager
> Installed Packages
> Name        : vdsm
> Arch        : x86_64
> Version     : 4.19.43
> Release     : 3.el7ev
> Size        : 2.6 M
> Repo        : installed
> Summary     : Virtual Desktop Server Manager
> URL         : http://www.ovirt.org/develop/developer-guide/vdsm/vdsm/
> License     : GPLv2+
> Description : The VDSM service is required by a Virtualization Manager to
> manage the
>             : Linux hosts. VDSM manages and monitors the host's storage,
> memory and
>             : networks as well as virtual machine creation, other host
> administration
>             : tasks, statistics gathering, and log collection.
> 
> and vdsm service was failing with following errors
> 
> [root@lmorlct0113brain1 ovirt-hosted-engine-ha]# systemctl status vdsmd
> ● vdsmd.service - Virtual Desktop Server Manager
>    Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor
> preset: enabled)
>    Active: active (running) since Mon 2018-02-05 10:56:00 EST; 8s ago
>   Process: 7301 ExecStopPost=/usr/libexec/vdsm/vdsmd_init_common.sh
> --post-stop (code=exited, status=0/SUCCESS)
>   Process: 7304 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh
> --pre-start (code=exited, status=0/SUCCESS)
> Main PID: 7492 (vdsm)
>    CGroup: /system.slice/vdsmd.service
>            ├─7492 /usr/bin/python2 /usr/share/vdsm/vdsm
>            ├─7557 /usr/libexec/ioprocess --read-pipe-fd 42 --write-pipe-fd
> 41 --max-threads 10 --max-queued-requests ...
>            ├─7567 /usr/libexec/ioprocess --read-pipe-fd 50 --write-pipe-fd
> 49 --max-threads 10 --max-queued-requests ...
>            ├─7575 /usr/libexec/ioprocess --read-pipe-fd 57 --write-pipe-fd
> 55 --max-threads 10 --max-queued-requests ...
>            ├─7593 /usr/libexec/ioprocess --read-pipe-fd 65 --write-pipe-fd
> 64 --max-threads 10 --max-queued-requests ...
>            └─7607 /usr/libexec/ioprocess --read-pipe-fd 72 --write-pipe-fd
> 70 --max-threads 10 --max-queued-requests ...
> 
> Feb 05 10:56:08 lmorlct0113brain1.labs.lenovo.com vdsm[7492]: vdsm
> jsonrpc.JsonRpcServer ERROR could not allocate ...ead
>                                                               Traceback
> (most recent call last):
>                                                                 File
> "/usr/lib/python2.7/site-packages/yajsonrpc/__in...
> Feb 05 10:56:08 lmorlct0113brain1.labs.lenovo.com vdsm[7492]: vdsm
> jsonrpc.JsonRpcServer ERROR could not allocate ...ead
>                                                               Traceback
> (most recent call last):
>                                                                 File
> "/usr/lib/python2.7/site-packages/yajsonrpc/__in...
> Feb 05 10:56:08 lmorlct0113brain1.labs.lenovo.com vdsm[7492]: vdsm
> jsonrpc.JsonRpcServer ERROR could not allocate ...ead
>                                                               Traceback
> (most recent call last):
>                                                                 File
> "/usr/lib/python2.7/site-packages/yajsonrpc/__in...
> Feb 05 10:56:08 lmorlct0113brain1.labs.lenovo.com vdsm[7492]: vdsm
> jsonrpc.JsonRpcServer ERROR could not allocate ...ead
>                                                               Traceback
> (most recent call last):
>                                                                 File
> "/usr/lib/python2.7/site-packages/yajsonrpc/__in...
> Feb 05 10:56:08 lmorlct0113brain1.labs.lenovo.com vdsm[7492]: vdsm
> jsonrpc.JsonRpcServer ERROR could not allocate ...ead
>                                                               Traceback
> (most recent call last):
>                                                                 File
> "/usr/lib/python2.7/site-packages/yajsonrpc/__in...
> Feb 05 10:56:08 lmorlct0113brain1.labs.lenovo.com vdsm[7492]: vdsm
> jsonrpc.JsonRpcServer ERROR could not allocate ...ead
>                                                               Traceback
> (most recent call last):
>                                                                 File
> "/usr/lib/python2.7/site-packages/yajsonrpc/__in...
> Feb 05 10:56:08 lmorlct0113brain1.labs.lenovo.com vdsm[7492]: vdsm
> jsonrpc.JsonRpcServer ERROR could not allocate ...ead
>                                                               Traceback
> (most recent call last):
>                                                                 File
> "/usr/lib/python2.7/site-packages/yajsonrpc/__in...
> Feb 05 10:56:08 lmorlct0113brain1.labs.lenovo.com vdsm[7492]: vdsm
> jsonrpc.JsonRpcServer ERROR could not allocate ...ead
>                                                               Traceback
> (most recent call last):
>                                                                 File
> "/usr/lib/python2.7/site-packages/yajsonrpc/__in...
> Feb 05 10:56:08 lmorlct0113brain1.labs.lenovo.com vdsm[7492]: vdsm
> jsonrpc.JsonRpcServer ERROR could not allocate ...ead
>                                                               Traceback
> (most recent call last):
>                                                                 File
> "/usr/lib/python2.7/site-packages/yajsonrpc/__in...
> Feb 05 10:56:08 lmorlct0113brain1.labs.lenovo.com vdsm[7492]: vdsm
> jsonrpc.JsonRpcServer ERROR could not allocate ...ead
>                                                               Traceback
> (most recent call last):
>                                                                 File
> "/usr/lib/python2.7/site-packages/yajsonrpc/__in...
> Hint: Some lines were ellipsized, use -l to show in full.
> 
> 
> the libvirt service had the following errors
> 
> [root@lmorlct0113brain1 vdsm]# systemctl status libvirtd
> ● libvirtd.service - Virtualization daemon
>    Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor
> preset: enabled)
>   Drop-In: /etc/systemd/system/libvirtd.service.d
>            └─unlimited-core.conf
>    Active: active (running) since Tue 2018-01-30 22:05:51 EST; 5 days ago
>      Docs: man:libvirtd(8)
>            http://libvirt.org
>  Main PID: 13955 (libvirtd)
>    CGroup: /system.slice/libvirtd.service
>            └─13955 /usr/sbin/libvirtd --listen
> 
> Feb 04 00:11:35 lmorlct0113brain1.labs.lenovo.com libvirtd[13955]:
> 2018-02-04 05:11:35.936+0000: 13958: error : vi...ps)
> Feb 04 00:11:35 lmorlct0113brain1.labs.lenovo.com libvirtd[13955]:
> 2018-02-04 05:11:35.937+0000: 13958: error : re...led
> Feb 04 00:11:35 lmorlct0113brain1.labs.lenovo.com libvirtd[13955]:
> 2018-02-04 05:11:35.937+0000: 13955: error : vi...ror
> Feb 04 00:12:16 lmorlct0113brain1.labs.lenovo.com libvirtd[13955]:
> 2018-02-04 05:12:16.017+0000: 13959: error : vi...ps)
> Feb 04 00:12:16 lmorlct0113brain1.labs.lenovo.com libvirtd[13955]:
> 2018-02-04 05:12:16.017+0000: 13959: error : re...led
> Feb 04 00:12:16 lmorlct0113brain1.labs.lenovo.com libvirtd[13955]:
> 2018-02-04 05:12:16.018+0000: 13955: error : vi...ror
> Feb 04 00:12:38 lmorlct0113brain1.labs.lenovo.com libvirtd[13955]:
> 2018-02-04 05:12:38.148+0000: 13958: error : vi...ps)
> Feb 04 00:12:38 lmorlct0113brain1.labs.lenovo.com libvirtd[13955]:
> 2018-02-04 05:12:38.148+0000: 13958: error : re...led
> Feb 04 00:12:38 lmorlct0113brain1.labs.lenovo.com libvirtd[13955]:
> 2018-02-04 05:12:38.148+0000: 13955: error : vi...ror
> Feb 05 10:55:48 lmorlct0113brain1.labs.lenovo.com libvirtd[13955]:
> 2018-02-05 15:55:48.523+0000: 13955: error : vi...ror
> Hint: Some lines were ellipsized, use -l to show in full.
> [root@lmorlct0113brain1 vdsm]# systemctl status libvirtd -l
> ● libvirtd.service - Virtualization daemon
>    Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor
> preset: enabled)
>   Drop-In: /etc/systemd/system/libvirtd.service.d
>            └─unlimited-core.conf
>    Active: active (running) since Tue 2018-01-30 22:05:51 EST; 5 days ago
>      Docs: man:libvirtd(8)
>            http://libvirt.org
>  Main PID: 13955 (libvirtd)
>    CGroup: /system.slice/libvirtd.service
>            └─13955 /usr/sbin/libvirtd --listen
> 
> Feb 04 00:11:35 lmorlct0113brain1.labs.lenovo.com libvirtd[13955]:
> 2018-02-04 05:11:35.936+0000: 13958: error :
> virNetSASLSessionServerStart:541 : authentication failed: Failed to start
> SASL negotiation: -20 (SASL(-13): user not found: unable to canonify user
> and get auxprops)
> Feb 04 00:11:35 lmorlct0113brain1.labs.lenovo.com libvirtd[13955]:
> 2018-02-04 05:11:35.937+0000: 13958: error :
> remoteDispatchAuthSaslStart:3568 : authentication failed: authentication
> failed
> Feb 04 00:11:35 lmorlct0113brain1.labs.lenovo.com libvirtd[13955]:
> 2018-02-04 05:11:35.937+0000: 13955: error : virNetSocketReadWire:1808 : End
> of file while reading data: Input/output error
> Feb 04 00:12:16 lmorlct0113brain1.labs.lenovo.com libvirtd[13955]:
> 2018-02-04 05:12:16.017+0000: 13959: error :
> virNetSASLSessionServerStart:541 : authentication failed: Failed to start
> SASL negotiation: -20 (SASL(-13): user not found: unable to canonify user
> and get auxprops)
> Feb 04 00:12:16 lmorlct0113brain1.labs.lenovo.com libvirtd[13955]:
> 2018-02-04 05:12:16.017+0000: 13959: error :
> remoteDispatchAuthSaslStart:3568 : authentication failed: authentication
> failed
> Feb 04 00:12:16 lmorlct0113brain1.labs.lenovo.com libvirtd[13955]:
> 2018-02-04 05:12:16.018+0000: 13955: error : virNetSocketReadWire:1808 : End
> of file while reading data: Input/output error
> Feb 04 00:12:38 lmorlct0113brain1.labs.lenovo.com libvirtd[13955]:
> 2018-02-04 05:12:38.148+0000: 13958: error :
> virNetSASLSessionServerStart:541 : authentication failed: Failed to start
> SASL negotiation: -20 (SASL(-13): user not found: unable to canonify user
> and get auxprops)
> Feb 04 00:12:38 lmorlct0113brain1.labs.lenovo.com libvirtd[13955]:
> 2018-02-04 05:12:38.148+0000: 13958: error :
> remoteDispatchAuthSaslStart:3568 : authentication failed: authentication
> failed
> Feb 04 00:12:38 lmorlct0113brain1.labs.lenovo.com libvirtd[13955]:
> 2018-02-04 05:12:38.148+0000: 13955: error : virNetSocketReadWire:1808 : End
> of file while reading data: Input/output error
> Feb 05 10:55:48 lmorlct0113brain1.labs.lenovo.com libvirtd[13955]:
> 2018-02-05 15:55:48.523+0000: 13955: error : virNetSocketReadWire:1808 : End
> of file while reading data: Input/output error
> 
> with the following in the logs
> 
> [root@lmorlct0113brain1 log]# grep -i libvirt messages
> Feb  5 10:55:48 lmorlct0113brain1 libvirtd: 2018-02-05 15:55:48.523+0000:
> 13955: error : virNetSocketReadWire:1808 : End of file while reading data:
> Input/output error
> Feb  5 10:55:59 lmorlct0113brain1 vdsmd_init_common.sh: libvirt is already
> configured for vdsm
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-is-bridged
> --physdev-out vnet2 -g FP-vnet2' failed: iptables v1.4.21: goto 'FP-vnet2'
> is not a chain#012#012Try `iptables -h' or 'iptables --help' for more
> information.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-out vnet2 -g
> FP-vnet2' failed: iptables v1.4.21: goto 'FP-vnet2' is not a
> chain#012#012Try `iptables -h' or 'iptables --help' for more information.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/iptables -w2 -w -D libvirt-in -m physdev --physdev-in vnet2 -g
> FJ-vnet2' failed: iptables v1.4.21: goto 'FJ-vnet2' is not a
> chain#012#012Try `iptables -h' or 'iptables --help' for more information.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/iptables -w2 -w -D libvirt-host-in -m physdev --physdev-in vnet2
> -g HJ-vnet2' failed: iptables v1.4.21: goto 'HJ-vnet2' is not a
> chain#012#012Try `iptables -h' or 'iptables --help' for more information.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m physdev --physdev-is-bridged
> --physdev-out vnet2 -g FP-vnet2' failed: ip6tables v1.4.21: goto 'FP-vnet2'
> is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more
> information.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m physdev --physdev-out vnet2 -g
> FP-vnet2' failed: ip6tables v1.4.21: goto 'FP-vnet2' is not a
> chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ip6tables -w2 -w -D libvirt-in -m physdev --physdev-in vnet2 -g
> FJ-vnet2' failed: ip6tables v1.4.21: goto 'FJ-vnet2' is not a
> chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev --physdev-in vnet2
> -g HJ-vnet2' failed: ip6tables v1.4.21: goto 'HJ-vnet2' is not a
> chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet2 -j
> libvirt-J-vnet2' failed: Illegal target name 'libvirt-J-vnet2'.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet2 -j
> libvirt-P-vnet2' failed: Illegal target name 'libvirt-P-vnet2'.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -L libvirt-J-vnet2' failed: Chain
> 'libvirt-J-vnet2' doesn't exist.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -L libvirt-P-vnet2' failed: Chain
> 'libvirt-P-vnet2' doesn't exist.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -F libvirt-J-vnet2' failed: Chain
> 'libvirt-J-vnet2' doesn't exist.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -X libvirt-J-vnet2' failed: Chain
> 'libvirt-J-vnet2' doesn't exist.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -F libvirt-P-vnet2' failed: Chain
> 'libvirt-P-vnet2' doesn't exist.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -X libvirt-P-vnet2' failed: Chain
> 'libvirt-P-vnet2' doesn't exist.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-is-bridged
> --physdev-out vnet4 -g FP-vnet4' failed: iptables v1.4.21: goto 'FP-vnet4'
> is not a chain#012#012Try `iptables -h' or 'iptables --help' for more
> information.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-out vnet4 -g
> FP-vnet4' failed: iptables v1.4.21: goto 'FP-vnet4' is not a
> chain#012#012Try `iptables -h' or 'iptables --help' for more information.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/iptables -w2 -w -D libvirt-in -m physdev --physdev-in vnet4 -g
> FJ-vnet4' failed: iptables v1.4.21: goto 'FJ-vnet4' is not a
> chain#012#012Try `iptables -h' or 'iptables --help' for more information.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/iptables -w2 -w -D libvirt-host-in -m physdev --physdev-in vnet4
> -g HJ-vnet4' failed: iptables v1.4.21: goto 'HJ-vnet4' is not a
> chain#012#012Try `iptables -h' or 'iptables --help' for more information.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m physdev --physdev-is-bridged
> --physdev-out vnet4 -g FP-vnet4' failed: ip6tables v1.4.21: goto 'FP-vnet4'
> is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more
> information.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m physdev --physdev-out vnet4 -g
> FP-vnet4' failed: ip6tables v1.4.21: goto 'FP-vnet4' is not a
> chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ip6tables -w2 -w -D libvirt-in -m physdev --physdev-in vnet4 -g
> FJ-vnet4' failed: ip6tables v1.4.21: goto 'FJ-vnet4' is not a
> chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev --physdev-in vnet4
> -g HJ-vnet4' failed: ip6tables v1.4.21: goto 'HJ-vnet4' is not a
> chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet4 -j
> libvirt-J-vnet4' failed: Illegal target name 'libvirt-J-vnet4'.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet4 -j
> libvirt-P-vnet4' failed: Illegal target name 'libvirt-P-vnet4'.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -L libvirt-J-vnet4' failed: Chain
> 'libvirt-J-vnet4' doesn't exist.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -L libvirt-P-vnet4' failed: Chain
> 'libvirt-P-vnet4' doesn't exist.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -F libvirt-J-vnet4' failed: Chain
> 'libvirt-J-vnet4' doesn't exist.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -X libvirt-J-vnet4' failed: Chain
> 'libvirt-J-vnet4' doesn't exist.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -F libvirt-P-vnet4' failed: Chain
> 'libvirt-P-vnet4' doesn't exist.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -X libvirt-P-vnet4' failed: Chain
> 'libvirt-P-vnet4' doesn't exist.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-is-bridged
> --physdev-out vnet0 -g FP-vnet0' failed: iptables v1.4.21: goto 'FP-vnet0'
> is not a chain#012#012Try `iptables -h' or 'iptables --help' for more
> information.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-out vnet0 -g
> FP-vnet0' failed: iptables v1.4.21: goto 'FP-vnet0' is not a
> chain#012#012Try `iptables -h' or 'iptables --help' for more information.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/iptables -w2 -w -D libvirt-in -m physdev --physdev-in vnet0 -g
> FJ-vnet0' failed: iptables v1.4.21: goto 'FJ-vnet0' is not a
> chain#012#012Try `iptables -h' or 'iptables --help' for more information.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/iptables -w2 -w -D libvirt-host-in -m physdev --physdev-in vnet0
> -g HJ-vnet0' failed: iptables v1.4.21: goto 'HJ-vnet0' is not a
> chain#012#012Try `iptables -h' or 'iptables --help' for more information.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m physdev --physdev-is-bridged
> --physdev-out vnet0 -g FP-vnet0' failed: ip6tables v1.4.21: goto 'FP-vnet0'
> is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more
> information.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m physdev --physdev-out vnet0 -g
> FP-vnet0' failed: ip6tables v1.4.21: goto 'FP-vnet0' is not a
> chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ip6tables -w2 -w -D libvirt-in -m physdev --physdev-in vnet0 -g
> FJ-vnet0' failed: ip6tables v1.4.21: goto 'FJ-vnet0' is not a
> chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev --physdev-in vnet0
> -g HJ-vnet0' failed: ip6tables v1.4.21: goto 'HJ-vnet0' is not a
> chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet0 -j
> libvirt-J-vnet0' failed: Illegal target name 'libvirt-J-vnet0'.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j
> libvirt-P-vnet0' failed: Illegal target name 'libvirt-P-vnet0'.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -L libvirt-J-vnet0' failed: Chain
> 'libvirt-J-vnet0' doesn't exist.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -L libvirt-P-vnet0' failed: Chain
> 'libvirt-P-vnet0' doesn't exist.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -F libvirt-J-vnet0' failed: Chain
> 'libvirt-J-vnet0' doesn't exist.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -X libvirt-J-vnet0' failed: Chain
> 'libvirt-J-vnet0' doesn't exist.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -F libvirt-P-vnet0' failed: Chain
> 'libvirt-P-vnet0' doesn't exist.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -X libvirt-P-vnet0' failed: Chain
> 'libvirt-P-vnet0' doesn't exist.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-is-bridged
> --physdev-out vnet1 -g FP-vnet1' failed: iptables v1.4.21: goto 'FP-vnet1'
> is not a chain#012#012Try `iptables -h' or 'iptables --help' for more
> information.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-out vnet1 -g
> FP-vnet1' failed: iptables v1.4.21: goto 'FP-vnet1' is not a
> chain#012#012Try `iptables -h' or 'iptables --help' for more information.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/iptables -w2 -w -D libvirt-in -m physdev --physdev-in vnet1 -g
> FJ-vnet1' failed: iptables v1.4.21: goto 'FJ-vnet1' is not a
> chain#012#012Try `iptables -h' or 'iptables --help' for more information.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/iptables -w2 -w -D libvirt-host-in -m physdev --physdev-in vnet1
> -g HJ-vnet1' failed: iptables v1.4.21: goto 'HJ-vnet1' is not a
> chain#012#012Try `iptables -h' or 'iptables --help' for more information.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m physdev --physdev-is-bridged
> --physdev-out vnet1 -g FP-vnet1' failed: ip6tables v1.4.21: goto 'FP-vnet1'
> is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more
> information.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m physdev --physdev-out vnet1 -g
> FP-vnet1' failed: ip6tables v1.4.21: goto 'FP-vnet1' is not a
> chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ip6tables -w2 -w -D libvirt-in -m physdev --physdev-in vnet1 -g
> FJ-vnet1' failed: ip6tables v1.4.21: goto 'FJ-vnet1' is not a
> chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev --physdev-in vnet1
> -g HJ-vnet1' failed: ip6tables v1.4.21: goto 'HJ-vnet1' is not a
> chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet1 -j
> libvirt-J-vnet1' failed: Illegal target name 'libvirt-J-vnet1'.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet1 -j
> libvirt-P-vnet1' failed: Illegal target name 'libvirt-P-vnet1'.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -L libvirt-J-vnet1' failed: Chain
> 'libvirt-J-vnet1' doesn't exist.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -L libvirt-P-vnet1' failed: Chain
> 'libvirt-P-vnet1' doesn't exist.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -F libvirt-J-vnet1' failed: Chain
> 'libvirt-J-vnet1' doesn't exist.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -X libvirt-J-vnet1' failed: Chain
> 'libvirt-J-vnet1' doesn't exist.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -F libvirt-P-vnet1' failed: Chain
> 'libvirt-P-vnet1' doesn't exist.
> Feb  5 10:56:00 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -X libvirt-P-vnet1' failed: Chain
> 'libvirt-P-vnet1' doesn't exist.
> Feb  5 10:56:00 lmorlct0113brain1 vdsmd_init_common.sh: libvirt: Network
> Filter Driver error : Requested operation is not valid: nwfilter is in use
> Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet3 -j
> libvirt-J-vnet3' failed: Illegal target name 'libvirt-J-vnet3'.
> Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet3 -j
> libvirt-P-vnet3' failed: Illegal target name 'libvirt-P-vnet3'.
> Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -L libvirt-J-vnet3' failed: Chain
> 'libvirt-J-vnet3' doesn't exist.
> Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -L libvirt-P-vnet3' failed: Chain
> 'libvirt-P-vnet3' doesn't exist.
> Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -F libvirt-J-vnet3' failed: Chain
> 'libvirt-J-vnet3' doesn't exist.
> Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -X libvirt-J-vnet3' failed: Chain
> 'libvirt-J-vnet3' doesn't exist.
> Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -F libvirt-P-vnet3' failed: Chain
> 'libvirt-P-vnet3' doesn't exist.
> Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -X libvirt-P-vnet3' failed: Chain
> 'libvirt-P-vnet3' doesn't exist.
> Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-is-bridged
> --physdev-out vnet3 -g FO-vnet3' failed: iptables v1.4.21: goto 'FO-vnet3'
> is not a chain#012#012Try `iptables -h' or 'iptables --help' for more
> information.
> Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-out vnet3 -g
> FO-vnet3' failed: iptables v1.4.21: goto 'FO-vnet3' is not a
> chain#012#012Try `iptables -h' or 'iptables --help' for more information.
> Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/iptables -w2 -w -D libvirt-in -m physdev --physdev-in vnet3 -g
> FI-vnet3' failed: iptables v1.4.21: goto 'FI-vnet3' is not a
> chain#012#012Try `iptables -h' or 'iptables --help' for more information.
> Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/iptables -w2 -w -D libvirt-host-in -m physdev --physdev-in vnet3
> -g HI-vnet3' failed: iptables v1.4.21: goto 'HI-vnet3' is not a
> chain#012#012Try `iptables -h' or 'iptables --help' for more information.
> Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m physdev --physdev-is-bridged
> --physdev-out vnet3 -g FO-vnet3' failed: ip6tables v1.4.21: goto 'FO-vnet3'
> is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more
> information.
> Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m physdev --physdev-out vnet3 -g
> FO-vnet3' failed: ip6tables v1.4.21: goto 'FO-vnet3' is not a
> chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
> Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ip6tables -w2 -w -D libvirt-in -m physdev --physdev-in vnet3 -g
> FI-vnet3' failed: ip6tables v1.4.21: goto 'FI-vnet3' is not a
> chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
> Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev --physdev-in vnet3
> -g HI-vnet3' failed: ip6tables v1.4.21: goto 'HI-vnet3' is not a
> chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
> Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet3 -j
> libvirt-I-vnet3' failed: Illegal target name 'libvirt-I-vnet3'.
> Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet3 -j
> libvirt-O-vnet3' failed: Illegal target name 'libvirt-O-vnet3'.
> Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -L libvirt-I-vnet3' failed: Chain
> 'libvirt-I-vnet3' doesn't exist.
> Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -L libvirt-O-vnet3' failed: Chain
> 'libvirt-O-vnet3' doesn't exist.
> Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -F libvirt-I-vnet3' failed: Chain
> 'libvirt-I-vnet3' doesn't exist.
> Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -X libvirt-I-vnet3' failed: Chain
> 'libvirt-I-vnet3' doesn't exist.
> Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -F libvirt-O-vnet3' failed: Chain
> 'libvirt-O-vnet3' doesn't exist.
> Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -X libvirt-O-vnet3' failed: Chain
> 'libvirt-O-vnet3' doesn't exist.
> Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -L libvirt-P-vnet3' failed: Chain
> 'libvirt-P-vnet3' doesn't exist.
> Feb  5 11:04:49 lmorlct0113brain1 firewalld[1503]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -E libvirt-P-vnet3 libvirt-O-vnet3'
> failed: Chain 'libvirt-P-vnet3' doesn't exist.
> 
> 
> after restarting libvirt manually
> 
> [root@lmorlct0113brain1 log]# systemctl restart libvirtd
> [root@lmorlct0113brain1 log]#
> [root@lmorlct0113brain1 log]#
> [root@lmorlct0113brain1 log]# systemctl status libvirtd
> ● libvirtd.service - Virtualization daemon
>    Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor
> preset: enabled)
>   Drop-In: /etc/systemd/system/libvirtd.service.d
>            └─unlimited-core.conf
>    Active: active (running) since Mon 2018-02-05 11:10:30 EST; 2s ago
>      Docs: man:libvirtd(8)
>            http://libvirt.org
>  Main PID: 19147 (libvirtd)
>    CGroup: /system.slice/libvirtd.service
>            └─19147 /usr/sbin/libvirtd --listen
> 
> Feb 05 11:10:29 lmorlct0113brain1.labs.lenovo.com systemd[1]: Starting
> Virtualization daemon...
> Feb 05 11:10:30 lmorlct0113brain1.labs.lenovo.com systemd[1]: Started
> Virtualization daemon.
> 
> vdsm seemed to stabilize as well
> [vdsmd status output snipped; identical to the output shown at the end of comment 28 above]

Hi Miro,
This doesn't seem to be the same issue.
Please report a fresh bug describing exactly what happened and how it can be reproduced. Was vdsmd running after the outage?

There are 2 different issues here:
1) Was vdsmd running and operational after the outage? Such a scenario was verified (by me), but maybe we still have a bug in such a scenario. If vdsmd wasn't running after the network outage, please report a fresh bug for us with the vdsmd, supervdsmd and engine logs (a collection sketch follows below).

2) The "too many tasks" error, which should be fixed by the storage team (I don't think anyone has reported such a bug to them!). If you see those errors, please file a different bug to handle them.
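
For reference, a minimal sketch for collecting those logs, assuming default install locations (/var/log/vdsm on the host, /var/log/ovirt-engine on the engine machine):

# on the host
[root@host ~]# tar czf host-logs.tar.gz /var/log/vdsm/vdsm.log* /var/log/vdsm/supervdsm.log*
# on the engine machine
[root@engine ~]# tar czf engine-logs.tar.gz /var/log/ovirt-engine/engine.log*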

Many thanks.

