I saw in the L3 agent's logs that it is crashing fairly often. A typical crash looks like this:

2020-05-11 23:56:06.947 136139 DEBUG neutron.agent.l3.router_info [-] removing port {'id': 'f64ca38d-dc13-4ce2-8641-9563a297c96e', 'name': '', 'network_id': 'f4aea80e-30cf-4482-91b9-c210e9a919dc', 'tenant_id': 'ff7f213ca16e4663bf2f9c629389b062', 'mac_address': 'fa:16:3e:be:53:e6', 'admin_state_up': True, 'status': 'DOWN', 'device_id': '69351713-037a-46ca-b4bb-59cdc7c37b1b', 'device_owner': 'network:router_interface_distributed', 'fixed_ips': [{'subnet_id': '803eb270-25a5-4536-9c0f-b515dda3b820', 'ip_address': '2001:db8::1', 'prefixlen': 64}], 'allowed_address_pairs': [], 'extra_dhcp_opts': [], 'security_groups': [], 'description': '', 'binding:vnic_type': 'normal', 'binding:profile': {}, 'binding:host_id': '', 'binding:vif_type': 'distributed', 'binding:vif_details': {}, 'qos_policy_id': None, 'port_security_enabled': False, 'dns_name': '', 'dns_assignment': [{'ip_address': '2001:db8::1', 'hostname': 'host-2001-db8--1', 'fqdn': 'host-2001-db8--1.openstacklocal.'}], 'resource_request': None, 'ip_allocation': 'immediate', 'tags': [], 'created_at': '2020-05-11T23:52:09Z', 'updated_at': '2020-05-11T23:52:20Z', 'revision_number': 7, 'project_id': 'ff7f213ca16e4663bf2f9c629389b062', 'subnets': [{'id': '803eb270-25a5-4536-9c0f-b515dda3b820', 'cidr': '2001:db8::/64', 'gateway_ip': '2001:db8::1', 'dns_nameservers': [], 'ipv6_ra_mode': 'dhcpv6-stateless', 'subnetpool_id': None}], 'extra_subnets': [{'id': '0c51c948-69b0-4122-9237-a6dda322a0ef', 'cidr': '10.100.0.0/28', 'gateway_ip': '10.100.0.1', 'dns_nameservers': [], 'ipv6_ra_mode': None, 'subnetpool_id': None}], 'address_scopes': {'4': None, '6': None}, 'mtu': 1450} from internal_ports cache _process_internal_ports /usr/lib/python3.6/site-packages/neutron/agent/l3/router_info.py:621
2020-05-11 23:56:06.948 136139 DEBUG neutron.agent.l3.router_info [-] Spawning radvd daemon in router device: 69351713-037a-46ca-b4bb-59cdc7c37b1b enable_radvd /usr/lib/python3.6/site-packages/neutron/agent/l3/router_info.py:578
2020-05-11 23:56:06.949 136139 DEBUG neutron.agent.linux.utils [-] Running command (rootwrap daemon): ['radvd-kill', '9', '255351'] execute_rootwrap_daemon /usr/lib/python3.6/site-packages/neutron/agent/linux/utils.py:103
2020-05-11 23:56:07.874 137076 DEBUG oslo.privsep.daemon [-] privsep: reply[140114300415016]: (4, True) _call_back /usr/lib/python3.6/site-packages/oslo_privsep/daemon.py:475
2020-05-11 23:56:07.962 137076 DEBUG oslo.privsep.daemon [-] privsep: reply[140114300415016]: (4, None) _call_back /usr/lib/python3.6/site-packages/oslo_privsep/daemon.py:475
2020-05-11 23:56:08.056 136139 DEBUG oslo_concurrency.lockutils [req-2068d0fc-4ef7-4f5b-97eb-3adf30628d45 - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:265
2020-05-11 23:56:08.056 136139 DEBUG oslo_concurrency.lockutils [req-2068d0fc-4ef7-4f5b-97eb-3adf30628d45 - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:281
2020-05-11 23:56:08.057 136139 INFO oslo_rootwrap.client [req-2068d0fc-4ef7-4f5b-97eb-3adf30628d45 - - - - -] Stopping rootwrap daemon process with pid=207658
2020-05-11 23:56:08.062 136139 CRITICAL neutron [req-2068d0fc-4ef7-4f5b-97eb-3adf30628d45 - - - - -] Unhandled error: RuntimeError: Second simultaneous read on fileno 16 detected. Unless you really know what you're doing, make sure that only one greenthread can read any particular socket. Consider using a pools.Pool.
If you do know what you're doing and want to disable this error, call eventlet.debug.hub_prevent_multiple_readers(False) - MY THREAD=<built-in method switch of greenlet.greenlet object at 0x7f6ef135f6d0>; THAT THREAD=FdListener('read', 16, <built-in method switch of greenlet.greenlet object at 0x7f6ee71aa930>, <built-in method throw of greenlet.greenlet object at 0x7f6ee71aa930>)
2020-05-11 23:56:08.062 136139 ERROR neutron Traceback (most recent call last):
2020-05-11 23:56:08.062 136139 ERROR neutron   File "/usr/lib64/python3.6/weakref.py", line 624, in _exitfunc
2020-05-11 23:56:08.062 136139 ERROR neutron     f()
2020-05-11 23:56:08.062 136139 ERROR neutron   File "/usr/lib64/python3.6/weakref.py", line 548, in __call__
2020-05-11 23:56:08.062 136139 ERROR neutron     return info.func(*info.args, **(info.kwargs or {}))
2020-05-11 23:56:08.062 136139 ERROR neutron   File "/usr/lib/python3.6/site-packages/oslo_rootwrap/client.py", line 103, in _shutdown
2020-05-11 23:56:08.062 136139 ERROR neutron     manager.rootwrap().shutdown()
2020-05-11 23:56:08.062 136139 ERROR neutron   File "<string>", line 2, in shutdown
2020-05-11 23:56:08.062 136139 ERROR neutron   File "/usr/lib64/python3.6/multiprocessing/managers.py", line 757, in _callmethod
2020-05-11 23:56:08.062 136139 ERROR neutron     kind, result = conn.recv()
2020-05-11 23:56:08.062 136139 ERROR neutron   File "/usr/lib/python3.6/site-packages/oslo_rootwrap/jsonrpc.py", line 132, in recv
2020-05-11 23:56:08.062 136139 ERROR neutron     s = self.recv_bytes()
2020-05-11 23:56:08.062 136139 ERROR neutron   File "/usr/lib/python3.6/site-packages/oslo_rootwrap/jsonrpc.py", line 121, in recv_bytes
2020-05-11 23:56:08.062 136139 ERROR neutron     l = struct.unpack('!Q', self.recvall(8))[0]
2020-05-11 23:56:08.062 136139 ERROR neutron   File "/usr/lib/python3.6/site-packages/oslo_rootwrap/jsonrpc.py", line 148, in _recvall_slow
2020-05-11 23:56:08.062 136139 ERROR neutron     piece = self._socket.recv(remaining)
2020-05-11 23:56:08.062 136139 ERROR neutron   File "/usr/lib/python3.6/site-packages/eventlet/greenio/base.py", line 366, in recv
2020-05-11 23:56:08.062 136139 ERROR neutron     return self._recv_loop(self.fd.recv, b'', bufsize, flags)
2020-05-11 23:56:08.062 136139 ERROR neutron   File "/usr/lib/python3.6/site-packages/eventlet/greenio/base.py", line 360, in _recv_loop
2020-05-11 23:56:08.062 136139 ERROR neutron     self._read_trampoline()
2020-05-11 23:56:08.062 136139 ERROR neutron   File "/usr/lib/python3.6/site-packages/eventlet/greenio/base.py", line 331, in _read_trampoline
2020-05-11 23:56:08.062 136139 ERROR neutron     timeout_exc=socket_timeout('timed out'))
2020-05-11 23:56:08.062 136139 ERROR neutron   File "/usr/lib/python3.6/site-packages/eventlet/greenio/base.py", line 210, in _trampoline
2020-05-11 23:56:08.062 136139 ERROR neutron     mark_as_closed=self._mark_as_closed)
2020-05-11 23:56:08.062 136139 ERROR neutron   File "/usr/lib/python3.6/site-packages/eventlet/hubs/__init__.py", line 155, in trampoline
2020-05-11 23:56:08.062 136139 ERROR neutron     listener = hub.add(hub.READ, fileno, current.switch, current.throw, mark_as_closed)
2020-05-11 23:56:08.062 136139 ERROR neutron   File "/usr/lib/python3.6/site-packages/eventlet/hubs/epolls.py", line 22, in add
2020-05-11 23:56:08.062 136139 ERROR neutron     listener = hub.BaseHub.add(self, evtype, fileno, cb, tb, mac)
2020-05-11 23:56:08.062 136139 ERROR neutron   File "/usr/lib/python3.6/site-packages/eventlet/hubs/hub.py", line 181, in add
2020-05-11 23:56:08.062 136139 ERROR neutron     evtype, fileno, evtype, cb, bucket[fileno]))
2020-05-11 23:56:08.062 136139 ERROR neutron RuntimeError: Second simultaneous read on fileno 16 detected. Unless you really know what you're doing, make sure that only one greenthread can read any particular socket. Consider using a pools.Pool.
If you do know what you're doing and want to disable this error, call eventlet.debug.hub_prevent_multiple_readers(False) - MY THREAD=<built-in method switch of greenlet.greenlet object at 0x7f6ef135f6d0>; THAT THREAD=FdListener('read', 16, <built-in method switch of greenlet.greenlet object at 0x7f6ee71aa930>, <built-in method throw of greenlet.greenlet object at 0x7f6ee71aa930>)
2020-05-11 23:56:08.062 136139 ERROR neutron
2020-05-11 23:56:12.871 295400 INFO neutron.common.config [-] Logging enabled!
2020-05-11 23:56:12.872 295400 INFO neutron.common.config [-] /usr/bin/neutron-l3-agent version 15.0.3.dev79
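For context on the traceback: eventlet's hub allows at most one greenthread to wait for reads on a given file descriptor, and its hub add() raises the RuntimeError seen above when a second listener registers on the same fd. Here fileno 16 appears to be the rootwrap daemon socket, read both by a running greenthread and by the weakref finalizer that shuts rootwrap down at exit. The following is a minimal stdlib model of just that bookkeeping (the Hub class and add_reader/remove_reader names are hypothetical, not eventlet's actual API):

```python
# Hypothetical model of the one-reader-per-fd invariant that eventlet's
# hub enforces. Real eventlet raises this error from its hub's add();
# this sketch reproduces only the bookkeeping, not the greenthreads.
class Hub:
    def __init__(self):
        self.readers = {}  # fileno -> description of the waiting reader

    def add_reader(self, fileno, listener):
        # A second reader on the same fd is almost always a bug: two
        # coroutines would race for the same bytes, so fail loudly.
        if fileno in self.readers:
            raise RuntimeError(
                "Second simultaneous read on fileno %s detected. "
                "Unless you really know what you're doing, make sure "
                "that only one greenthread can read any particular "
                "socket." % fileno)
        self.readers[fileno] = listener

    def remove_reader(self, fileno):
        # Called when a read completes; frees the fd for other readers.
        self.readers.pop(fileno, None)


hub = Hub()
hub.add_reader(16, "agent greenthread")        # first reader: accepted
try:
    hub.add_reader(16, "rootwrap shutdown")    # second reader: rejected
except RuntimeError as exc:
    print(exc)
```

In the crash above, the two "readers" are the agent's normal rootwrap client traffic and the interpreter-shutdown finalizer (weakref.py _exitfunc -> oslo_rootwrap client _shutdown), which both end up blocked in recv() on the same socket.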
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:3148