Bug 1703126 - neutron-dhcp-agent error on undercloud
Summary: neutron-dhcp-agent error on undercloud
Keywords:
Status: CLOSED DUPLICATE of bug 1700883
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-neutron
Version: 15.0 (Stein)
Hardware: Unspecified
OS: Unspecified
Severity: medium
Priority: medium
Target Milestone: ---
Target Release: ---
Assignee: Assaf Muller
QA Contact: Roee Agiman
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-04-25 15:09 UTC by Alistair Tonner
Modified: 2019-04-25 16:44 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-04-25 16:44:28 UTC
Target Upstream Version:
Embargoed:


Attachments

Description Alistair Tonner 2019-04-25 15:09:28 UTC
Description of problem:

The neutron-dhcp-agent container fails when starting its dnsmasq sidecar container:

2019-04-25 09:20:09.365 62305 DEBUG neutron.agent.linux.utils [req-f10aad8f-9d3e-44a0-bc6c-7ad370f2fa97 - - - - -] Running command (rootwrap daemon): ['ip', 'netns', 'exec', 'qdhcp-33ec6fde-5f03-4da8-800a-27f253efb97c', 'dnsmasq', '--no-hosts', '--no-resolv', '--pid-file=/var/lib/neutron/dhcp/33ec6fde-5f03-4da8-800a-27f253efb97c/pid', '--dhcp-hostsfile=/var/lib/neutron/dhcp/33ec6fde-5f03-4da8-800a-27f253efb97c/host', '--addn-hosts=/var/lib/neutron/dhcp/33ec6fde-5f03-4da8-800a-27f253efb97c/addn_hosts', '--dhcp-optsfile=/var/lib/neutron/dhcp/33ec6fde-5f03-4da8-800a-27f253efb97c/opts', '--dhcp-leasefile=/var/lib/neutron/dhcp/33ec6fde-5f03-4da8-800a-27f253efb97c/leases', '--dhcp-match=set:ipxe,175', '--dhcp-userclass=set:ipxe6,iPXE', '--local-service', '--bind-interfaces', '--dhcp-range=set:tag0,192.168.24.0,static,255.255.255.0,86400s', '--dhcp-option-force=option:mtu,1500', '--dhcp-lease-max=256', '--conf-file=', '--domain=localdomain'] execute_rootwrap_daemon /usr/lib/python3.6/site-packages/neutron/agent/linux/utils.py:103
2019-04-25 09:20:09.807 62305 ERROR neutron.agent.linux.utils [req-f10aad8f-9d3e-44a0-bc6c-7ad370f2fa97 - - - - -] Exit code: 125; Stdin: ; Stdout: Starting a new child container neutron-dnsmasq-qdhcp-33ec6fde-5f03-4da8-800a-27f253efb97c
; Stderr: + export DOCKER_HOST=
+ DOCKER_HOST=
+ ARGS='--no-hosts --no-resolv --pid-file=/var/lib/neutron/dhcp/33ec6fde-5f03-4da8-800a-27f253efb97c/pid --dhcp-hostsfile=/var/lib/neutron/dhcp/33ec6fde-5f03-4da8-800a-27f253efb97c/host --addn-hosts=/var/lib/neutron/dhcp/33ec6fde-5f03-4da8-800a-27f253efb97c/addn_hosts --dhcp-optsfile=/var/lib/neutron/dhcp/33ec6fde-5f03-4da8-800a-27f253efb97c/opts --dhcp-leasefile=/var/lib/neutron/dhcp/33ec6fde-5f03-4da8-800a-27f253efb97c/leases --dhcp-match=set:ipxe,175 --dhcp-userclass=set:ipxe6,iPXE --local-service --bind-interfaces --dhcp-range=set:tag0,192.168.24.0,static,255.255.255.0,86400s --dhcp-option-force=option:mtu,1500 --dhcp-lease-max=256 --conf-file= --domain=localdomain'
++ ip netns identify
+ NETNS=qdhcp-33ec6fde-5f03-4da8-800a-27f253efb97c
+ NAME=neutron-dnsmasq-qdhcp-33ec6fde-5f03-4da8-800a-27f253efb97c
+ CLI='nsenter --net=/run/netns/qdhcp-33ec6fde-5f03-4da8-800a-27f253efb97c --preserve-credentials -m -t 1 podman'
+ LOGGING='--log-driver json-file --log-opt path=/var/log/containers/stdouts/neutron-dnsmasq-qdhcp-33ec6fde-5f03-4da8-800a-27f253efb97c.log'
+ CMD='/usr/sbin/dnsmasq -k'
++ nsenter --net=/run/netns/qdhcp-33ec6fde-5f03-4da8-800a-27f253efb97c --preserve-credentials -m -t 1 podman ps -a --filter name=neutron-dnsmasq- --format '{{.ID}}:{{.Names}}:{{.Status}}'
++ awk '{print $1}'
+ LIST=295e0e250762:neutron-dnsmasq-qdhcp-33ec6fde-5f03-4da8-800a-27f253efb97c:Up
++ printf '%s\n' 295e0e250762:neutron-dnsmasq-qdhcp-33ec6fde-5f03-4da8-800a-27f253efb97c:Up
++ grep -E ':(Exited|Created)'
+ ORPHANTS=
+ '[' -n '' ']'
+ printf '%s\n' 295e0e250762:neutron-dnsmasq-qdhcp-33ec6fde-5f03-4da8-800a-27f253efb97c:Up
+ grep -q 'neutron-dnsmasq-qdhcp-33ec6fde-5f03-4da8-800a-27f253efb97c$'
+ echo 'Starting a new child container neutron-dnsmasq-qdhcp-33ec6fde-5f03-4da8-800a-27f253efb97c'
+ nsenter --net=/run/netns/qdhcp-33ec6fde-5f03-4da8-800a-27f253efb97c --preserve-credentials -m -t 1 podman run --detach --log-driver json-file --log-opt path=/var/log/containers/stdouts/neutron-dnsmasq-qdhcp-33ec6fde-5f03-4da8-800a-27f253efb97c.log -v /var/lib/config-data/puppet-generated/neutron/etc/neutron:/etc/neutron:ro -v /run/netns:/run/netns:shared -v /var/lib/neutron:/var/lib/neutron:z,shared -v /dev/log:/dev/log --net host --pid host --privileged -u root --name neutron-dnsmasq-qdhcp-33ec6fde-5f03-4da8-800a-27f253efb97c 192.168.24.1:8787/rhosp15/openstack-neutron-dhcp-agent:20190423.1 /usr/sbin/dnsmasq -k --no-hosts --no-resolv --pid-file=/var/lib/neutron/dhcp/33ec6fde-5f03-4da8-800a-27f253efb97c/pid --dhcp-hostsfile=/var/lib/neutron/dhcp/33ec6fde-5f03-4da8-800a-27f253efb97c/host --addn-hosts=/var/lib/neutron/dhcp/33ec6fde-5f03-4da8-800a-27f253efb97c/addn_hosts --dhcp-optsfile=/var/lib/neutron/dhcp/33ec6fde-5f03-4da8-800a-27f253efb97c/opts --dhcp-leasefile=/var/lib/neutron/dhcp/33ec6fde-5f03-4da8-800a-27f253efb97c/leases --dhcp-match=set:ipxe,175 --dhcp-userclass=set:ipxe6,iPXE --local-service --bind-interfaces --dhcp-range=set:tag0,192.168.24.0,static,255.255.255.0,86400s --dhcp-option-force=option:mtu,1500 --dhcp-lease-max=256 --conf-file= --domain=localdomain
error creating container storage: the container name "neutron-dnsmasq-qdhcp-33ec6fde-5f03-4da8-800a-27f253efb97c" is already in use by "295e0e25076234e92cb89bae4b547c9ab0081a86323cd773e90305669d0b6847". You have to remove that container to be able to reuse that name.: that name is already in use
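
For what it's worth, the trace above shows why the wrapper exits with code 125: its orphan check only matches containers in Exited or Created state, so an existing container that is still Up keeps the name, and the subsequent podman run --name refuses to reuse it. A minimal shell sketch of that logic (hypothetical container name and image, not the actual TripleO wrapper script):

    NAME=neutron-dnsmasq-qdhcp-example   # hypothetical name
    # List all containers (any state) whose name matches the sidecar prefix.
    LIST=$(podman ps -a --filter name="$NAME" --format '{{.ID}}:{{.Names}}:{{.Status}}')
    # Only Exited/Created entries are treated as removable orphans...
    ORPHANS=$(printf '%s\n' "$LIST" | grep -E ':(Exited|Created)' | awk -F: '{print $1}')
    [ -n "$ORPHANS" ] && podman rm -f $ORPHANS
    # ...so a container that is still Up survives with the name intact,
    # and this run fails with exit code 125 ("name is already in use"):
    podman run --detach --name "$NAME" registry.example/dnsmasq:latest /usr/sbin/dnsmasq -k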


Version-Release number of selected component (if applicable):

RHEL8 -> 1855
RHOS -> RHOS_TRUNK-15.0-RHEL-8-20190423.n.1

ansible.noarch                                2.8.0-0.7.b1.el8ae                                   @rhosp-rhel-8.0-ansible
ansible-pacemaker.noarch                      1.0.4-0.20190418190349.0e4d7c0.el8ost                @rhelosp-15.0-trunk
ansible-role-atos-hsm.noarch                  0.1.1-0.20190422121159.1518dbd.el8ost                @rhelosp-15.0-trunk
ansible-role-chrony.noarch                    0.0.1-0.20190422110347.bece846.el8ost                @rhelosp-15.0-trunk
ansible-role-container-registry.noarch        1.0.1-0.20190422120345.1aee1a7.el8ost                @rhelosp-15.0-trunk
ansible-role-redhat-subscription.noarch       1.0.3-0.20190422120345.36b3639.el8ost                @rhelosp-15.0-trunk
ansible-role-thales-hsm.noarch                0.2.1-0.20190422121359.9019dde.el8ost                @rhelosp-15.0-trunk
ansible-role-tripleo-modify-image.noarch      1.0.1-0.20190422122515.f1dfdc6.el8ost                @rhelosp-15.0-trunk
ansible-tripleo-ipsec.noarch                  9.1.1-0.20190422122014.8c1fdab.el8ost                @rhelosp-15.0-trunk
ceph-ansible.noarch                           4.0.0-0.rc3.10.ga718ddec.el8cp                       @ceph-4.0-rhel-8
puppet-ironic.noarch                          14.4.1-0.20190420120354.f4220d1.el8ost               @rhelosp-15.0-trunk
puppet-neutron.noarch                         14.4.1-0.20190420042323.400fd54.el8ost               @rhelosp-15.0-trunk
puppet-nova.noarch                            14.4.1-0.20190420020349.2f25086.el8ost               @rhelosp-15.0-trunk
python3-heat-agent-ansible.noarch             1.8.1-0.20190420021506.a95e9be.el8ost                @rhelosp-15.0-trunk
python3-ironic-inspector-client.noarch        3.5.0-0.20190313131319.9bb1150.el8ost                @rhelosp-15.0-trunk
python3-ironicclient.noarch                   2.7.0-0.20190312102843.4af8a79.el8ost                @rhelosp-15.0-trunk
python3-neutron-lib.noarch                    1.25.0-0.20190312185238.fc2a810.el8ost               @rhelosp-15.0-trunk
python3-neutronclient.noarch                  6.12.0-0.20190312100012.680b417.el8ost               @rhelosp-15.0-trunk
python3-novaclient.noarch                     1:13.0.0-0.20190416130354.62bf880.el8ost             @rhelosp-15.0-trunk




Steps to Reproduce:
1. Run the attached script to deploy on the host.

Actual results:

The OpenStack deployment fails: undercloud-0 goes to 100% CPU and disconnects.


Expected results:

The OpenStack deployment should complete.

Additional info:
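
A possible manual workaround until the fix lands (untested here; assumes the lingering sidecar container is safe to remove) would be to delete the colliding container on the undercloud host so the agent can respawn it:

    # container name taken from the error message above
    sudo podman ps -a --filter name=neutron-dnsmasq-qdhcp-33ec6fde-5f03-4da8-800a-27f253efb97c
    sudo podman rm -f neutron-dnsmasq-qdhcp-33ec6fde-5f03-4da8-800a-27f253efb97c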

Comment 3 Bernard Cafarelli 2019-04-25 16:44:28 UTC
It happens on a sidecar container, but the error comes from the generic issue tracked in bug #1700883.
Marking as a duplicate.

*** This bug has been marked as a duplicate of bug 1700883 ***

