Bug 1249210 - rhel-osp-director: Single controller overcloud deployment - neutron-l3-agent is down.
Summary: rhel-osp-director: Single controller overcloud deployment - neutron-l3-agent is down.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: rhosp-director
Version: unspecified
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: 10.0 (Newton)
Assignee: Brent Eagles
QA Contact: Shai Revivo
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-07-31 19:42 UTC by Alexander Chuzhoy
Modified: 2018-05-02 10:49 UTC
CC List: 7 users

Fixed In Version:
Doc Type: Known Issue
Doc Text:
A timing issue sometimes causes Overcloud neutron services to not automatically start correctly. This means instances are not accessible. As a workaround, you can run the following command on the Controller node cluster:
$ sudo pcs resource debug-start neutron-l3-agent
Instances will work correctly.
Clone Of:
Environment:
Last Closed: 2016-10-14 17:05:32 UTC
Target Upstream Version:
Embargoed:


Attachments
neutron conf and logs (192.09 KB, application/x-gzip)
2015-07-31 19:47 UTC, Alexander Chuzhoy

Description Alexander Chuzhoy 2015-07-31 19:42:52 UTC
rhel-osp-director: Single controller overcloud deployment - neutron-l3-agent is down.

Environment:


Steps to reproduce:
1. Deploy undercloud.
2. Deploy overcloud with: 1 controller, 1 compute, 1 ceph.
3. Check the status of neutron-l3-agent
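
One way to perform the check in step 3 (a sketch; assumes the agents are pacemaker-managed as in this deployment and that the commands are run as root on the controller):
[root@overcloud-controller-0 ~]# pcs status
[root@overcloud-controller-0 ~]# pcs resource | grep -B2 -i Stop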


Result:
[root@overcloud-controller-0 ~]# pcs resource|grep -B2 -i Stop
     Started: [ overcloud-controller-0 ]
 Clone Set: neutron-l3-agent-clone [neutron-l3-agent]
     Stopped: [ overcloud-controller-0 ]
--
     Started: [ overcloud-controller-0 ]
 Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent]
     Stopped: [ overcloud-controller-0 ]
--
     Started: [ overcloud-controller-0 ]
 Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent]
     Stopped: [ overcloud-controller-0 ]

One symptom is inability to communicate with the launched instances.

Expected result:
No stopped resources.


Note:
The workaround I applied was:
"pcs resource debug-start neutron-l3-agent"

After that, I was able to launch instances and communicate with them.
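
A sketch of applying the workaround and then re-checking the clone sets (assumes root on the controller node; the metadata and DHCP agent clone sets shown as Stopped above may warrant the same check):
[root@overcloud-controller-0 ~]# pcs resource debug-start neutron-l3-agent
[root@overcloud-controller-0 ~]# pcs resource | grep -B2 -i Stop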

Comment 3 Alexander Chuzhoy 2015-07-31 19:44:30 UTC
Environment:
python-neutron-lbaas-2015.1.0-5.el7ost.noarch
openstack-neutron-ml2-2015.1.0-12.el7ost.noarch
python-neutron-2015.1.0-12.el7ost.noarch
python-neutronclient-2.4.0-1.el7ost.noarch
openstack-neutron-common-2015.1.0-12.el7ost.noarch
openstack-neutron-lbaas-2015.1.0-5.el7ost.noarch
openstack-neutron-2015.1.0-12.el7ost.noarch
openstack-neutron-metering-agent-2015.1.0-12.el7ost.noarch
openstack-neutron-openvswitch-2015.1.0-12.el7ost.noarch
instack-undercloud-2.1.2-22.el7ost.noarch

Comment 4 Alexander Chuzhoy 2015-07-31 19:47:44 UTC
Created attachment 1058127 [details]
neutron conf and logs

Comment 6 Mike Burns 2016-04-07 20:47:27 UTC
This bug did not make the OSP 8.0 release.  It is being deferred to OSP 10.

Comment 9 Brent Eagles 2016-10-11 19:06:30 UTC
The L3 agent is no longer a pacemaker managed service. Is this issue actually relevant for OSP-10 or should it be retargeted to OSP-9?
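
A quick way to confirm how the agent is managed on a given controller (a sketch; pcs lists the agent only when it is cluster-managed, while systemctl shows the plain service state):
[root@overcloud-controller-0 ~]# pcs resource | grep -i neutron
[root@overcloud-controller-0 ~]# systemctl status neutron-l3-agent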

Comment 10 Mike Burns 2016-10-12 11:04:53 UTC
I suspect this is closed currentrelease, but we should ask QE to test.  If you think this should be explicitly tested on both 9 and 10, then let's do that.

Comment 11 Assaf Muller 2016-10-14 17:05:32 UTC
Please re-open with relevant OSP version if needed.

Comment 12 Amit Ugol 2018-05-02 10:49:04 UTC
closed, no need for needinfo.

