Bug 1474025
Summary: | No connectivity to an instance's floating IP after restarting the compute node | |
---|---|---|---
Product: | Red Hat OpenStack | Reporter: | Itzik Brown <itbrown>
Component: | opendaylight | Assignee: | Sridhar Gaddam <sgaddam>
Status: | CLOSED CURRENTRELEASE | QA Contact: | Itzik Brown <itbrown>
Severity: | high | Docs Contact: |
Priority: | high | |
Version: | 12.0 (Pike) | CC: | itbrown, mkolesni, nyechiel
Target Milestone: | --- | Keywords: | Triaged
Target Release: | 12.0 (Pike) | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: | N/A
Last Closed: | 2017-12-14 09:45:33 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Description
Itzik Brown 2017-07-23 08:29:49 UTC
On debugging this issue closely, it appears to be a race condition in the ODL controller.

Steps to Reproduce:
1. Restart the compute node.
2. Launch an instance on the compute node.
3. Observe that the instance initially stays in the "spawning" state and then transitions to the "error" state.
4. Restart openvswitch on the compute node.
5. Launch a new instance; it boots successfully.

Basically, when we issue the reboot on the compute node, ODL identifies that the node is idle and triggers the disconnection chain. While this is still in progress, the compute node comes back up, and there is a race between the cleanup events and the events related to node reconciliation. As a result, the compute node is ultimately deleted from the operational store [#] even though it is connected to the controller. Since the node info is deleted from the datastore, the side effect is that port binding fails and we are unable to spawn new VMs until we restart the OVS switch on the compute node. The karaf log snippet below [@] shows this sequence.

Additional note: in case the compute node comes up with some delay (i.e., after the cleanup is properly done in ODL), this issue (i.e., step 3 above) is not seen.

[#] 2017-08-01 07:48:16,660 | INFO | lt-dispatcher-49 | OvsdbConnectionManager | 289 - org.opendaylight.ovsdb.southbound-impl - 1.4.1.Carbon-redhat-1 | Entity{type='ovsdb', id=/(urn:TBD:params:xml:ns:yang:network-topology?revision=2013-10-21)network-topology/topology/topology[{(urn:TBD:params:xml:ns:yang:network-topology?revision=2013-10-21)topology-id=ovsdb:1}]/node/node[{(urn:TBD:params:xml:ns:yang:network-topology?revision=2013-10-21)node-id=ovsdb://uuid/e9806896-8dc2-4f17-83ea-c1c957608915}]} has no owner, cleaning up the operational data store

[@] https://gist.github.com/sridhargaddam/3761ef080e11f2dd2429c8d7016ae6d0

Checked with opendaylight-6.2.0-0.1.20170921snap729.el7.noarch.
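The race described above can be illustrated with a minimal sketch (this is a hypothetical model, not ODL code: the `OperationalStore` class, its methods, and the guard check are invented for illustration). It shows how an unconditional cleanup deletes a node entry even after the node has reconnected, whereas a cleanup that re-checks connection state under the same lock would leave the reconnected node's data intact.

```python
import threading

class OperationalStore:
    """Hypothetical stand-in for ODL's operational datastore node entries."""

    def __init__(self):
        self.nodes = {}
        self.lock = threading.Lock()

    def put(self, node_id):
        # Node (re)connects: record it as connected.
        with self.lock:
            self.nodes[node_id] = {"connected": True}

    def cleanup_naive(self, node_id):
        # Mirrors the reported bug: the delayed cleanup deletes the entry
        # unconditionally, even though the node has already reconnected.
        with self.lock:
            self.nodes.pop(node_id, None)

    def cleanup_guarded(self, node_id):
        # A possible fix idea: re-check liveness before deleting.
        with self.lock:
            entry = self.nodes.get(node_id)
            if entry and entry["connected"]:
                return  # node came back; keep its data
            self.nodes.pop(node_id, None)

store = OperationalStore()
node = "ovsdb://uuid/e9806896-8dc2-4f17-83ea-c1c957608915"

# Reboot scenario: the node reconnects before the cleanup task fires.
store.put(node)               # reconnection wins the race
store.cleanup_naive(node)     # bug: entry removed despite active connection
print(node in store.nodes)    # False -> port binding would now fail

store.put(node)
store.cleanup_guarded(node)   # guarded variant skips the stale deletion
print(node in store.nodes)    # True -> node data preserved
```

The key point is that the deletion decision and the connection-state check happen atomically under one lock, so a reconnection that lands before the cleanup cannot be wiped out by it.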
After the compute node restart, launching the instance again, I get an IP. The problem now is that there is no connectivity to the instance's FIP. Opening a new bug for the FIP issue.