Bug 1478061 - An instance doesn't get an IP after deployment
Product: Red Hat OpenStack
Classification: Red Hat
Component: opendaylight
Version: 12.0 (Pike)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: beta
Target Release: 13.0 (Queens)
Assigned To: Sridhar Gaddam
QA Contact: Itzik Brown
Keywords: Triaged
Depends On:
Reported: 2017-08-03 09:54 EDT by Itzik Brown
Modified: 2018-04-10 08:15 EDT
CC List: 5 users

See Also:
Fixed In Version: opendaylight-8.0.0-1.el7ost
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Last Closed:
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments

External Trackers
Tracker ID Priority Status Summary Last Updated
OpenDaylight Bug 8926 None None None 2017-08-03 10:03 EDT
OpenDaylight gerrit 61995 None None None 2017-08-18 13:26 EDT

Description Itzik Brown 2017-08-03 09:54:01 EDT
Description of problem:
After deployment, when launching an instance, it doesn't get an IP.
When launching a second instance on the same node, that instance gets an IP.
Then, after rebooting the first instance, it gets an IP as well.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Deploy an overcloud.
2. Launch an instance.
3. Open the proper security group rules.
4. Verify that the instance doesn't get an IP (this can be checked by pinging it from the DHCP namespace).
5. Launch a second instance on the same node as the first one and make sure it gets an IP.

Actual results:

Expected results:

Additional info:
Comment 5 Sridhar Gaddam 2017-08-18 13:25:22 EDT
In a fresh multinode deployment with the Controller node running
ODL + dhcp-agent and a Compute node, when we spawn the first VM
on the Compute node, it was seen that the VM does not acquire an
IP address. On debugging, it turned out that the remote broadcast
group entries were not programmed on the Compute node.

Setup details:
1. Multi-node with Controller and a Compute node.
2. Create a tenant neutron network with an IPv4 subnet.
3. Create a neutron router.
4. Associate the IPv4 subnet with the neutron router.

At this stage, you can see that there is no tunnel between
Controller node and Compute node.

5. Now spawn a VM on the Compute node (you can explicitly
   specify that the VM has to be spawned on the Compute node
   by passing --availability-zone to the nova boot command).

When the VM is spawned, the following is the sequence
of events.

t1: Nova creates a tap interface for the VM; this translates
    to an add event for the elanInterface (i.e., ElanInterfaceStateChangeListener
    is invoked, and addElanInterface gets processed).
t2: In addElanInterface, elanManager checks if the interface
    is part of existingElanDpnInterfaces (i.e., DpnInterfaces YANG model)
t3: Since it's a new interface, it invokes createElanInterfacesList(),
    which updates the DpnInterfaces model. At this stage, the
    transaction/information is still not committed to the datastore.
t4: The processing continues to installEntriesForFirstInterfaceonDpn(),
    where we try to program the local/remote BC Group entries.
    In this API, we have an explicit sleep of (300 + 300) ms, and when we
    then query getEgressActionsForInterface (which is an API in GENIUS),
    GENIUS returns an empty list with the following reason: "Interface
    information not present in oper DS for the tunnel interface".
t5: So the remote BC Group does not include the actions to send the
    packets over the tunnel interface at this stage.
t6: addElanInterface processing continues further and we commit the
    transaction (i.e., DpnInterfaces model is now updated in the datastore).

While t1 to t6 are going on, the auto-tunnel code in GENIUS
creates the tunnel interfaces in parallel.

A1: A tunnel interface is created on the Compute node. When the tunnel interface
    state is up, TunnelsState YANG model is updated in GENIUS (ItmTunnelStateUpdateHelper).
A2: A notification is received in ElanTunnelInterfaceStateListener, which
    is handled in the following API: handleInternalTunnelStateEvent.
A3: In this API, when we query the ElanDpnInterfaces, it only includes
    the DPNInfo of the Controller and not the Compute node (because of
    the delay in updating the model in steps t3-t6 above).
A4: Due to this, handleInternalTunnelStateEvent does not invoke
    setupElanBroadcastGroups() to program the Remote Group entries
    on the Compute node, so the remote Broadcast Group entries on
    the Compute node never get updated.
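The interleaving above can be sketched as a small, deterministic simulation. This is a hypothetical simplification, not the actual ODL code: the dict stands in for the MD-SAL datastore, the two functions stand in for addElanInterface and handleInternalTunnelStateEvent, and an event forces the problematic ordering that sleeps produce in the real system.

```python
import threading

# Hypothetical stand-ins for the MD-SAL datastore and the remote
# broadcast group programmed on the Compute node (not actual ODL code).
datastore = {"ElanDpnInterfaces": {"controller-dpn"}}
remote_bc_group = set()

tunnel_event_handled = threading.Event()

def handle_internal_tunnel_state_event():
    # A2/A3: the tunnel comes up and ElanDpnInterfaces is queried
    # before the addElanInterface transaction is committed (step t6).
    if "compute-dpn" in datastore["ElanDpnInterfaces"]:
        remote_bc_group.add("compute-dpn")   # A4: never reached here
    tunnel_event_handled.set()

def add_elan_interface():
    # t2/t3: the DpnInterfaces update is prepared but held in an
    # uncommitted transaction while the t4/t5 processing goes on.
    pending = datastore["ElanDpnInterfaces"] | {"compute-dpn"}
    tunnel_event_handled.wait()               # tunnel event fires meanwhile
    datastore["ElanDpnInterfaces"] = pending  # t6: commit happens too late

t_elan = threading.Thread(target=add_elan_interface)
t_tun = threading.Thread(target=handle_internal_tunnel_state_event)
t_elan.start(); t_tun.start()
t_elan.join(); t_tun.join()

# The Compute DPN does end up in the datastore, but the remote
# broadcast group was never updated, so the VM's DHCP broadcast
# cannot reach the dhcp-agent on the Controller.
print("ElanDpnInterfaces:", datastore["ElanDpnInterfaces"])
print("remote BC group:", remote_bc_group)
```

The events make the lost update reproducible on every run, whereas in the real deployment it depends on how the tunnel creation races against the (300 + 300) ms window in step t4.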

So, the fix is not to delay updating the model (i.e., holding the
DpnInterfaces update until step t6), since this information is used
while processing the tunnel interface state events in parallel.
The following patch would address this issue.
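The effect of committing earlier can be illustrated with the same hypothetical simulation (again a sketch, not the actual patch): once DpnInterfaces is updated before the long-running group programming, a concurrent tunnel-state handler sees the new DPN and programs the remote broadcast group.

```python
import threading

# Same simplified stand-ins as before (hypothetical, not ODL code).
datastore = {"ElanDpnInterfaces": {"controller-dpn"}}
remote_bc_group = set()

dpn_committed = threading.Event()

def add_elan_interface_fixed():
    # Fix: commit the DpnInterfaces update immediately, instead of
    # holding it in the transaction until step t6.
    datastore["ElanDpnInterfaces"] = (
        datastore["ElanDpnInterfaces"] | {"compute-dpn"}
    )
    dpn_committed.set()
    # ... the t4/t5 work (BC group programming, sleeps) continues here ...

def handle_internal_tunnel_state_event():
    dpn_committed.wait()   # tunnel-up arrives after the early commit
    # A3 now sees the Compute DPN, so A4 programs the remote BC group.
    if "compute-dpn" in datastore["ElanDpnInterfaces"]:
        remote_bc_group.add("compute-dpn")

t_elan = threading.Thread(target=add_elan_interface_fixed)
t_tun = threading.Thread(target=handle_internal_tunnel_state_event)
t_elan.start(); t_tun.start()
t_elan.join(); t_tun.join()

print("remote BC group:", remote_bc_group)   # now includes compute-dpn
```

With the commit moved before the slow path, the ordering of the tunnel event relative to the broadcast-group programming no longer matters: whichever side runs second sees the Compute DPN in the datastore.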
Comment 6 Sridhar Gaddam 2017-08-28 02:49:38 EDT
Upstream patch is now merged.
Comment 11 Itzik Brown 2018-03-18 07:23:45 EDT
Checked with:
