Bug 227518 - ICMPv6 NA: someone advertises our address on bond0!
Status: CLOSED DUPLICATE of bug 249631
Product: Red Hat Enterprise Linux 4
Classification: Red Hat
Component: kernel
Version: 4.2
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Assigned To: Andy Gospodarek
Reported: 2007-02-06 11:37 EST by John DeFranco
Modified: 2014-06-29 18:58 EDT

Doc Type: Bug Fix
Last Closed: 2008-04-07 11:16:42 EDT


Attachments: None
Description John DeFranco 2007-02-06 11:37:29 EST
Description of problem:

With IPv6 addresses configured on a bonded interface, any traffic generates the message:

 ICMPv6 NA: someone advertises our address on bond0!

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Set up two systems with a bonded interface and IPv6 addresses (see the sketch after this list)
2. Ping over this interface from one node to the other
3. Check syslog and dmesg.
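
A minimal sketch of such a setup on one node, assuming the RHEL 4 sysconfig-style configuration; the bonding options and addresses mirror the configuration shown under Additional info, while the file contents themselves are illustrative rather than the reporter's exact files:

  # /etc/modprobe.conf
  alias bond0 bonding
  options bond0 mode=active-backup miimon=100

  # /etc/sysconfig/network  (enable IPv6 globally)
  NETWORKING_IPV6=yes

  # /etc/sysconfig/network-scripts/ifcfg-bond0
  DEVICE=bond0
  BOOTPROTO=static
  IPADDR=192.168.1.4
  NETMASK=255.255.255.0
  IPV6INIT=yes
  IPV6ADDR=3ffe:1000:0:a801::c0a8:4/64
  ONBOOT=yes

  # /etc/sysconfig/network-scripts/ifcfg-eth2  (and likewise ifcfg-eth3)
  DEVICE=eth2
  MASTER=bond0
  SLAVE=yes
  ONBOOT=yes

  # restart networking, ping the peer over bond0, then check the logs
  service network restart
  ping6 -c 3 <peer IPv6 address>
  dmesg | grep 'ICMPv6 NA'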
  
Actual results:


Expected results:


Additional info:

Happens on x86, x86_64, and IPF (Itanium) architectures. The problem was reported against
RHEL 4 Update 2 but also happens on all other updates. The current configuration I have set up:

Linux pipercub.cup.hp.com 2.6.9-22.ELsmp #1 SMP Mon Sep 19 18:00:54 EDT 2005
x86_64 x86_64 x86_64 GNU/Linux
[root@pipercub ~]# cat /etc/redhat-release
Red Hat Enterprise Linux ES release 4 (Nahant Update 2)

Ethernet driver: tg3.
ifconfig configuration:

[root@pipercub ~]# ifconfig -a
bond0     Link encap:Ethernet  HWaddr 00:13:21:78:85:2C  
          inet addr:192.168.1.4  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
          inet6 addr: fec0:1000:0:a801::c0a8:4/64 Scope:Site
          inet6 addr: 3ffe:1000:0:a801::c0a8:4/64 Scope:Global
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:997311 errors:0 dropped:0 overruns:0 frame:0
          TX packets:808447 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:827358395 (789.0 MiB)  TX bytes:151605285 (144.5 MiB)

eth2      Link encap:Ethernet  HWaddr 00:13:21:78:85:2C  
          inet6 addr: fe80::213:21ff:fe78:852c/64 Scope:Link
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:984977 errors:0 dropped:0 overruns:0 frame:0
          TX packets:808447 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:825072887 (786.8 MiB)  TX bytes:151602549 (144.5 MiB)
          Base address:0x5000 Memory:fdee0000-fdf00000 

eth3      Link encap:Ethernet  HWaddr 00:13:21:78:85:2C  
          inet6 addr: fe80::213:21ff:fe78:852c/64 Scope:Link
          UP BROADCAST RUNNING NOARP SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:12361 errors:0 dropped:0 overruns:0 frame:0
          TX packets:27 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:2287842 (2.1 MiB)  TX bytes:5046 (4.9 KiB)
          Base address:0x5040 Memory:fde60000-fde80000 

[root@pipercub ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v2.6.1 (October 29, 2004)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth2
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth2
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:13:21:78:85:2c

Slave Interface: eth3
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:13:21:78:85:2d
[root@pipercub ~]#
Comment 1 Martin Nagy 2008-01-02 08:08:25 EST
This could be caused by a misconfiguration. Even if it is not, I don't think it is
a big issue. I can't reproduce it on RHEL 4.6:
Ethernet Channel Bonding Driver: v2.6.3-rh (June 8, 2005)             
Can you still reproduce this on 4.6?
Comment 2 RHEL Product and Program Management 2008-02-01 14:09:56 EST
This request was evaluated by Red Hat Product Management for
inclusion, but this component is not scheduled to be updated in
the current Red Hat Enterprise Linux release. If you would like
this request to be reviewed for the next minor release, ask your
support representative to set the next rhel-x.y flag to "?".
Comment 3 John DeFranco 2008-02-08 12:07:34 EST
I just checked on a RHEL 4.6 system and I don't see the messages. I'm not sure whether
this means it's 'fixed', the problem went away, or something else. You mention that it could
be caused by a misconfiguration. Could you outline the steps you used to
configure the bond so I can compare it to what we do?
Comment 4 Martin Nagy 2008-02-12 06:52:05 EST
I just set up two interfaces on two machines connected together, set up IPv6
addresses, added routing table entries so that the pinging worked, and then
enslaved the interfaces on both machines and did a ping through the bond0
interface. No message. BTW, I think this bug should be filed against the kernel.
I'll try to get hold of a kernel developer competent in this area and ask
him whether this issue was fixed in 4.6 (because there were a lot of
bonding patches in the kernel) or isn't a bug at all.
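
The setup described above corresponds roughly to the following commands on each machine (a sketch only; interface names, addresses, and the route are illustrative, not the exact ones used):

  modprobe bonding mode=active-backup miimon=100
  ip link set bond0 up
  ifenslave bond0 eth0 eth1                  # enslave both interfaces
  ip -6 addr add 3ffe:1000::1/64 dev bond0   # IPv6 address on the bond
  ip -6 route add 3ffe:2000::/64 dev bond0   # routing entry so the ping works
  ping6 -c 3 3ffe:2000::1                    # ping the other machine through bond0
  dmesg | tail                               # look for the NA warning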
Comment 5 Martin Nagy 2008-02-12 10:09:03 EST
Changing component to kernel and reassigning to Andy.
Comment 6 Andy Gospodarek 2008-04-07 11:16:42 EDT

*** This bug has been marked as a duplicate of 249631 ***
