Bug 503082
| Summary: | IPv6 addresses do not work with bonding devices | | |
|---|---|---|---|
| Product: | [Fedora] Fedora | Reporter: | redhat-bugzilla |
| Component: | kernel | Assignee: | Kernel Maintainer List <kernel-maint> |
| Status: | CLOSED NOTABUG | QA Contact: | Fedora Extras Quality Assurance <extras-qa> |
| Severity: | medium | Docs Contact: | |
| Priority: | low | | |
| Version: | 15 | CC: | agospoda, itamar, jnc, kernel-maint, k.georgiou, ljozsa |
| Target Milestone: | --- | Keywords: | Triaged |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2011-10-11 19:39:20 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 195271, 972747 | | |
Description (redhat-bugzilla, 2009-05-28 18:43:42 UTC)
The updates that fixed that issue in RHEL have already been in Fedora 10 since the initial release. Can you post the messages that you get?

I get the same message as in the Red Hat bug, "duplicate address detected", and not just for autoconfigured addresses: any and all IPv6 addresses that I add produce this same error and in turn get added as tentative addresses, which do not function.

/etc/modprobe.conf:

    alias bond0 bonding
    options bond0 mode=0 miimon=100

bond0 has eth2 and eth3 as slaves. If I pull the network cable from one card (i.e. only one of the two connections is active) when I add the IPv6 address, all is fine: no duplicate address is detected and I can use the address normally. But if both connections are active in the bonding device when I add the address, it is detected as duplicate and does not function.

    May 28 20:23:38 box kernel: bond0: duplicate address detected!

Me too. It seems that I have a similar problem on a fresh install of a Fedora 10 server, with strange behavior: the network is unreachable. This seems similar to https://bugzilla.redhat.com/show_bug.cgi?id=236750. I put my config below:

    # cat /etc/sysconfig/network
    NETWORKING=yes
    HOSTNAME=aragon.dr15.cnrs.fr
    NETWORKING_IPV6=yes
    IPV6_AUTOCONF=no

    # cat /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    ONBOOT=yes
    USERCTL=no
    BOOTPROTO=none
    IPADDR=147.210.72.195
    NETMASK=255.255.255.0
    GATEWAY=147.210.72.253
    IPV6INIT=yes
    IPV6ADDR=2001:660:6101:21::8/64
    IPV6_AUTOCONF=no
    DNS1=147.210.72.192
    DNS2=147.210.72.210
    DOMAIN=dr15.cnrs.fr

    # cat /etc/sysconfig/network-scripts/ifcfg-eth0
    # Intel Corporation 80003ES2LAN Gigabit Ethernet Controller (Copper)
    # ifcfg-eth0
    DEVICE=eth0
    BOOTPROTO=none
    ONBOOT=yes
    MASTER=bond0
    SLAVE=yes
    USERCTL=yes

    # cat /etc/sysconfig/network-scripts/ifcfg-eth1
    # Intel Corporation 80003ES2LAN Gigabit Ethernet Controller (Copper)
    # ifcfg-eth1
    DEVICE=eth1
    BOOTPROTO=none
    ONBOOT=yes
    MASTER=bond0
    SLAVE=yes
    USERCTL=yes

When I restart the network, I get:

    # dmesg
    bonding: bond0: Removing slave eth0
    bonding: bond0: Warning: the permanent HWaddr of eth0 - 00:1d:92:a4:67:a4 - is still in use by bond0. Set the HWaddr of eth0 to a different address to avoid conflicts.
    bonding: bond0: releasing active interface eth0
    0000:1e:00.0: eth0: changing MTU from 1500 to 1500
    bonding: bond0: Removing slave eth1
    bonding: bond0: releasing active interface eth1
    0000:1e:00.1: eth1: changing MTU from 1500 to 1500
    ADDRCONF(NETDEV_UP): bond0: link is not ready
    bonding: bond0: Adding slave eth0.
    bonding: bond0: Warning: failed to get speed and duplex from eth0, assumed to be 100Mb/sec and Full.
    bonding: bond0: enslaving eth0 as an active interface with an up link.
    ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready
    bonding: bond0: Adding slave eth1.
    bonding: bond0: Warning: failed to get speed and duplex from eth1, assumed to be 100Mb/sec and Full.
    bonding: bond0: enslaving eth1 as an active interface with an up link.
    0000:1e:00.0: eth0: Link is Up 1000 Mbps Full Duplex, Flow Control: None
    0000:1e:00.1: eth1: Link is Up 1000 Mbps Full Duplex, Flow Control: None
    bond0: duplicate address detected!
    bond0: no IPv6 routers present

    [root@aragon ~]# ping6 -c 1 2001:660:6101:21::1
    PING 2001:660:6101:21::1(2001:660:6101:21::1) 56 data bytes
    64 bytes from 2001:660:6101:21::1: icmp_seq=1 ttl=64 time=0.707 ms
    --- 2001:660:6101:21::1 ping statistics ---
    1 packets transmitted, 1 received, 0% packet loss, time 0ms
    rtt min/avg/max/mdev = 0.707/0.707/0.707/0.000 ms

    [root@aragon ~]# ping6 ipv6.google.com
    connect: Network is unreachable

    [root@camus ~]# ping6 -c 1 2001:660:6101:21::8
    PING 2001:660:6101:21::8(2001:660:6101:21::8) 56 data bytes
    From 2001:660:6101:21::1 icmp_seq=1 Destination unreachable: Address unreachable
    --- 2001:660:6101:21::8 ping statistics ---
    1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms

    [root@camus ~]# ping6 -c1 ipv6.google.com
    PING ipv6.google.com(yo-in-x68.google.com) 56 data bytes
    64 bytes from yo-in-x68.google.com: icmp_seq=1 ttl=49 time=105 ms
    --- ipv6.google.com ping statistics ---
    1 packets transmitted, 1 received, 0% packet loss, time 0ms
    rtt min/avg/max/mdev = 105.201/105.201/105.201/0.000 ms

Some other information (ifconfig output translated from French):

    [root@aragon ~]# ifconfig
    bond0     Link encap:Ethernet  HWaddr 00:1D:92:A4:67:A4
              inet addr:147.210.72.195  Bcast:147.210.72.255  Mask:255.255.255.0
              inet6 addr: fe80::21d:92ff:fea4:67a4/64 Scope:Link
              inet6 addr: 2001:660:6101:21::8/64 Scope:Global
              UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
              RX packets:230937 errors:0 dropped:0 overruns:0 frame:0
              TX packets:48985 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:28621644 (27.2 MiB)  TX bytes:10522437 (10.0 MiB)

    eth0      Link encap:Ethernet  HWaddr 00:1D:92:A4:67:A4
              inet addr:147.210.72.195  Bcast:147.210.72.255  Mask:255.255.255.0
              UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
              RX packets:130141 errors:0 dropped:0 overruns:0 frame:0
              TX packets:40654 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:13268834 (12.6 MiB)  TX bytes:9138123 (8.7 MiB)
              Memory:dec20000-dec40000
    eth1      Link encap:Ethernet  HWaddr 00:1D:92:A4:67:A4
              inet addr:147.210.72.195  Bcast:147.210.72.255  Mask:255.255.255.0
              UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
              RX packets:100796 errors:0 dropped:0 overruns:0 frame:0
              TX packets:8331 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:15352810 (14.6 MiB)  TX bytes:1384314 (1.3 MiB)
              Memory:dec60000-dec80000

    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:14556 errors:0 dropped:0 overruns:0 frame:0
              TX packets:14556 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:4526118 (4.3 MiB)  TX bytes:4526118 (4.3 MiB)

    # ethtool eth0
    Settings for eth0:
        Supported ports: [ TP ]
        Supported link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full
        Supports auto-negotiation: Yes
        Advertised link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full
        Advertised auto-negotiation: Yes
        Speed: 1000Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 1
        Transceiver: internal
        Auto-negotiation: on
        Supports Wake-on: pumbag
        Wake-on: g
        Current message level: 0x00000001 (1)
        Link detected: yes

    # ethtool eth1
    Settings for eth1:
        Supported ports: [ TP ]
        Supported link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full
        Supports auto-negotiation: Yes
        Advertised link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full
        Advertised auto-negotiation: Yes
        Speed: 1000Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 1
        Transceiver: internal
        Auto-negotiation: on
        Supports Wake-on: pumbag
        Wake-on: g
        Current message level: 0x00000001 (1)
        Link detected: yes

This message is a reminder that Fedora 10 is nearing its end of life. Approximately 30 (thirty) days from now, Fedora will stop maintaining and issuing updates for Fedora 10.
It is Fedora's policy to close all bug reports from releases that are no longer maintained. At that time this bug will be closed as WONTFIX if it remains open with a Fedora 'version' of '10'.

Package Maintainer: If you wish for this bug to remain open because you plan to fix it in a currently maintained version, simply change the 'version' to a later Fedora version prior to Fedora 10's end of life.

Bug Reporter: Thank you for reporting this issue, and we are sorry that we may not be able to fix it before Fedora 10 reaches end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora, please change the 'version' of this bug to the applicable version. If you are unable to change the version, please add a comment here and someone will do it for you.

Although we aim to fix as many bugs as possible during every release's lifetime, sometimes those efforts are overtaken by events. Often a more recent Fedora release includes newer upstream software that fixes bugs or makes them obsolete. The process we are following is described here: http://fedoraproject.org/wiki/BugZappers/HouseKeeping

This bug still exists in Fedora 12. The error message has changed slightly to specify that it is an IPv6 duplicate address, but it is exactly the same. To reproduce:

    ifenslave bond0 eth0 eth1
    ip addr add dev bond0 local some:ipv6:ip:here/netmask

Then just look at /var/log/messages:

    Nov 18 10:17:49 localhost kernel: bond0: IPv6 duplicate address detected!

This bug STILL exists in Fedora 13, and probably Rawhide as well; the error message has again changed slightly. Steps to reproduce:

    ifenslave bond1 eth2 eth3
    ip addr add dev bond1 local 2001:470:e184:a1f6::2/64

Then the error appears in the log:

    Aug 21 13:52:24 localhost kernel: bond1: IPv6 duplicate address 2001:470:e184:a1f6::2 detected!
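The reproductions above leave the address marked tentative (dadfailed), which is visible in the flags that `ip -6 addr show` prints. A minimal sketch of how one might check for that state; the interface name `bond1` and the sample output line are assumptions for illustration, not taken from a live system:

```shell
#!/bin/sh
# Hedged sketch: spot an IPv6 address stuck in the DAD-failed state.
# On a real system, replace the sample line with the output of:
#   ip -6 addr show dev bond1
sample='inet6 2001:470:e184:a1f6::2/64 scope global tentative dadfailed'

# An address carrying either flag never completed duplicate address
# detection and will not pass traffic.
if printf '%s\n' "$sample" | grep -qE 'tentative|dadfailed'; then
    echo "DAD failed: address is tentative and will not function"
fi
```

On a live system, the same grep over real `ip -6 addr` output distinguishes a working address from the non-functional tentative one described in this report.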
If I then remove an interface from the bonding device:

    ifenslave -d bond1 eth3

delete the tentative dadfailed IP:

    ip addr del dev bond1 local 2001:470:e184:a1f6::2/64

and then re-add it:

    ip addr add dev bond1 local 2001:470:e184:a1f6::2/64

the IP then operates normally.

If we are to expect proper IPv6 adoption, this bug must be fixed, and as soon as possible; I think it deserves a higher priority. I am willing to give as much information as possible to help out in fixing this issue. I would also like to point out that IPv4 on this bonding device is unaffected: I get no duplicate IP warnings for IPv4, only for IPv6.

(In reply to comment #7)
> Description of problem:
> Simply put, if you're using a bonding network device which has more than one
> active device, you cannot add IPv6 addresses because they get detected as
> duplicate addresses. This bug has already been fixed in RHEL; please refer to
> bug #236750

That bug was replaced by bug #516985, which is still not closed. And it looks like a lot of changes were needed to fix the problem.

I have tried applying the patch from that bug, linux-2.6-net-fix-lockups-and-dupe-addresses-w-bonding-and-ipv6.patch, to the latest Fedora kernel SRPM. I had to do a very small amount of editing to get the patch to apply correctly, but this patch most certainly does not fix the duplicate IP issue that its filename implies. I have no doubt that it fixes the lockup issue, which I am not experiencing, but it definitely does not fix the bug I have reported.

This is crazy. How can one expect to deploy IPv6 when something as simple as a bonded interface cannot use IPv6? This bug is STILL not fixed in Fedora 14. I reported it back in Fedora 10 and it seems no one cares. The RHEL bug #236750 is not the same bug, as someone suggested; its patches do not fix this duplicate IP bug, they are for some IPv6 lockup bug.
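Instead of the de-enslave, delete, and re-add sequence described above, duplicate address detection can be suppressed per interface through sysctl. This is an untested sketch, not a fix from this bug report: it assumes the bond is named `bond1` as in the comments above, and it silences DAD entirely rather than correcting the bonding driver's behavior:

```
# /etc/sysctl.conf fragment (hedged, untested sketch; interface name assumed).
# accept_dad=0 disables duplicate address detection on bond1, so addresses
# are never marked tentative/dadfailed by the bond's reflected probes.
net.ipv6.conf.bond1.accept_dad = 0
# Alternatively, sending zero DAD probes has a similar effect:
net.ipv6.conf.bond1.dad_transmits = 0
```

Note that disabling DAD also removes protection against genuinely duplicated addresses on the link, so this trades safety for a working bond.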
The duplicate address detection code in Fedora 14 is still buggy: it detects any IP you add to a bonding interface with more than one physical connection as a duplicate. Can we please have this bug set to a higher priority? With IPv4 addresses nearly run out, this is becoming more and more of a problem.

As of Fedora 15 with the latest kernel (kernel-2.6.38.8-32.fc15.x86_64) this still does not work; IPv6 on bonding devices is still not handled properly.

The duplicate address messages that are output by the IPv6 DAD code when using round-robin mode bonding have been around for a while and are unlikely to be fixed. Follow this recent upstream thread if you are interested in this topic: http://patchwork.ozlabs.org/patch/117949/

Sorry, I meant to close this originally, as the community seems to agree this is not a bug.

*** Bug 1008963 has been marked as a duplicate of this bug. ***
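Given the maintainer's point that the duplicate-address messages are inherent to round-robin (mode 0) bonding, one hedged workaround is to choose a bonding mode that does not transmit the same frame on multiple slaves, so the bond never receives copies of its own DAD probes. A sketch of an ifcfg fragment, assuming the `bond0` setup from the original report; this is an editorial suggestion, not a configuration from the bug thread:

```
# /etc/sysconfig/network-scripts/ifcfg-bond0 fragment (untested sketch).
# active-backup keeps only one slave active at a time, avoiding the
# self-reflected DAD probes that round-robin (mode 0) produces.
DEVICE=bond0
BONDING_OPTS="mode=active-backup miimon=100"
```

The trade-off is losing round-robin's aggregate throughput across slaves in exchange for failover-only redundancy.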