Description of problem:
Starting with GlusterFS 3.4, glusterfsd uses the IANA-defined ephemeral port range (49152 and upward). If you happen to use the same network for storage and qemu-kvm live migration, you sometimes get a port conflict and live migration aborts.
Here's a log of a failed live migration on the destination host:
2013-07-23 15:54:32.619+0000: starting up
LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name ipasserelle -S -M rhel6.4.0 -enable-kvm -m 2048 -smp 2,sockets=2,cores=1,threads=1 -uuid 8505958b-8227-0a46-91a7-41d3247544e2 -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/ipasserelle.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/libvirt/images/gluster/ipasserelle.img,if=none,id=drive-virtio-disk0,format=qcow2,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev tap,fd=22,id=hostnet0,vhost=on,vhostfd=23 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:2b:14:d7,bus=pci.0,addr=0x3 -netdev tap,fd=24,id=hostnet1,vhost=on,vhostfd=25 -device virtio-net-pci,netdev=hostnet1,id=net1,mac=52:54:00:6d:4f:52,bus=pci.0,addr=0x4 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -device usb-tablet,id=input0 -vnc 127.0.0.1:0 -vga cirrus -device intel-hda,id=sound0,bus=pci.0,addr=0x5 -device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 -incoming tcp:[::]:49152 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7
char device redirected to /dev/pts/2
inet_listen_opts: bind(ipv6,::,49152): Address already in use
Migrate: Failed to bind socket
Migration failed. Exit code tcp:[::]:49152(-1), exiting.
2013-07-23 15:54:33.016+0000: shutting down
[root@dd9 ~]# netstat -laputen | grep :49152
tcp 0 0 0.0.0.0:49152 0.0.0.0:* LISTEN 0 82349 1927/glusterfsd
tcp 0 0 127.0.0.1:1015 127.0.0.1:49152 ESTABLISHED 0 82555 1952/glusterfs
tcp 0 0 10.90.25.138:49152 10.90.25.137:1016 ESTABLISHED 0 82473 1927/glusterfsd
tcp 0 0 10.90.25.138:1021 10.90.25.137:49152 ESTABLISHED 0 82344 1952/glusterfs
tcp 0 0 127.0.0.1:49152 127.0.0.1:1008 ESTABLISHED 0 82725 1927/glusterfsd
tcp 0 0 127.0.0.1:49152 127.0.0.1:1015 ESTABLISHED 0 82556 1927/glusterfsd
tcp 0 0 10.90.25.138:49152 10.90.25.137:1010 ESTABLISHED 0 89092 1927/glusterfsd
tcp 0 0 127.0.0.1:1008 127.0.0.1:49152 ESTABLISHED 0 82724 2069/glusterfs
tcp 0 0 10.90.25.138:1018 10.90.25.137:49152 ESTABLISHED 0 82784 2115/glusterfs
The exact same setup with GlusterFS 3.3.2 works like a charm.
Version-Release number of selected component (if applicable):
Host is CentOS 6.4 x86_64
gluster 3.4.0-2 (glusterfs glusterfs-server glusterfs-fuse), from the gluster.org RHEL repo
How reproducible:
Not always, but frequently enough.
Steps to Reproduce:
- Two hosts with a replicated glusterFS volume (both are gluster server and client)
- Libvirt on both nodes
- One private network used for gluster and live migration
- while glusterFS is working, try to live migrate a qemu-kvm VM, using the standard migration (virsh migrate --live vm qemu+ssh://user@other_node/system)
- From time to time (not always), the migration will fail because the qemu process on the destination host cannot bind to the chosen port; a quick way to check which process holds the port is shown below
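To confirm which process holds the conflicting port on the destination host, something along these lines can be used (the volume name "myvol" is only an example, and 49152 is the port qemu reported in its log):

# List the port each brick process listens on (see the Port column)
gluster volume status myvol
# Check what is already bound to the port qemu wanted for the incoming migration
netstat -lptn | grep ':49152'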
Actual results:
Live migration fails.
Expected results:
Live migration shouldn't be bothered by Gluster.
Additional info:
An option to configure the first port, or the port range used by Gluster, would avoid this situation.
One more piece of info: I have three GlusterFS volumes between the two nodes, and the first three migrations fail.
As qemu (or libvirt, I'm not sure which one chooses the incoming migration port) increments the port number at each migration attempt, the fourth migration succeeds (and the following migrations succeed too).
We just hit this bug in a new setup today. Verifying this still exists.
CentOS release 6.4 (Final)
Linux SERVERNAME 2.6.32-358.18.1.el6.x86_64 #1 SMP Wed Aug 28 17:19:38 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
Same problem with oVirt 3.3 and fedora 19 as Hypervisors.
The range 49152-49215 was already used by libvirt for years before the Gluster change from 3.3 to 3.4...
How could you miss it and, worse, not be able to change it at least for 3.4.1, given that this Bugzilla was opened in July?
Could you at least provide a way to configure Gluster to use another range, so that nodes acting as both server and client can avoid the conflict?
You are limiting GlusterFS adoption itself, as no one would deploy oVirt on GlusterFS without live migration available...
Thanks for reading
Out of curiosity, why isn't this a bug in qemu-kvm? Shouldn't qemu-kvm be trying another port if 49152 (or any other port) is in use? And using portmapper to register the port it does end up using?
REVIEW: http://review.gluster.org/6076 (xlators/mgmt/glusterd: ports conflict with qemu live migration) posted (#1) for review on release-3.4 by Kaleb KEITHLEY (email@example.com)
Dynamic, private or ephemeral ports
The range 49152–65535 (2^15 + 2^14 to 2^16 − 1) – above the registered ports – contains dynamic or private ports that cannot be registered with IANA. This range is used for custom or temporary purposes and for automatic allocation of ephemeral ports.
If a new project starts to use a range, in my opinion it has to consider that it is not the only project in the world, now or in the future... ;-)
Why couldn't libvirt and GlusterFS reserve ports via IANA, so that /etc/services could be updated and other projects could check the current status before picking a new range?
It seems a lot like the 192.168.1.x private network used by everyone.
The last registered port is 49151, so why not start at 49152... ;-)
There are quite a few ranges up to 65535, aren't there?
Just my two eurocents.
49152-65535 cannot be registered with IANA.
Why not pick a range below 49151 that is still free, or ask IANA to extend the registered range, or at least coordinate in a way that avoids overlap?
Anand Avati Oct 11 9:09 PM
Patch Set 1: Code-Review-1
Technically this change looks OK. I'm just concerned about the confusion this is going to cause to users who need to open up their firewalls. There is still confusion about what ports and what ranges need to be opened for 3.3 vs 3.4, and this change is only going to add to the confusion (worse if we get it into 3.4.x branch)
Niels de Vos Oct 14 12:50 PM
Patch Set 1:
Instead of changing the defaults, is it acceptable to make the start port a configurable option in /etc/glusterd/glusterd.vol or similar?
Vijay Bellur Oct 14 1:28 PM
Patch Set 1:
Providing a configurable start value for brick ports would be ideal here.
RFC 6335 is very clear about this:
8.1.2. Variances for Specific Port Number Ranges
o Ports in the Dynamic Ports range (49152-65535) have been
specifically set aside for local and dynamic use and cannot be
assigned through IANA. Application software may simply use any
dynamic port that is available on the local host, without any sort
of assignment. On the other hand, application software MUST NOT
assume that a specific port number in the Dynamic Ports range will
always be available for communication at all times...
This is clearly a libvirt bug. Libvirt needs to handle the case when a port is not available. It cannot assume that it can exclusively use ports starting at 49152.
See Red Hat bugs 1018530, 1018696, and 1019237.
Re-opening so that we can consider providing a workaround for libvirt and other applications that cause this issue.
I am of the opinion that Gluster should not change the default port (49152), but add an option so that users can specify a different base-port if they need to.
An implementation from Kaleb that adds an option in /etc/glusterfs/glusterd.vol:
Comments in this Bug or the above Gerrit link are very welcome :)
REVIEW: http://review.gluster.org/6147 (mgmt/glusterd: add option to specify a different base-port) posted (#3) for review on release-3.4 by Kaleb KEITHLEY (firstname.lastname@example.org)
COMMIT: http://review.gluster.org/6147 committed in release-3.4 by Vijay Bellur (email@example.com)
Author: Kaleb S. KEITHLEY <firstname.lastname@example.org>
Date: Fri Oct 25 09:05:18 2013 -0400
mgmt/glusterd: add option to specify a different base-port
This is (arguably) a hack to work around a bug in libvirt which is not
well behaved wrt to using TCP ports in the unreserved space between
49152-65535. (See RFC 6335)
Normally glusterd starts and binds to the first available port in range,
usually 49152. libvirt's live migration also tries to use ports in this
range, but has no fallback to use (an)other port(s) when the one it wants
is already in use.
libvirt cannot fix this in time for their impending release. This is
submitted to gerrit to provide some minimal visibility upstream to justify
hacking this change (as a temporary patch) into the glusterfs-3.4.1 RPMs
for Fedora 18-21 until libvirt can fix their implementation.
Signed-off-by: Kaleb S. KEITHLEY <email@example.com>
Reviewed-by: Niels de Vos <firstname.lastname@example.org>
Tested-by: Niels de Vos <email@example.com>
Tested-by: Gluster Build System <firstname.lastname@example.org>
Reviewed-by: Raghavan Pichai <email@example.com>
Merged into the master branch (http://review.gluster.org/6210, bug 1018178) and the release-3.4 branch.
Does comment #13 mean that this enhancement will be in the upcoming 3.4.2 release?
When is it expected to be out? Is there a release schedule link?
(In reply to Gianluca Cecchi from comment #14)
> does comment#13 mean that this enhancement will be in upcoming 3.4.2 release?
Yes, when a new release is done, this option will be available.
> When is it expected to be out? Any release schedule link?
It's a little out-of-date, and does not contain a reference to 3.4.2:
The most recent communication went to the gluster-devel mailing list:
Packages are available for testing:
> I upgraded to gluster 3.4.2qa4 (see above).
> VM still worked fine, bonnie++ tests from inside the VM instances showing
> similar results than before
> but than I hit the 987555 bug again
The change for that bug introduces an option in the /etc/glusterfs/glusterd.vol configuration file. You can now add the following line to that file:
option base-port 50152
By default this line is commented out with the default port (49152). In the line above, 50152 is just an example; you can pick any port you like. GlusterFS tries to detect whether a port is in use; if it is, it will try the next one (and so on).
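For reference, the option goes inside the management volume definition in glusterd.vol; a minimal fragment could look like the following (the surrounding option lines are illustrative and may differ on your installation):

volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    # uncomment/adjust to move brick ports out of the default 49152+ range
    option base-port 50152
end-volume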
Also note that QEMU had a fix for this as well. With the right version
of QEMU, there should be no need to change this option from the default.
Details on the fixes for QEMU are referenced in Bug 1019053.
Can you let us know if setting this option and restarting all the
glusterfsd processes helps?
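In case it helps, one possible sequence on each node would be the following (the volume name "myvol" is hypothetical; adapt it to your setup, and note that bricks only pick up the new base-port after they are restarted):

# 1. add/adjust the base-port option in /etc/glusterfs/glusterd.vol on every node
# 2. restart the management daemon
service glusterd restart
# 3. restart the brick (glusterfsd) processes, e.g. by stopping and starting the volume
gluster volume stop myvol
gluster volume start myvol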
Verified by Bernhard Glomm:
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.4.3, please reopen this bug report.
glusterfs-3.4.3 has been announced on the Gluster Developers mailing list; packages for several distributions should already be, or soon become, available. Keep an eye on the Gluster Users mailing list and the update infrastructure for your distribution.
The fix for this bug is likely to be included in all future GlusterFS releases, i.e. releases > 3.4.3. Along the same lines, the recent glusterfs-3.5.0 release is likely to contain the fix. You can verify this by reading the comments in this bug report and checking for comments mentioning "committed in release-3.5".