With libvirt-0.9.10-1.fc17.x86_64, if I stop or restart libvirtd, the qemu-kvm processes for VMs and the dnsmasq processes for networks get killed, i.e.:

$> systemd-cgls /system/libvirtd.service
/system/libvirtd.service:
├ 1274 libvirtd --daemon
├ 1358 /usr/sbin/dnsmasq ...
└ 5989 /usr/bin/qemu-kvm ...

$> systemctl stop libvirtd.service

$> systemd-cgls /system/libvirtd.service
(nothing)

Dan's suggestion to use KillMode=process fixes this:

http://www.redhat.com/archives/libvir-list/2012-March/msg00946.html

However, it doesn't fix restart - it appears systemd is killing off all existing processes in the cgroup:

http://cgit.freedesktop.org/systemd/systemd/tree/src/service.c?id=75c8e3cf#n2093
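For reference, the suggested workaround amounts to a single directive in the service's [Service] section. This excerpt is illustrative, not the complete unit file; only the KillMode line is the actual change:

```
# /lib/systemd/system/libvirtd.service (illustrative excerpt)
[Service]
# KillMode=process: on "systemctl stop", kill only the main libvirtd
# process rather than everything in the service's cgroup, so qemu-kvm
# and dnsmasq children survive. As noted above, this does NOT help the
# restart/start case, where systemd sweeps the cgroup anyway.
KillMode=process
ExecStart=/usr/sbin/libvirtd $LIBVIRTD_ARGS
```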
I'm experiencing the same behavior with:

libvirt-0.9.11-1.fc17.x86_64
systemd-44-4.fc17.x86_64
systemd-44-4.fc17.i686
libcgroup-0.38-1.fc17.x86_64

That is, "systemctl stop libvirtd.service" does not kill other processes started by libvirtd, but a subsequent "systemctl start libvirtd.service" does kill all the processes started by the previous libvirtd (i.e. all instances of dnsmasq, radvd, and (most importantly) qemu-kvm). Likewise, "systemctl restart libvirtd.service" also kills all processes started by the previous libvirtd.

I'm guessing this isn't a problem with libvirtd itself, but it certainly causes a major regression in libvirtd behavior, one that really *must* be eliminated before F17 is released (it would be very good to get it fixed before the F17 beta, but I don't know if it qualifies as a blocker). Should this be reassigned to systemd?
I learned the hard way that this bug makes a libvirtd restart break networking in all guests: http://bugzilla.redhat.com/802475#c29
I've done more tests and confirmed this is a change in systemd behaviour between Fedora 16 and 17.

On Fedora 16:

# virsh start vm1
# virsh list
 Id Name                 State
----------------------------------------------------
  1 vm1                  running

# systemctl status libvirtd.service
libvirtd.service - Virtualization daemon
	  Loaded: loaded (/lib/systemd/system/libvirtd.service; enabled)
	  Active: active (running) since Tue, 10 Apr 2012 19:47:13 +0100; 1min 17s ago
	Main PID: 24351 (libvirtd)
	  CGroup: name=systemd:/system/libvirtd.service
		  ├ 24301 /usr/libexec/qemu-kvm -S -M pc-0.13 -enable-kvm -m 215 -smp 4,sockets=4,cores=1,threads=1 -name vm1 -uuid c7a3edbd-ed...
		  └ 24351 /usr/sbin/libvirtd

# systemctl stop libvirtd.service
# pgrep qemu-kvm
24301
# systemctl start libvirtd.service
# pgrep qemu-kvm
24301
# virsh list
 Id Name                 State
----------------------------------------------------
  1 vm1                  running

# systemctl status libvirtd.service
libvirtd.service - Virtualization daemon
	  Loaded: loaded (/lib/systemd/system/libvirtd.service; enabled)
	  Active: active (running) since Tue, 10 Apr 2012 19:50:16 +0100; 1s ago
	Main PID: 24655 (libvirtd)
	  CGroup: name=systemd:/system/libvirtd.service
		  ├ 24301 /usr/libexec/qemu-kvm -S -M pc-0.13 -enable-kvm -m 215 -smp 4,sockets=4,cores=1,threads=1 -name vm1 -uuid c7a3edbd-ed...
		  └ 24655 /usr/sbin/libvirtd

Notice the libvirtd PID changed, but QEMU stayed the same. Repeat the same test scenario on Fedora 17 and the QEMU PID is killed off when 'systemctl start libvirtd.service' is run.

The libvirtd.service file is:

# cat /lib/systemd/system/libvirtd.service
# NB we don't use socket activation. When libvirtd starts it will
# spawn any virtual machines registered for autostart. We want this
# to occur on every boot, regardless of whether any client connects
# to a socket. Thus socket activation doesn't have any benefit

[Unit]
Description=Virtualization daemon
After=syslog.target
After=udev.target
After=avahi.target
After=dbus.target
Before=libvirt-guests.service

[Service]
KillMode=process
EnvironmentFile=-/etc/sysconfig/libvirtd
ExecStart=/usr/sbin/libvirtd $LIBVIRTD_ARGS
ExecReload=/bin/kill -HUP $MAINPID
# Override the maximum number of opened files
#LimitNOFILE=2048

[Install]
WantedBy=multi-user.target

Note how we explicitly use KillMode=process to tell systemd to leave all our VM processes alone when shutting down libvirtd.
Browsing systemd GIT logs, this looks like a possible culprit:

commit 8f53a7b8ea9ba505f8fefe4df4aaa5a8aab1e2eb
Author: Lennart Poettering <lennart>
Date:   Wed Jan 11 01:51:32 2012 +0100

    service: brutally slaughter processes that are running in the
    cgroup when we enter START_PRE and START

IIUC, it will slaughter all existing processes in the cgroup whenever starting a service... bye bye VM processes :-( We really need systemd not to do this if we have KillMode=process.
Proposing as a beta blocker; it might just be an F17Blocker (final) though.
-1 blocker, beta or final. It's annoying, but it's a host side issue and can be fixed fine with an update. It doesn't actually prevent virt from working, only causes nastiness if you restart libvirtd. Guest side issues are more likely to merit blocker status, as they will affect live images forever (can't fix lives with an update). -- Fedora Bugzappers volunteer triage team https://fedoraproject.org/wiki/BugZappers
Is GNOME Boxes going to be part of a live image? If so, then you have a host-side issue where restarting libvirtd on a live-image host with Boxes installed will kill all the VMs running under Boxes, at which point this beta criterion sounds applicable:

14. The release must be able to host virtual guest instances of the same release, using Fedora's current preferred virtualization technology
Agreeing with Adam here: -1 blocker, beta or final...
> -1 blocker, beta or final. It's annoying, but it's a host side issue and can be
> fixed fine with an update. It doesn't actually prevent virt from working, only
> causes nastiness if you restart libvirtd.

libvirtd is restarted in %post when doing an RPM update, which IMHO makes this a serious issue worthy of urgent attention. Users will come after us with flaming torches & pitchforks when we kill their VMs off while doing an RPM update post release.
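For context, the restart in %post is the standard systemd scriptlet pattern; a typical fragment (illustrative — not necessarily libvirt's exact scriptlet) looks something like:

```
%post
# $1 >= 1: package is being installed or upgraded. Restart the daemon
# so it picks up the new binary. With the systemd bug above, this
# restart also kills every process left in the libvirtd.service
# cgroup (qemu-kvm, dnsmasq, radvd) -- i.e. all running guests.
if [ $1 -ge 1 ] ; then
    /bin/systemctl try-restart libvirtd.service >/dev/null 2>&1 || :
fi
```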
+1 blocker - if running 'yum update' kills your VMs, then this is _not_ something that can be fixed by an update, but must be fixed prior to the release.
Agreed, we really need to fix this before the release; to me too it's a blocker!

Daniel
> libvirtd is restarted in %post when doing an RPM update, which IMHO makes this
> a serious issue worthy of urgent attention. Users will come after us with
> flaming torches & pitchforks when we kill their VMs off while doing an RPM
> update post release.

This. It's a very big assumption that nobody will restart libvirt during normal operations: it has been common knowledge for a very long time that restarting libvirt has *no* effect on running guests, and in fact restarting it is part of standard troubleshooting when, for example, guest networking stops working. Even setting that aside, the bigger problem won't happen up front when F17 is installed and first used, but at some random time in the future when an update is made to libvirt (likely as part of a larger update, so users may not even notice that libvirt is being updated), and the result will be effectively crashing all running guests on the system.

Does anyone really think it's acceptable to ship Fedora with a set of packages that *will* lead to guests being unceremoniously killed with *no warning* by (for example) a routine security update?

Several libvirt developers just discussed this on a phone call, and we're all in agreement that this problem should qualify as a beta blocker.
I'm voting +1 blocker, simply from the standpoint of the principle of least surprise. This really should be fixed by final, even if folks aren't comfortable blocking the beta for a fix.
It would have been nice to have this information _two days ago_, when we could have done something about it, rather than twelve hours before the go/no-go meeting. If we take this as a Beta blocker now, we slip the release another week, putting us three weeks behind schedule. It really would have been nice to hear about this at any time in the last damn three weeks, so we could have actually tried to fix it.

I'm still on the fence about the issue, frankly. No-one should be running any important VMs using a Beta release as the host (says the guy whose web server is currently running on an F17 host...). And you _can_ still run VMs on an F17 host. They might explode when you do an update, but they run.

I'm definitely +1 final blocker: people should be able to install a final release and safely deploy critical VMs on it. But I'm not sure about Beta blocker.
I don't think this needs to be a Beta blocker - AFAIK, there's never any guarantee that upgrades from Beta -> GA are free from surprises. Final blocker is fine.
Eric: Boxes is not in the Beta images.
What is the real difference between blocking for this and releasing the fix as a 0-day update? If someone is already running an installed version of F17, they'll hit this whether we block for it or not. The only way someone would hit this, if they weren't going to hit it already, is if they install libvirt as part of their installation and start relying on it before updating the system post-install.

My assumption is that users of a pre-release are more likely to do one of the following:
- Update after install and before doing much with the installed system
- Install libvirt post-install (which would grab the package from updates, not the install repo)
- Use updates-testing as a source repo during installation

If I'm understanding the issue correctly, any of those three scenarios would avoid hitting this bug. Unless I'm way off on my understanding or my assumptions, I'm -1 beta blocker on this. The risk of causing issues does not seem to increase much if we release Beta with the old libvirt. Final blocker, maybe, but we can cross that bridge if/when we get there.
I'm with Adam on +1 final. Beta I'm having a hard time seeing (pitchforks and torches regardless, since we apparently shipped Alpha this way and, save for this bug, haven't seen a lot of pitchforks and torches).
tflink: you are somewhat off, because the bug appears to be in systemd, not libvirt. Though if we get a 0-day fix out for systemd, it should certainly mitigate the issue.
(In reply to comment #20) > tflink: you are somewhat off, because the bug appears to be in systemd not > libvirt. Though if we get a 0-day fix out for systemd, it should certainly > mitigate the issue. Well, that gets rid of one of my scenarios for not hitting this. I think that I'm still -1 beta blocker, +1 beta NTH and +1 final blocker. This is a conditional violation of the release criteria (if you're running VMs when you do a system update on the host) and I think that it's enough of a corner/unwise case for a pre-release to justify passing on it as a blocker for beta.
I think I have a straightforward fix for this in systemd, but I will need people to test it. Doing a scratch build now.
Scratch Build: http://koji.fedoraproject.org/koji/taskinfo?taskID=3980250 If this resolves the issue, I'll take it to the systemd upstream to see if they will take it.
I'm -1 on beta blocker here, but I do think it's a final blocker.
I just tested spot's scratch build of systemd and it definitely fixes the problem - dnsmasq and qemu-kvm processes now survive a restart of libvirtd.service. So, along with all the other + and - karma here, a +1 for spot's patch :-)
I spoke to Kay about this patch, and he has a different approach to solving this bug:

<kay> might get fixed tonight or tomorrow
<kay> depends how it will end up, but we have an idea
<kay> end up and how much work it will be, i meant
<kay> we must enforce that kind of killing, always. current plan is to put all ExecPre/Post in a sub-cgroup and kill all them, and only them, when leaving the Pre/Post transaction
<kay> Pre and Post have been misused to start things, we can not track anymore. we kind of must prevent that
<kay> it will get out of control otherwise
<kay> and we promised to provided babysitting data :)
<kay> but we want to be strict here, because we've seen crazy service files we do not want to support in the end

Given Kay's timetable, I think this should be resolved well before final, so I'm fine with this being a final blocker and not a beta blocker.
We seem to have a reasonable consensus on -1 beta blocker here: -1 votes from me and Tim (QA), Dennis (releng), and Daniel Berrange (virt devel), so removing F17Beta. We seem to be solidly +1 final blocker, so marking as acceptedblocker for final.
The fix is in git now: http://cgit.freedesktop.org/systemd/systemd/commit/?id=ecedd90fcdf647f9a7b56b4934b65e30b2979b04
Can we get that fix backported to Fedora 17, or will F17 be rebased to a new systemd release before GA? We're ready to test the behaviour with libvirt as soon as new Fedora packages are available.
There will be a backport.
systemd-44-6.fc17 has been submitted as an update for Fedora 17. https://admin.fedoraproject.org/updates/systemd-44-6.fc17
Package systemd-44-6.fc17:
* should fix your issue,
* was pushed to the Fedora 17 testing repository,
* should be available at your local mirror within two days.

Update it with:
# su -c 'yum update --enablerepo=updates-testing systemd-44-6.fc17'
as soon as you are able to.

Please go to the following url:
https://admin.fedoraproject.org/updates/FEDORA-2012-6456/systemd-44-6.fc17
then log in and leave karma (feedback).
systemd-44-6.fc17 has been pushed to the Fedora 17 stable repository. If problems still persist, please make note of it in this bug report.
I had to revert the fix in 44-8.fc17 due to bug 816842. But I also disabled the killing of leftover processes when entering the START_PRE or START states, so it should be fine. Please test whether this bug is still fixed in systemd-44-8.fc17.
Seems to be fine: running VMs keep running with 44-8.