Red Hat Bugzilla – Bug 1274977
ostree+anaconda: unable to shutdown - dracut loop rm: cannot remove /lib/drauct/hooks/shutdown/30-dm-shutdown.sh: Read-only filesystem
Last modified: 2017-07-31 14:42:29 EDT
> So the Anaconda rpmostree.py payload handler is bind mounting things, but it
> is not destroying the binds afterwards.
> The result is blivet panics when it cannot tear down the /mnt/sysimage mount
> due to active sub-mounts.
> Conceptually this is easily solved by a small enhancement to rpmostree.
> One can either use blivet in rpmostree, which will add the bind mounts to
> the internal device tree, and they will be torn down by blivet on exit.
> Or, rpmostree.py can simply umount the bind mounts, which is what I've been
> trying. Unfortunately it's not working: the bind mounts disappear, but somehow the
> systemd-tmpfiles pseudo-filesystem mounts have apparently found their way
> into the blivet devicetree memory structure.
> The result is blivet tries to tear down things that were already destroyed.
> Still investigating...
Now, I'm not sure this will actually fix the use of mock + livemedia-creator. If that's still broken, I suspect the simplest thing is to use `unshare -m` to create a separate mount namespace.
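The `unshare -m` idea can be sketched as a small wrapper: run the tool in a private mount namespace so any bind mounts it makes die with the process instead of leaking into the host's mount table. This is a hypothetical helper, not anaconda or lmc code; the function name is mine.

```python
import subprocess

def wrap_with_mount_namespace(cmd):
    """Prefix a command so it runs in a private mount namespace.

    Mounts created inside the namespace (e.g. anaconda's bind mounts under
    /mnt/sysimage) are torn down automatically when the namespaced process
    exits, so they cannot leak into the host's mount table.
    """
    return ["unshare", "-m", "--"] + list(cmd)

# Example (requires root): run livemedia-creator in its own namespace.
# subprocess.run(wrap_with_mount_namespace(["livemedia-creator", "--make-iso"]))
```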
Jon, does the above PR help things for you?
(In reply to Colin Walters from comment #2)
> Jon, does the above PR help things for you?
Thanks Colin, I integrated this patch into my f22 setup but the issue of the sysroot mount persisting continues. Blivet goes to umount /mnt/sysimage but fails because the "sysroot" was still mounted.
(In reply to Jon Disnard from comment #3)
> (In reply to Colin Walters from comment #2)
> > Jon, does the above PR help things for you?
> Thanks Colin, I integrated this patch into my f22 setup but the issue of the
> sysroot mount persisting continues. Blivet goes to umount /mnt/sysimage but
> fails because the "sysroot" was still mounted.
There are now two patches in that PR, did you get both?
(In reply to Colin Walters from comment #4)
> (In reply to Jon Disnard from comment #3)
> There are now two patches in that PR, did you get both?
The install bails out, Anaconda does not even have the opportunity to prompt for a clean shutdown (e.g. 1. file a bz, 2. shell, or 3. quit). It just dies, which is different from before. After the program dies, here is what the mounts look like:
# findmnt -o target,source -R /mnt/sysimage
At this point I'm not sure I trust the anaconda logs to have flushed their outputs before python crashed. But I will attach the latest anaconda logs, and the terminal output.
Created attachment 1087600 [details]
Anaconda log files
Log files from recent failed attempt.
Created attachment 1087601 [details]
terminal output log
The terminal output seen during the failed attempt.
Hm, we're ordering payload.unsetup() after the blivet unmounts.
I'm still a bit uncertain as to why this is breaking with livemedia-creator but not in anaconda-in-a-VM. I guess I'd have to run lmc myself to debug this.
We might need to add a new call for payload to unmount before blivet in shutdown.
So I can see this going a few ways.
My favorite way would be for blivet to gain the ability to 'umount -R', as a kwarg passed to the umount() routine, or some such conception where we basically unmount /mnt/sysimage recursively. From my understanding that is what we are really doing anyway, except in a much more complicated way involving blivet keeping track of things and going bonkers when things happen outside its state machine.
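The 'umount -R' behavior amounts to enumerating every mount under /mnt/sysimage and unmounting children before parents. A minimal sketch of that ordering logic, assuming a helper that consumes mount-point paths (e.g. from `findmnt -rn -o TARGET -R /mnt/sysimage`); the function names are hypothetical, not blivet API:

```python
import subprocess

def submounts_deepest_first(root, mounts):
    """Return every mount point at or below `root`, deepest first.

    `mounts` is a list of mount-point paths (e.g. the TARGET column of
    `findmnt -R <root>`). Unmounting in this order mimics `umount -R`:
    children always go before their parents.
    """
    root = root.rstrip("/")
    under = [m for m in mounts if m == root or m.startswith(root + "/")]
    return sorted(under, key=lambda m: m.count("/"), reverse=True)

def recursive_umount(root):
    """Hypothetical helper: tear down a mount tree like `umount -R <root>`."""
    targets = subprocess.run(
        ["findmnt", "-rn", "-o", "TARGET", "-R", root],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    for mnt in submounts_deepest_first(root, targets):
        subprocess.run(["umount", mnt], check=True)
```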
Well then, it can be persuasively argued that the payloads should just clean up their mess so blivet doesn't explode when a payload doesn't clean up. Or perhaps the payload should use the blivet interface via the anaconda.storage object/instance, and let blivet destroy the mounts at devicetree.teardown time in the exit handler.
I was able to get this working by changing blivet to use recursion, because I strongly believe that is what is really happening at device.teardown time anyway. It was simple, it just works, and it is arguably correct. But it does gloss over the root cause.
Anyway, for now, we can probably make the payloads code grow a routine that reverses prepareMountTargets(), say destroyMountTargets(). It would go in the general payload code, and then get overridden in the rpmostree payload to do the specific work. I'm not sure where to call such a routine in the exit handler. Or, we forget that and maybe move the payload.unsetup higher up, above the devicetree.teardown, but I'm not sure how that impacts other payloads.
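The destroyMountTargets() idea above can be sketched as follows. The class is hypothetical (only the method names mirror this discussion, the body is not anaconda code); mount/umount are injected as callables so the unmount ordering can be exercised without root:

```python
class OSTreePayloadMounts:
    """Sketch of a payload that can reverse its own prepareMountTargets().

    Assumption: this is illustrative only. The real anaconda payload calls
    mount(2) directly; here the operations are injected for testability.
    """

    def __init__(self, bind_mount, umount):
        self._bind_mount = bind_mount
        self._umount = umount
        self._active = []  # targets in the order they were mounted

    def prepareMountTargets(self, pairs):
        """Bind-mount each (source, target) pair and remember the target."""
        for src, dest in pairs:
            self._bind_mount(src, dest)
            self._active.append(dest)

    def destroyMountTargets(self):
        """Unmount in reverse order so nested binds go first, leaving
        nothing behind for blivet's devicetree teardown to trip over."""
        while self._active:
            self._umount(self._active.pop())
```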
This message is a reminder that Fedora 23 is nearing its end of life.
Approximately 4 (four) weeks from now Fedora will stop maintaining
and issuing updates for Fedora 23. It is Fedora's policy to close all
bug reports from releases that are no longer maintained. At that time
this bug will be closed as EOL if it remains open with a Fedora 'version' of '23'.
Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version'
to a later Fedora version.
Thank you for reporting this issue and we are sorry that we were not
able to fix it before Fedora 23 is end of life. If you would still like
to see this bug fixed and are able to reproduce it against a later version
of Fedora, you are encouraged to change the 'version' to a later Fedora
version prior to this bug being closed as described in the policy above.
Although we aim to fix as many bugs as possible during every release's
lifetime, sometimes those efforts are overtaken by events. Often a
more recent Fedora release includes newer upstream software that fixes
bugs or makes them obsolete.
Fedora 23 changed to end-of-life (EOL) status on 2016-12-20. Fedora 23 is
no longer maintained, which means that it will not receive any further
security or bug fix updates. As a result we are closing this bug.
If you can reproduce this bug against a currently maintained version of
Fedora please feel free to reopen this bug against that version. If you
are unable to reopen this bug, please file a new report against the
current release. If you experience problems, please add a comment to this bug.
Thank you for reporting this bug and we are sorry it could not be fixed.