Description of problem:
After space on the HDD is exhausted, podman can no longer clean up images.

Version-Release number of selected component (if applicable):
podman version 0.10.1.3

How reproducible:

Steps to Reproduce:
1. podman build ...
2. Make sure disk space is exhausted during step #1.
3. podman rmi image1 image2 image3

Actual results:

> $ sudo podman rmi 501848910ca8 4540553024d0 043e30f6515d 31ca89666ade ec7db351a463 0881c9c277aa
> [sudo] password for avalon:
> A container associated with containers/storage, i.e. via Buildah, CRI-O, etc., may be associated with this image: 501848910ca8
> A container associated with containers/storage, i.e. via Buildah, CRI-O, etc., may be associated with this image: 4540553024d0
> image is in use by a container
> A container associated with containers/storage, i.e. via Buildah, CRI-O, etc., may be associated with this image: 043e30f6515d
> image is in use by a container
> A container associated with containers/storage, i.e. via Buildah, CRI-O, etc., may be associated with this image: 31ca89666ade
> image is in use by a container
> A container associated with containers/storage, i.e. via Buildah, CRI-O, etc., may be associated with this image: ec7db351a463
> image is in use by a container
> A container associated with containers/storage, i.e. via Buildah, CRI-O, etc., may be associated with this image: 0881c9c277aa
> image is in use by a container
> image is in use by a container
> $ sudo podman container ls -a
> <empty output from ls>

Expected results:
Images are removed.
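Additional info:
A rough sketch of the failing sequence (the base image is a placeholder; any image plus a write large enough to fill the filesystem should do):

> $ cat Dockerfile
> FROM <any-base-image>
> RUN dd if=/dev/zero of=/root/file_too_big bs=1M count=1500000
> $ sudo podman build -f Dockerfile -t test-image .
> ... build fails with "no space left on device" ...
> $ sudo podman rmi <IDs of the leftover images>
> ... fails with "image is in use by a container", even though `podman container ls -a` prints nothing ...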
I am not able to reproduce this error with upstream master. Would you be willing to build upstream master and see if you can replicate it? If you are using overlay, there is a bug in c/storage for which I have submitted a fix: https://github.com/containers/storage/pull/258
Do you happen to have a build that I can install locally?
Could you try the builds from https://copr.fedorainfracloud.org/coprs/baude/Upstream_CRIO_Family/ ?
Thank you, Brent. One small issue with the repos: on Fedora 29 the official package version is

> 1:0.12.1.2-1.git9551f6b.fc29

while in your repo I see

> 0.12.2-1546546584.git9ffd4806.fc29

so it doesn't install out of the box (the official package carries a higher epoch, so dnf prefers it). I think if you bump the epoch to 1: or 2: in the repo it will work better. Anyway, I installed by specifying the version explicitly.

My Dockerfile is:

> FROM example.com/aosqe/nextgenflex
> RUN dd if=/dev/zero of=$HOME/file_too_big bs=1M count=1500000

Build output:

> $ BUILDAH_LAYERS=false sudo podman build -f Dockerfile -t test-image --layers=false
> STEP 1: FROM example.com/aosqe/nextgenflex
> STEP 2: RUN dd if=/dev/zero of=$HOME/file_too_big bs=1M count=1500000
> ERRO[0026] read container terminal output: input/output error: input/output error
> dd: error writing '/home/jenkins/file_too_big': No space left on device
> 11688+0 records in
> 11687+0 records out
> 12255678464 bytes (12 GB) copied, 26.6402 s, 460 MB/s
> ERRO[0028] error unmounting container: error unmounting build container "8ce62c5e29a828ef7451058bba907c87cb7d944fe12cae63cca838702e5a4948": write /var/lib/containers/storage/overlay-layers/.tmp-layers.json065046180: no space left on device
> error building at step {Env:[OPENSHIFT_BUILD_NAME=cucushift-oc40-3 OPENSHIFT_BUILD_NAMESPACE=image-build OPENSHIFT_BUILD_COMMIT=40ef39df701547e4549f40e64fb6c9bcb326a9ed PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin container=oci HOME=/home/jenkins] Command:run Args:[dd if=/dev/zero of=$HOME/file_too_big bs=1M count=1500000] Flags:[] Attrs:map[] Message:RUN dd if=/dev/zero of=$HOME/file_too_big bs=1M count=1500000 Original:RUN dd if=/dev/zero of=$HOME/file_too_big bs=1M count=1500000}: error while running runtime: exit status 1

Result:

> $ sudo podman images
> Could not get runtime: mkdir /var/lib/containers/storage/overlay/compat651067926: no space left on device

After I removed some unrelated files:

> $ sudo podman images
> REPOSITORY                      TAG        IMAGE ID       CREATED        SIZE
> example.com/aosqe/nextgenflex   latest     4628e5499724   25 hours ago   3.81 GB
> example.com/aosqe/nextgenflex   20190103   4628e5499724   25 hours ago   3.81 GB
> example.com/aosqe/nextgenflex   20181221   8797b76a65c8   2 weeks ago    3.81 GB
> example.com/aosqe/cucushift     oc40       7cb047d34ec1   2 weeks ago    2.6 GB
> $ sudo podman ps -a
> <nothing>
> $ du -sh ~/.local/share/containers/
> 4.0K    ~/.local/share/containers/
> $ sudo du -sh /var/lib/containers/storage
> 17G     /var/lib/containers/storage
> # du -sh *
> 136K    libpod
> 4.0K    mounts
> 17G     overlay
> 156K    overlay-containers
> 88K     overlay-images
> 7.2M    overlay-layers
> 4.0K    storage.lock
> 4.0K    tmp

The space is not recoverable through `podman`; I assume the easiest way to reclaim it is `rm -rf /var/lib/containers`. The current behaviour is all the more frustrating because `podman` gives no indication that leftover objects can be removed, so if the machine runs out of space the user has no easy way to discover that there are container/image objects that could be cleaned up.

Now trying clean-up:

> # rm -rf containers
> rm: cannot remove 'containers/storage/overlay': Device or resource busy
> # podman images
> <nothing>
> # rm -rf containers
> <now dir gone without errors>

An interesting observation from the above: running `podman` again somehow released the lock on the overlay mount. In summary, I think it is important to have a good automatic clean-up routine for the out-of-space condition. In my experience it is easy to hit this situation, and at present there is no user-friendly way to recover.
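For anyone else who hits this, the recovery path I ended up with, as a rough sketch — destructive, and it assumes none of the container state is worth keeping:

> # running podman once apparently releases the busy overlay mount
> $ sudo podman images
> # then wipe the storage tree: this deletes ALL images, containers and layers
> $ sudo rm -rf /var/lib/containers

`sudo podman rm -af` followed by `sudo podman rmi -af` may be worth trying first, though I have not verified that they succeed while the disk is still full.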
When I try to reproduce this, I see the following:

[fedora@localhost libpod]$ df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        983M     0  983M   0% /dev
tmpfs           997M     0  997M   0% /dev/shm
tmpfs           997M  500K  996M   1% /run
tmpfs           997M     0  997M   0% /sys/fs/cgroup
/dev/sda1       3.9G  2.2G  1.5G  60% /
tmpfs           200M  4.0K  200M   1% /run/user/1000

[fedora@localhost libpod]$ BUILDAH_LAYERS=false bin/podman build --layers=false -f /foo/Dockerfile -t test-image .
STEP 1: FROM alpine
STEP 2: RUN dd if=/dev/zero of=$HOME/file_too_big bs=1M count=1500000
dd: writing '/root/file_too_big': No space left on device
1507+0 records in
1505+1 records out
error building at step {Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] Command:run Args:[dd if=/dev/zero of=$HOME/file_too_big bs=1M count=1500000] Flags:[] Attrs:map[] Message:RUN dd if=/dev/zero of=$HOME/file_too_big bs=1M count=1500000 Original:RUN dd if=/dev/zero of=$HOME/file_too_big bs=1M count=1500000}: error while running runtime: exit status 1

[fedora@localhost libpod]$ df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        983M     0  983M   0% /dev
tmpfs           997M     0  997M   0% /dev/shm
tmpfs           997M  500K  996M   1% /run
tmpfs           997M     0  997M   0% /sys/fs/cgroup
/dev/sda1       3.9G  2.2G  1.5G  60% /
tmpfs           200M  4.0K  200M   1% /run/user/1000

When running as you describe, it seems the space is still available after the failed build?
Well, that's rather strange. Are you running Fedora 29? Maybe some other libraries are at fault?
Created attachment 1518536 [details]
podman out of disk space

I can still reproduce; see the attached log for the podman version and full output. My Fedora VM was updated just before I ran the test. Very strange.
Can you create the same log but with `podman --log-level=debug`? Maybe that shows something that will help us explain things.
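For example, something like this should capture everything in one file (the log file name is just a suggestion):

> $ sudo podman --log-level=debug build -f Dockerfile -t test-image . 2>&1 | tee podman-debug.log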
Created attachment 1519149 [details]
podman out of space debug

I didn't have time yesterday. Please find the debug log attached.
podman-1.1.0-1.git006206a.fc28 has been submitted as an update to Fedora 28. https://bodhi.fedoraproject.org/updates/FEDORA-2019-2334f59273
podman-1.1.0-1.git006206a.fc29 has been submitted as an update to Fedora 29. https://bodhi.fedoraproject.org/updates/FEDORA-2019-ead0cd452a
podman-1.1.0-1.git006206a.fc28 has been pushed to the Fedora 28 testing repository. If problems still persist, please make note of it in this bug report. See https://fedoraproject.org/wiki/QA:Updates_Testing for instructions on how to install test updates. You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2019-2334f59273
podman-1.1.0-1.git006206a.fc29 has been pushed to the Fedora 29 testing repository. If problems still persist, please make note of it in this bug report. See https://fedoraproject.org/wiki/QA:Updates_Testing for instructions on how to install test updates. You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2019-ead0cd452a
podman-1.1.2-1.git0ad9b6b.fc28 has been submitted as an update to Fedora 28. https://bodhi.fedoraproject.org/updates/FEDORA-2019-d244a0fe3e
podman-1.1.2-1.git0ad9b6b.fc29 has been submitted as an update to Fedora 29. https://bodhi.fedoraproject.org/updates/FEDORA-2019-5730099f0b
podman-1.1.2-1.git0ad9b6b.fc28 has been pushed to the Fedora 28 testing repository. If problems still persist, please make note of it in this bug report. See https://fedoraproject.org/wiki/QA:Updates_Testing for instructions on how to install test updates. You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2019-d244a0fe3e
podman-1.1.2-1.git0ad9b6b.fc29 has been pushed to the Fedora 29 testing repository. If problems still persist, please make note of it in this bug report. See https://fedoraproject.org/wiki/QA:Updates_Testing for instructions on how to install test updates. You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2019-5730099f0b
podman-1.1.2-1.git0ad9b6b.fc29 has been pushed to the Fedora 29 stable repository. If problems still persist, please make note of it in this bug report.
podman-1.1.2-1.git0ad9b6b.fc28 has been pushed to the Fedora 28 stable repository. If problems still persist, please make note of it in this bug report.