Created attachment 1476727 [details]
complete stacktrace

Description of problem:
Unfortunately, our images are often hundreds, and always at least tens, of gigabytes. When I try to export such an image, docker-current dumps core.

Version-Release number of selected component (if applicable):
docker-1.13.1-61.git9cb56fd.fc28.x86_64

How reproducible:
Always

Steps to Reproduce:
1. docker run -it --name big fedora /bin/bash
2. dd if=/dev/urandom of=/var/tmp/bigfile bs=4096 count=$((4*1024*1024)) status=progress
3. docker export big | dd of=/var/tmp/backup.tar.gz status=progress

Actual results:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
0+0 records in
0+0 records out
0 bytes copied, 107.724 s, 0.0 kB/s

Expected results:
16G bytes copied ...

Additional info:
A few lines from the logs (the complete backtrace has ~1000 lines):

dockerd-current[3399]: fatal error: runtime: out of memory
dockerd-current[3399]: runtime stack:
dockerd-current[3399]: runtime.throw(0x18b563e, 0x16)
dockerd-current[3399]: runtime.sysMap(0xc7f2630000, 0x197ad0000, 0x0, 0x2511678)
dockerd-current[3399]: runtime.(*mheap).sysAlloc(0x24f6840, 0x197ad0000, 0x0)
dockerd-current[3399]: runtime.(*mheap).grow(0x24f6840, 0xcbd67, 0x0)
Any chance you could try this out with podman to see if it works there?
Well, if you tell me how to make podman work in parallel with docker, then I have no problem running those 3 commands on my machine. At the moment, I have several docker images, but `podman images` returns nothing.

By the way, I am getting the same results on RHEL-7.5: docker-1.13.1-63.git94f4240.el7.x86_64
Podman does not use the Docker database. You would have to pull the images into podman. Theoretically, you can pull images directly out of the docker-daemon.

# docker images | grep centos
docker.io/centos                   7        49f7960eb7e4   2 months ago   200 MB
docker.io/centos/ruby-22-centos7   latest   e42d0dccf073   2 months ago   566 MB

# podman pull docker-daemon:docker.io/centos:7
Getting image source signatures
Copying blob sha256:bcc97fbfc9e1a709f0eb78c1da59caeb65f43dc32cd5deeb12b8c1784e5b8237
 198.59 MB / 198.59 MB [====================================================] 1s
Copying config sha256:49f7960eb7e4cb46f1a02c1f8174c6fac07ebf1eb6d8deffbcb5c695f1c9edd5
 2.15 KB / 2.15 KB [========================================================] 0s
Writing manifest to image destination
Storing signatures
49f7960eb7e4cb46f1a02c1f8174c6fac07ebf1eb6d8deffbcb5c695f1c9edd5

# podman images | grep centos
docker.io/library/centos   7   49f7960eb7e4   2 months ago   208MB
I installed podman, imported an image from the docker-daemon, ran my test, and podman successfully exported the container.

How can I use podman to export a running docker container (on direct-lvm)? Can I update the configuration in /etc/containers/ to use stuff created by docker? How come I could create a file of 16GB when podman uses overlay, where image size should not exceed 10GB?
Sorry, podman cannot use docker containers; it can only import its images. overlay does not have size limits unless you turn on quota. Devicemapper sets a default max size for its devices. Would it be possible to convert your workload over from Docker to Podman?
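For reference, a sketch of how that devicemapper default could be raised. This is a configuration fragment, not something from this bug report; `dm.basesize` is the daemon's devicemapper storage option, and the 20G value is just an example. Note that it only affects base devices created after the change.

```shell
# /etc/docker/daemon.json fragment (example value; adjust as needed):
# {
#   "storage-driver": "devicemapper",
#   "storage-opts": ["dm.basesize=20G"]
# }
#
# Or equivalently, on the daemon command line:
dockerd --storage-driver devicemapper --storage-opt dm.basesize=20G
```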
Yes, but only if I save the container which is running right now. The tools I am working with these days cannot be automated, and I spent a few hours configuring the container content (alright, the tools can be automated, but we don't know how).

Would it be possible to fix the crash first? I need export because I have no disk space for commit. I am happy to migrate from Docker to Podman because I hope that Podman is more verbose (i.e. reports progress for commit or export) and detects a full disk (I ran docker commit and it didn't stop upon consuming the entire LVM).
Vivek, is there a way to expand the default volume size in devicemapper for Docker?
You might be able to do:

docker export containerid | podman import -f -

I am a little shaky on the syntax.
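The suggestion above is a streaming pipeline: the exported tarball flows straight from one tool into the other with no intermediate file. A sketch of the same pattern, with the docker/podman line left as a comment (it needs both daemons, and the exact podman import flags are an assumption) and a plain-tar equivalent that can actually run anywhere:

```shell
# The real pipeline would look roughly like this (requires docker and
# podman; "big" and "myimage" are placeholder names, and reading the
# tarball from stdin via "-" is assumed to mirror docker's convention):
#   docker export big | podman import - myimage
#
# The same streaming pattern with plain tar: archive a directory and
# list it back from stdin, no intermediate file on disk.
mkdir -p /tmp/demo_src && echo hello > /tmp/demo_src/file.txt
tar -C /tmp/demo_src -cf - . | tar -tf - | grep -q file.txt && echo "stream ok"
```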
The docker backtrace suggests out of memory. How much memory does this machine have? I suspect that if you make more memory available, it will work. The question of optimizing docker will remain, though. I don't know if it is due to docker consuming too much memory or because of too little memory available on the box.
> Vivek, is there a way to expand the default volume size in devicemapper for Docker?

I could add more space by adding a new volume group and expanding the docker pool LVM, but I have no physical device I can attach to that machine ...

> docker export containerid | podman import -f -

The problem is that "docker export" does not work - that's the subject of this bug report.

> I don't know if it is due to docker consuming too much memory or because of too little memory available on the box.

$ free -h
              total        used        free      shared  buff/cache   available
Mem:            15G        8.0G        3.7G        950M        3.7G        6.2G
Swap:          7.8G          0B        7.8G

What about teaching docker to read layers in small chunks instead of pulling them into memory completely? We have 200GB images, and I hope I don't need to buy more RAM to be able to export those images :) Anyway, I was able to export big containers with older versions of Docker.
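The fix the reporter asks for amounts to streaming. A minimal demonstration (not docker's actual code) of why chunked reads keep memory flat: a pipeline pushes an arbitrarily large stream through a fixed-size buffer, so only one buffer's worth of data is in flight at any moment, regardless of total stream size.

```shell
# Pipe 16 MB through dd with a 4 KB buffer: only ~4 KB is held at a time,
# yet the full stream arrives intact. A streaming docker export would
# treat layer data the same way instead of loading whole layers into RAM.
head -c $((16*1024*1024)) /dev/zero | dd bs=4096 status=none | wc -c
# prints 16777216
```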
Can you test this out by removing all authorization plugins? Basically:

1. systemctl edit --full docker.service
2. remove this line (remove the backslash as well): "--authorization-plugin=rhel-push-plugin \"

With that, it shouldn't panic anymore and it should work. Please test it out and report back if you can :) I'm working on a fix.
https://github.com/projectatomic/docker/commit/c81855c768d7070e9e127a15d7811bb02efc271b
I can confirm that removing the option '--authorization-plugin=rhel-push-plugin' from dockerd-current command line arguments fixes the crash for me. Thank you very much indeed! Now, I can start migrating to podman.
(In reply to Jakub Filak from comment #13)
> I can confirm that removing the option
> '--authorization-plugin=rhel-push-plugin' from dockerd-current command line
> arguments fixes the crash for me.
>
> Thank you very much indeed!
>
> Now, I can start migrating to podman.

Awesome. The next docker release is going to have the fix even with authz plugins enabled, so if you need that, just wait for a docker update.
Confirming that the latest projectatomic/docker:docker-1.13.1-rhel branch fixes the problem. Thanks!
This message is a reminder that Fedora 28 is nearing its end of life. On 2019-May-28, Fedora will stop maintaining and issuing updates for Fedora 28. It is Fedora's policy to close all bug reports from releases that are no longer maintained. At that time, this bug will be closed as EOL if it remains open with a Fedora 'version' of '28'.

Package Maintainer: If you wish for this bug to remain open because you plan to fix it in a currently maintained version, simply change the 'version' to a later Fedora version.

Thank you for reporting this issue, and we are sorry that we were not able to fix it before Fedora 28 reached end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora, you are encouraged to change the 'version' to a later Fedora version prior to this bug being closed as described in the policy above.

Although we aim to fix as many bugs as possible during every release's lifetime, sometimes those efforts are overtaken by events. Often a more recent Fedora release includes newer upstream software that fixes bugs or makes them obsolete.
Fedora 28 changed to end-of-life (EOL) status on 2019-05-28. Fedora 28 is no longer maintained, which means that it will not receive any further security or bug fix updates. As a result, we are closing this bug.

If you can reproduce this bug against a currently maintained version of Fedora, please feel free to reopen this bug against that version. If you are unable to reopen this bug, please file a new report against the current release. If you experience problems, please add a comment to this bug.

Thank you for reporting this bug, and we are sorry it could not be fixed.