Bug 1330294
Summary: docker: Error response from daemon: Cannot start container <uuid> [9] System error: exit status 1.

| Field | Value |
|---|---|
| Product | Fedora |
| Component | docker |
| Status | CLOSED EOL |
| Reporter | Chris Murphy <bugzilla> |
| Assignee | Daniel Walsh <dwalsh> |
| QA Contact | Fedora Extras Quality Assurance <extras-qa> |
| Severity | unspecified |
| Priority | unspecified |
| Version | 24 |
| Hardware | Unspecified |
| OS | Unspecified |
| Whiteboard | AcceptedFreezeException |
| Doc Type | Bug Fix |
| Type | Bug |
| Last Closed | 2017-08-08 14:18:42 UTC |
| Bug Blocks | 1230434 |
| CC | adimania, admiller, amurdaca, awilliam, bugzilla, dustymabe, dwalsh, ichavero, jcajka, jchaloup, lsm5, marianne, miminar, nalin, riek, sgallagh, vbatts |
Description (Chris Murphy, 2016-04-25 19:38:46 UTC)
```
[root@f24s ~]# pvs
  PV         VG     Fmt  Attr PSize  PFree
  /dev/vda2  fedora lvm2 a--  49.51g 19.42g
[root@f24s ~]# vgs
  VG     #PV #LV #SN Attr   VSize  VFree
  fedora   1   3   0 wz--n- 49.51g 19.42g
[root@f24s ~]# lvs
  LV          VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  docker-pool fedora twi-a-t--- 12.98g             2.27   0.30
  root        fedora -wi-ao---- 15.00g
  swap        fedora -wi-ao----  2.00g
[root@f24s ~]# dmsetup status
fedora-swap: 0 4202496 linear
fedora-root: 0 31457280 linear
fedora-docker--pool_tdata: 0 27222016 linear
fedora-docker--pool_tmeta: 0 106496 linear
fedora-docker--pool: 0 27222016 thin-pool 11 40/13312 603/26584 - rw no_discard_passdown queue_if_no_space -
[root@f24s ~]# docker info
Containers: 4
 Running: 0
 Paused: 0
 Stopped: 4
Images: 1
Server Version: 1.10.3
Storage Driver: devicemapper
 Pool Name: fedora-docker--pool
 Pool Blocksize: 524.3 kB
 Base Device Size: 10.74 GB
 Backing Filesystem: xfs
 Data file:
 Metadata file:
 Data Space Used: 316.1 MB
 Data Space Total: 13.94 GB
 Data Space Available: 13.62 GB
 Metadata Space Used: 163.8 kB
 Metadata Space Total: 54.53 MB
 Metadata Space Available: 54.36 MB
 Udev Sync Supported: true
 Deferred Removal Enabled: true
 Deferred Deletion Enabled: true
 Deferred Deleted Device Count: 0
 Library Version: 1.02.122 (2016-04-09)
Execution Driver: native-0.2
Logging Driver: journald
Plugins:
 Volume: local
 Network: bridge null host
Kernel Version: 4.5.2-301.fc24.x86_64
Operating System: Fedora 24 (Server Edition)
OSType: linux
Architecture: x86_64
Number of Docker Hooks: 2
CPUs: 3
Total Memory: 1.954 GiB
Name: f24s.localdomain
ID: EFTO:ZXE3:W3T6:Y5DO:5JQF:5O6V:X3L2:XLNV:QQ4G:3EOC:KWST:C7GZ
Registries: docker.io (secure)
```

Ran `restorecon -rv /` but the problem persists.

Created attachment 1150623 [details]
journal_docker.txt
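One common cause of devicemapper start failures worth ruling out is an exhausted thin pool. A minimal sketch of that check, using the Data% value from the `lvs` output above; the 90% threshold and the `data_pct` variable are illustrative, not from the report:

```shell
# Sketch: check whether the docker-pool thin volume is nearly full.
# data_pct is hard-coded from the lvs output above; on a live system
# it would come from: lvs --noheadings -o data_percent fedora/docker-pool
data_pct="2.27"
if awk "BEGIN{exit !($data_pct > 90)}"; then
  echo "thin pool nearly full"
else
  echo "thin pool ok"
fi
```

Here the pool is only 2.27% used, so pool exhaustion was not the cause in this bug.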
kernel messages when running the container:

```
[  959.965045] XFS (dm-5): Unmounting Filesystem
[ 1252.895611] XFS (dm-5): Mounting V5 Filesystem
[ 1252.926490] XFS (dm-5): Ending clean mount
[ 1252.946118] XFS (dm-5): Unmounting Filesystem
[ 1253.026196] XFS (dm-5): Mounting V5 Filesystem
[ 1253.048121] XFS (dm-5): Ending clean mount
[ 1253.057239] XFS (dm-5): Unmounting Filesystem
[ 1253.130571] XFS (dm-5): Mounting V5 Filesystem
[ 1253.149359] XFS (dm-5): Ending clean mount
[ 1253.153602] device vethef47816 entered promiscuous mode
[ 1253.153745] IPv6: ADDRCONF(NETDEV_UP): vethef47816: link is not ready
[ 1253.197843] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[ 1253.264051] docker0: port 1(vethef47816) entered disabled state
[ 1253.264684] device vethef47816 left promiscuous mode
[ 1253.264702] docker0: port 1(vethef47816) entered disabled state
[ 1253.316093] XFS (dm-5): Unmounting Filesystem
```

Do you have two versions of docker installed?

```
docker-1.10.3-4.gitf8a9a2a.fc24.x86_64
docker-1.10.3-5.gitef2fa35.fc24.x86_64
```

Can you do a `dnf reinstall docker`?

I do not have two dockers installed; I tested those two versions independently, with the same results.

Does `docker run --privileged ...` work? I have no problems on my f24 box.

I just installed Fedora-Atomic-dvd-x86_64-24-20160424.n.0.iso in a libvirt VM, and I get the exact same problem with the 24.16 tree, which has docker-1.10.3-4.gitf8a9a2a.fc24.x86_64. There isn't a newer tree to test 1.10.3-5.

```
# docker run --privileged -it docker.io/fedora /bin/bash
docker: Error response from daemon: Cannot start container 1324c442e34c8a2a129749399a76a31fa1c60cbc8f2aa81792d1ae72bc5f6f1d: [9] System error: exit status 1
```

Where can I get an image to play with? (Vagrant preferred)

To be clear, the original description through comment 6 are based on a clean installation of Fedora-Server-netinst-x86_64-24-20160424.n.0.iso in a libvirt VM on Fedora 23.
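The duplicate-package question above can be checked mechanically. A hedged sketch: on a real system `installed` would be the output of `rpm -q docker`, but here it is simulated with the two package names quoted in the comment:

```shell
# Sketch: detect whether more than one docker package version is installed.
# On a live system this would be: installed=$(rpm -q docker)
installed="docker-1.10.3-4.gitf8a9a2a.fc24.x86_64
docker-1.10.3-5.gitef2fa35.fc24.x86_64"
count=$(printf '%s\n' "$installed" | grep -c '^docker-')
if [ "$count" -gt 1 ]; then
  echo "duplicate docker packages: $count"
else
  echo "single docker package"
fi
```

In the reporter's case `rpm -q docker` returned only one package at a time; the two versions were tested in separate installs.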
And comment 8 is Fedora-Atomic-dvd-x86_64-24-20160424.n.0.iso, also in a libvirt VM on Fedora 23. So the thing in common is Fedora 23; maybe this is a libvirt problem?

When I pass --unprivileged, "SELinux: mount invalid. Same superblock, different security settings" doesn't show up.

```
[  856.932982] XFS (dm-5): Mounting V5 Filesystem
[  856.960872] XFS (dm-5): Ending clean mount
[  856.982850] XFS (dm-5): Unmounting Filesystem
[  857.058163] XFS (dm-5): Mounting V5 Filesystem
[  857.085270] XFS (dm-5): Ending clean mount
[  857.094023] XFS (dm-5): Unmounting Filesystem
[  857.157419] XFS (dm-5): Mounting V5 Filesystem
[  857.181428] XFS (dm-5): Ending clean mount
[  857.187244] device vetha4aa5a1 entered promiscuous mode
[  857.187501] IPv6: ADDRCONF(NETDEV_UP): vetha4aa5a1: link is not ready
[  857.336169] docker0: port 1(vetha4aa5a1) entered disabled state
[  857.336890] device vetha4aa5a1 left promiscuous mode
[  857.336915] docker0: port 1(vetha4aa5a1) entered disabled state
[  857.392624] XFS (dm-5): Unmounting Filesystem
```

In the F24 Cloud atomic VM:

```
-bash-4.3# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:3f:7e:76 brd ff:ff:ff:ff:ff:ff
    inet 192.168.124.71/24 brd 192.168.124.255 scope global dynamic ens3
       valid_lft 2199sec preferred_lft 2199sec
    inet6 fe80::8897:da7a:936f:db9b/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:aa:e3:dd:15 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
```

On the F23 Workstation host:

```
[root@f23m images]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp2s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether c8:2a:14:02:88:99 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.4/24 brd 10.0.0.255 scope global dynamic enp2s0f0
       valid_lft 595352sec preferred_lft 595352sec
    inet6 2601:282:702:b960:257f:20b8:4220:48ee/64 scope global temporary dynamic
       valid_lft 345603sec preferred_lft 76351sec
    inet6 2601:282:702:b960:ca2a:14ff:fe02:8899/64 scope global mngtmpaddr noprefixroute dynamic
       valid_lft 345603sec preferred_lft 345603sec
    inet6 fe80::ca2a:14ff:fe02:8899/64 scope link
       valid_lft forever preferred_lft forever
3: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 52:54:00:83:90:be brd ff:ff:ff:ff:ff:ff
    inet 192.168.124.1/24 brd 192.168.124.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:83:90:be brd ff:ff:ff:ff:ff:ff
9: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master virbr0 state UNKNOWN group default qlen 1000
    link/ether fe:54:00:3f:7e:76 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:fe3f:7e76/64 scope link
       valid_lft forever preferred_lft forever
```

Cloud_Base vagrant-libvirt: https://kojipkgs.fedoraproject.org/compose/branched/Fedora-24-20160424.n.0/compose/CloudImages/x86_64/images/Fedora-Cloud-Base-Vagrant-24-20160424.n.0.x86_64.vagrant-libvirt.box

But I haven't tested that image; the Cloud Base qcow2 doesn't have docker installed, and I'm not sure if that's expected.
Complete list: https://fedoraproject.org/wiki/Test_Results:Fedora_24_Branched_20160424.n.0_Installation?rd=Test_Results:Current_Installation_Test

OK, interesting: removing and reinstalling docker solves the problem. Looks like this is a dup of bug 1322909, except there it was 1.10.3-3 and I'm having this problem with -4. Once it happens, -4 has to be dnf removed; a subsequent dnf install then puts in -5 from updates-testing, which appears to work OK. We had better ship with the -5 version. It just reached karma for pushing.

Proposed as a Freeze Exception for 24-beta by Fedora user chrismurphy using the blocker tracking app because: Docker doesn't work with a clean installation, and in particular atomic host builds will get stuck, since they can't do dnf remove and neither a rollback nor an update fixes the problem.

+1 FE. If this really is an issue with updates, we don't want an affected version on the frozen media.

+1 FE, sure.

+1 FE. I am not seeing it, but it may be my setup; I had a report from someone else that it was broken.

That's +3 (or +4 if we count Dan), setting AcceptedFreezeException.

Fixed by shipping docker-1.10.3-5*

Since this was voted as FE and docker-1.10.3-7.gita41254f.fc24 has enough karma for stable [1], can we get it pushed to stable?

[1] https://bodhi.fedoraproject.org/updates/FEDORA-2016-3223384190

I've rolled out another F24 update: https://bodhi.fedoraproject.org/updates/FEDORA-2016-70da2b3c7c

This package has changed ownership in the Fedora Package Database. Reassigning to the new owner of this component.

This message is a reminder that Fedora 24 is nearing its end of life. Approximately two weeks from now, Fedora will stop maintaining and issuing updates for Fedora 24. It is Fedora's policy to close all bug reports from releases that are no longer maintained. At that time this bug will be closed as EOL if it remains open with a Fedora 'version' of '24'.
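The workaround that resolved this (a full remove followed by a fresh install, since `dnf reinstall docker` was not enough) can be sketched as below. The `DRY_RUN` guard and the explicit `--enablerepo=updates-testing` flag are my additions, not from the report; the comment only says the fresh install pulled -5 "from u-t":

```shell
# Sketch of the workaround described above: remove docker entirely,
# then install fresh, picking up the fixed -5 build from updates-testing.
# With DRY_RUN=1 (the default here) the commands are only printed.
DRY_RUN=${DRY_RUN:-1}
run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "+ $*"        # preview mode: show the command
  else
    "$@"               # live mode: actually execute it
  fi
}
run dnf -y remove docker
run dnf -y --enablerepo=updates-testing install docker
```

Running it as-is prints the two dnf commands; setting `DRY_RUN=0` would execute them on a Fedora 24 host.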
Package Maintainer: If you wish for this bug to remain open because you plan to fix it in a currently maintained version, simply change the 'version' to a later Fedora version.

Thank you for reporting this issue; we are sorry that we were not able to fix it before Fedora 24 reached end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora, you are encouraged to change the 'version' to a later Fedora version before this bug is closed, as described in the policy above. Although we aim to fix as many bugs as possible during every release's lifetime, sometimes those efforts are overtaken by events. Often a more recent Fedora release includes newer upstream software that fixes bugs or makes them obsolete.

Fedora 24 changed to end-of-life (EOL) status on 2017-08-08. Fedora 24 is no longer maintained, which means that it will not receive any further security or bug fix updates. As a result, we are closing this bug. If you can reproduce this bug against a currently maintained version of Fedora, please feel free to reopen it against that version. If you are unable to reopen this bug, please file a new report against the current release. If you experience problems, please add a comment to this bug. Thank you for reporting this bug; we are sorry it could not be fixed.