Bug 1389707 - Failed to delete specified docker container
Summary: Failed to delete specified docker container
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: atomic
Version: 7.4
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Lokesh Mandvekar
QA Contact: atomic-bugs@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1380848
 
Reported: 2016-10-28 09:38 UTC by Alex Jia
Modified: 2017-05-26 14:28 UTC (History)
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-05-26 14:28:22 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2017:1323 0 normal SHIPPED_LIVE atomic bug fix and enhancement update 2017-05-26 18:13:56 UTC

Description Alex Jia 2016-10-28 09:38:03 UTC
Description of problem:
As in the summary: `atomic containers delete` fails to delete the specified docker container.

Version-Release number of selected component (if applicable):

[root@atomic-host-001 cloud-user]# rpm -q atomic
atomic-1.13.3-1.el7.x86_64
[root@atomic-host-001 cloud-user]# atomic host status
State: idle
Deployments:
● rhel-atomic-host:rhel-atomic-host/7/x86_64/standard
       Version: 7.3 (2016-10-26 14:24:09)
        Commit: 90c9735becfff1c55c8586ae0f2c904bc0928f042cd4d016e9e0e2edd16e5e97
        OSName: rhel-atomic-host
  GPGSignature: (unsigned)
      Unlocked: development

[root@atomic-host-001 cloud-user]# rpm -q atomic skopeo
atomic-1.13.3-1.el7.x86_64
skopeo-0.1.17-0.4.git550a480.el7.x86_64

How reproducible:
always

Steps to Reproduce:
1. docker run -itd busybox /bin/sh
2. atomic containers delete <container_id> --force

Actual results:

[root@atomic-host-001 cloud-user]# docker run -itd busybox /bin/sh
bf253dc99a3ba92cc5a043f84ec34b0f4796aec843cabf622e7f1c729a10895b

[root@atomic-host-001 cloud-user]# atomic containers list
   CONTAINER ID IMAGE                COMMAND              CREATED          STATUS    RUNTIME   
   bf253dc99a3b busybox              /bin/sh              2016-10-28 09:36 running   Docker

[root@atomic-host-001 cloud-user]# atomic containers delete  bf253dc99a3b --force
Expected a string but found ['bf253dc99a3b'] (<type 'list'>) instead

Expected results:
The specified container is deleted.

Additional info:
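The error text reads like a type check in the atomic CLI receiving the whole argparse result list where a single string ID was expected. A minimal sketch of that failure mode (the function and argument names are hypothetical, not atomic's actual code):

```python
import argparse

def validate_target(target):
    # Mirrors the style of the reported error: reject anything
    # other than a single string container ID/name.
    if not isinstance(target, str):
        raise ValueError(
            "Expected a string but found %r (%s) instead" % (target, type(target)))
    return target

parser = argparse.ArgumentParser(prog="atomic-containers-delete")
# nargs='+' collects positional args into a list, even for a single ID
parser.add_argument("containers", nargs="+")
parser.add_argument("--force", "-f", action="store_true")

args = parser.parse_args(["bf253dc99a3b", "--force"])
try:
    validate_target(args.containers)      # passing the whole list fails
except ValueError as err:
    print(err)
validate_target(args.containers[0])       # passing one element succeeds
```

The fix would then be to iterate over `args.containers` (or index into it) rather than handing the list itself to code expecting one ID.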

Comment 2 Alex Jia 2016-11-01 09:35:02 UTC
The same issue occurs on atomic-1.13.5-1.el7.x86_64 with skopeo-0.1.17-0.5.git1f655f3.el7.x86_64.

Comment 3 Alex Jia 2016-11-22 08:53:15 UTC
The same issue occurs in atomic-1.13.8-1.el7.x86_64.

[root@atomic-00 cloud-user]# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
0a092545bf79        busybox             "/bin/sh"           22 seconds ago      Up 20 seconds                           grave_yalow
[root@atomic-00 cloud-user]# atomic containers delete  0a092545bf79 --force
Expected a string but found ['0a092545bf79'] (<type 'list'>) instead
[root@atomic-00 cloud-user]# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
0a092545bf79        busybox             "/bin/sh"           50 seconds ago      Up 48 seconds                           grave_yalow
[root@atomic-00 cloud-user]# atomic containers delete  grave_yalow --force
Expected a string but found ['grave_yalow'] (<type 'list'>) instead
[root@atomic-00 cloud-user]# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED              STATUS              PORTS               NAMES
0a092545bf79        busybox             "/bin/sh"           About a minute ago   Up About a minute                       grave_yalow

Comment 4 Brent Baude 2017-02-27 16:08:09 UTC
I believe this is now fixed.  Alex agree?

Comment 5 Alex Jia 2017-02-28 14:59:28 UTC
(In reply to Brent Baude from comment #4)
> I believe this is now fixed.  Alex agree?

Yes, it works for me.

Comment 6 Brent Baude 2017-04-04 19:28:40 UTC
Lokesh, do we need to cite this in a release note or anything before closing?

Comment 8 Alex Jia 2017-04-28 00:26:00 UTC
Containers can be deleted successfully on my rhel7 system, but I got an "Internal Server Error"; please see the following details.

[root@hp-dl360g9-04 ~]# rpm -q atomic skopeo docker
atomic-1.17.1-1.gitf304570.el7.x86_64
skopeo-0.1.18-1.el7.x86_64
docker-1.12.6-18.git29d6f69.el7.x86_64

[root@hp-dl360g9-04 ~]# docker run --name cont1 -itd busybox /bin/sh 
7c29bac9d18203627cb9bda6f1643d26c095ff84b5951840e640de0af1ee7510
[root@hp-dl360g9-04 ~]# docker run --name cont2 -itd busybox /bin/sh 
405ecba2bfc6d43e5e0669967f7bb40b82fc382b4d60892c4173621b2ab76e0e
[root@hp-dl360g9-04 ~]# atomic containers list
   CONTAINER ID IMAGE                COMMAND              CREATED          STATE      BACKEND    RUNTIME   
   405ecba2bfc6 busybox              /bin/sh              2017-04-28 08:18 running    docker     docker    
   7c29bac9d182 busybox              /bin/sh              2017-04-28 08:18 running    docker     docker    
[root@hp-dl360g9-04 ~]# atomic containers delete -fa
Do you wish to delete the following images?

   ID           NAME                 IMAGE_NAME                STORAGE   
   405ecba2bfc6 cont2                busybox                   docker    
   7c29bac9d182 cont1                busybox                   docker    
   db03150831d2 prickly_yalow        a456debd3f4382e0dd951161c docker    
   1ae827db5c95 tiny_blackwell       6209dda3052bbb897a53945e6 docker    

Confirm (y/N) y
Failed to delete container 1ae827db5c959424b937674ca8bb2b4e419b6d8e94fdb9520f8f719db0bb3020: 500 Server Error: Internal Server Error ("{"message":"Driver devicemapper failed to remove root filesystem 1ae827db5c959424b937674ca8bb2b4e419b6d8e94fdb9520f8f719db0bb3020: failed to remove device c99a0ef0816ef62e14e2b57f0462ae8d530eb0675a658d9e7f4915ca0bcc0300:Device is Busy"}")
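The devicemapper "Device is Busy" failure is typically transient (the container's root filesystem is still mounted somewhere, e.g. by a lingering mount namespace), so one common workaround is to retry removal with a short backoff. A minimal sketch with an injected delete callable standing in for the real Docker removal call (the helper and its names are assumptions, not atomic's code):

```python
import time

def remove_with_retry(delete_fn, container_id, attempts=5, delay=0.05):
    """Call delete_fn(container_id), retrying on 'Device is Busy' errors.

    delete_fn is a stand-in for whatever actually removes the container
    (e.g. a Docker API call); here it is expected to raise RuntimeError
    while the devicemapper device is still busy. Returns the attempt
    number that succeeded.
    """
    for attempt in range(1, attempts + 1):
        try:
            delete_fn(container_id)
            return attempt
        except RuntimeError as err:
            if "Device is Busy" not in str(err) or attempt == attempts:
                raise
            time.sleep(delay * attempt)  # linear backoff before retrying

# Example: a fake delete that stays "busy" for the first two calls.
calls = {"n": 0}
def flaky_delete(cid):
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("500 Server Error: ... Device is Busy")

print(remove_with_retry(flaky_delete, "1ae827db5c95"))  # prints 3
```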


It's okay to delete each container separately. 

[root@hp-dl360g9-04 ~]# docker run --name cont1 -itd busybox /bin/sh
3a6306a251af0057c48e9a0470c3345a2798703dc780fd204f5d16e179d90731

[root@hp-dl360g9-04 ~]# docker run --name cont2 -itd busybox /bin/sh
713f146bdc9e09730a191153bd40dc3f1d70491fff99dc4fb2df1f86ec00fe02

[root@hp-dl360g9-04 ~]# atomic containers list
   CONTAINER ID IMAGE                COMMAND              CREATED          STATE      BACKEND    RUNTIME   
   713f146bdc9e busybox              /bin/sh              2017-04-28 08:22 running    docker     docker    
   3a6306a251af busybox              /bin/sh              2017-04-28 08:22 running    docker     docker    
[root@hp-dl360g9-04 ~]# atomic containers delete -f 713f146bdc9e
Do you wish to delete the following images?

   ID           NAME                 IMAGE_NAME                STORAGE   
   713f146bdc9e cont2                busybox                   docker    

Confirm (y/N) n
User aborted delete operation for ['713f146bdc9e']

[root@hp-dl360g9-04 ~]# atomic containers list
   CONTAINER ID IMAGE                COMMAND              CREATED          STATE      BACKEND    RUNTIME   
   713f146bdc9e busybox              /bin/sh              2017-04-28 08:22 running    docker     docker    
   3a6306a251af busybox              /bin/sh              2017-04-28 08:22 running    docker     docker    

[root@hp-dl360g9-04 ~]# atomic -y containers delete -f 713f146bdc9e
The following containers will be deleted.

   ID           NAME                 IMAGE_NAME                STORAGE   
   713f146bdc9e cont2                busybox                   docker 
   
[root@hp-dl360g9-04 ~]# atomic containers list
   CONTAINER ID IMAGE                COMMAND              CREATED          STATE      BACKEND    RUNTIME   
   3a6306a251af busybox              /bin/sh              2017-04-28 08:22 running    docker     docker    

[root@hp-dl360g9-04 ~]# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED              STATUS              PORTS               NAMES
3a6306a251af        busybox             "/bin/sh"           About a minute ago   Up About a minute                       cont1

[root@hp-dl360g9-04 ~]# atomic -y containers delete -f cont1
The following containers will be deleted.

   ID           NAME                 IMAGE_NAME                STORAGE   
   3a6306a251af cont1                busybox                   docker    

[root@hp-dl360g9-04 ~]# atomic containers list
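The transcript above also shows atomic's confirmation flow: an interactive "Confirm (y/N)" prompt that the global `-y` flag skips. A minimal sketch of that pattern (a hypothetical helper, not atomic's actual implementation; the prompt is injectable so it can be tested without a terminal):

```python
def confirm_delete(containers, assumeyes=False, ask=input):
    """Return True if deletion should proceed.

    containers is a list of (id, name) pairs; assumeyes mirrors
    atomic's global -y flag; ask defaults to builtin input().
    """
    print("Do you wish to delete the following images?\n")
    for cid, name in containers:
        print("   %-12s %s" % (cid, name))
    if assumeyes:
        return True
    answer = ask("Confirm (y/N) ").strip().lower()
    if answer != "y":
        print("User aborted delete operation for %r" % [c for c, _ in containers])
        return False
    return True
```

Putting `-y` before the subcommand (as in `atomic -y containers delete`) corresponds to passing `assumeyes=True` here.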

Comment 9 Brent Baude 2017-04-28 14:07:41 UTC
@Alex,

There are a couple of issues here.  First, can we open this as a new bug because it does differ from the original report?

Secondly, I cannot reproduce this. Of note, the container that fails to delete isn't the busybox container but actually something else. Is it possible that image is corrupt or in a bad state?

If you can create a reproducer procedure, that would be super helpful.  I tried on RHEL and Fedora.

Comment 10 Alex Jia 2017-05-02 03:14:51 UTC
(In reply to Brent Baude from comment #9)

> If you can create a reproducer procedure, that would be super helpful.  I
> tried on RHEL and Fedora.

It can't always be reproduced; I will open a new bug if I hit the issue again, thanks.

Comment 11 Alex Jia 2017-05-17 02:15:54 UTC
It also works in atomic-1.17.2-2.git2760e30.el7.x86_64.

Comment 13 errata-xmlrpc 2017-05-26 14:28:22 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1323

