Bug 1309444 - After deleting a project, the container is still alive
Summary: After deleting a project, the container is still alive
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: oc
Version: 3.1.0
Hardware: Unspecified
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Derek Carr
QA Contact: Wei Sun
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-02-17 20:06 UTC by Weibin Liang
Modified: 2016-02-24 15:58 UTC
CC List: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-02-24 15:58:15 UTC
Target Upstream Version:
Embargoed:



Description Weibin Liang 2016-02-17 20:06:01 UTC
Description of problem:
After deleting a project, the containers created in that project are still alive.

Version-Release number of selected component (if applicable):
Red Hat Enterprise Linux Server release 7.2 (Maipo)
oc v3.1.1.6
kubernetes v1.1.0-origin-1107-g4c8e6f4

How reproducible:

Easy to reproduce every time.

Steps to Reproduce:
1. On the master, create a project, pods, and a service:
oc new-project project1
oc create -f ./hello-service1.json
oc create -f ./hello-service-pods1.json
[root@dhcp-41-152 Create_Router]# oc get pods
NAME                 READY     STATUS    RESTARTS   AGE
hello-openshift1-1   1/1       Running   0          25s
hello-openshift1-2   1/1       Running   0          25s
hello-openshift1-3   1/1       Running   0          25s
[root@dhcp-41-152 Create_Router]# oc get services
NAME             CLUSTER_IP     EXTERNAL_IP   PORT(S)    SELECTOR                AGE
hello-service1   172.30.58.95   <none>        8080/TCP   name=hello-openshift1   32s
[root@dhcp-41-152 Create_Router]# 

2. On the node, use nsenter to enter the container's network namespace:
[root@dhcp-41-117 ~]# docker ps
CONTAINER ID        IMAGE                              COMMAND              CREATED             STATUS              PORTS               NAMES
d9fb1764afb7        openshift/hello-openshift:v1.0.6   "/hello-openshift"   53 seconds ago      Up 50 seconds                           k8s_hello-openshift1.d10d4426_hello-openshift1-1_project1_d162f43a-d5ae-11e5-a360-52540006a8eb_9e3914bf
b039bb514cfd        openshift3/ose-pod:v3.1.1.6        "/pod"               56 seconds ago      Up 53 seconds                           k8s_POD.5983fd1a_hello-openshift1-1_project1_d162f43a-d5ae-11e5-a360-52540006a8eb_e2d98129
[root@dhcp-41-117 ~]# docker inspect d9fb1764afb7 | grep Pid
        "Pid": 111221,
        "PidMode": "",
[root@dhcp-41-117 ~]# nsenter -n -t 111221 sh
sh-4.2# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
267: eth0@if268: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP 
    link/ether 02:42:0a:01:00:82 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.1.0.130/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:aff:fe01:82/64 scope link 
       valid_lft forever preferred_lft forever
sh-4.2# 

3. On the master, delete project1:
[root@dhcp-41-152 Create_Router]# oc delete project project1
project "project1" deleted
[root@dhcp-41-152 Create_Router]# oc get services
[root@dhcp-41-152 Create_Router]# oc get pods
[root@dhcp-41-152 Create_Router]# 

4. On the node, the shell started via nsenter is still alive:
sh-4.2# 
sh-4.2# 
sh-4.2# 
sh-4.2# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
sh-4.2# 
sh-4.2# 

Actual results:
The shell started via nsenter into the container is still alive.

Expected results:
After the project is deleted, the container should be deleted automatically and the shell session into that container should be terminated as well.

Additional info:

Comment 1 Derek Carr 2016-02-19 16:56:52 UTC
Can you confirm that the container is eventually deleted once the kubelet observes the pod is purged?
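For example, on the node, a check along these lines (illustrative; adjust the filter as needed) should eventually come back empty once the kubelet has cleaned the container up:

docker ps -a | grep hello-openshift1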

Comment 2 Weibin Liang 2016-02-22 15:11:16 UTC
Just checked: after 15 minutes, the console logged into that container is still alive; I can still run the ip a command.

sh-4.2# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:c6:5b:a8 brd ff:ff:ff:ff:ff:ff
    inet 10.18.41.225/23 brd 10.18.41.255 scope global dynamic eth0
       valid_lft 69946sec preferred_lft 69946sec
    inet6 2620:52:0:1228:5054:ff:fec6:5ba8/64 scope global noprefixroute dynamic 
       valid_lft 2591826sec preferred_lft 604626sec
    inet6 fe80::5054:ff:fec6:5ba8/64 scope link 
       valid_lft forever preferred_lft forever
3: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN 
    link/ether e2:f2:f3:28:3c:04 brd ff:ff:ff:ff:ff:ff
5: br0: <BROADCAST,MULTICAST> mtu 1450 qdisc noop state DOWN 
    link/ether ee:5b:1c:fc:d1:4c brd ff:ff:ff:ff:ff:ff
7: lbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP 
    link/ether 5a:03:a6:4d:6d:05 brd ff:ff:ff:ff:ff:ff
    inet 10.1.1.1/24 scope global lbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::4432:b1ff:feea:a111/64 scope link 
       valid_lft forever preferred_lft forever
8: vovsbr@vlinuxbr: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UP 
    link/ether ee:2a:05:8c:78:8e brd ff:ff:ff:ff:ff:ff
    inet6 fe80::ec2a:5ff:fe8c:788e/64 scope link 
       valid_lft forever preferred_lft forever
9: vlinuxbr@vovsbr: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master lbr0 state UP 
    link/ether 5a:03:a6:4d:6d:05 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5803:a6ff:fe4d:6d05/64 scope link 
       valid_lft forever preferred_lft forever
10: tun0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN 
    link/ether ae:37:24:3d:30:0c brd ff:ff:ff:ff:ff:ff
    inet 10.1.1.1/24 scope global tun0
       valid_lft forever preferred_lft forever
    inet6 fe80::ac37:24ff:fe3d:300c/64 scope link 
       valid_lft forever preferred_lft forever
sh-4.2#

Comment 3 Derek Carr 2016-02-24 15:58:15 UTC
This is not a bug.

The nsenter command runs a new program in the namespaces of another process, but the life of your program is not tied to the container's primary process (i.e., PID 1).  There are other limitations with nsenter, most notably that your program is not launched in the same cgroup as the container, so it has no resource constraints.

The proper way to enter a container is docker exec, or its remote wrappers kubectl exec and oc exec.  A command started that way only runs while the container's primary process (PID 1) is running.

In the scenario described, the pod is deleted.  If you run docker ps on the host, you can see the container is also deleted, but since the program you launched via nsenter is not tied to PID 1 of the container, it is still running, as expected.  If your scenario had used docker exec, or its kubectl or oc equivalents, your program would have been killed as expected when the pod was deleted.
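For example, instead of nsenter, something along these lines (a sketch assuming the standard exec syntax; flags may vary by client version) would have tied the shell to the container's lifecycle:

# on the node, directly against the application container:
docker exec -it d9fb1764afb7 sh

# or remotely, against the pod:
oc exec -it hello-openshift1-1 -- sh
kubectl exec -it hello-openshift1-1 -- sh

A shell started this way exits as soon as the container's PID 1 stops, for example when the project and its pods are deleted.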

