Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1364218

Summary: containers from k8s are not registered properly
Product: Red Hat Enterprise Linux 7
Reporter: Qian Cai <qcai>
Component: oci-register-machine
Assignee: Daniel Walsh <dwalsh>
Status: CLOSED CURRENTRELEASE
QA Contact: Martin Jenner <mjenner>
Severity: high
Docs Contact:
Priority: high
Version: 7.2
CC: dwalsh, mpatel
Target Milestone: rc
Keywords: Extras
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-01-26 16:15:43 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Qian Cai 2016-08-04 18:07:04 UTC
Description of problem:
From the k8s master,
$ kubectl get pods
NAME        READY     STATUS    RESTARTS   AGE
glusterfs   1/1       Running   0          3h
systemd     1/1       Running   0          3h

From the k8s node where the actual containers are running,
# docker ps
CONTAINER ID        IMAGE                                COMMAND             CREATED             STATUS              PORTS               NAMES
6dccfff97e63        rhel7                                "/usr/sbin/init"    3 hours ago         Up 3 hours                              k8s_systemd.7c2be022_systemd_default_72e636e0-5a53-11e6-9506-fa163e07a2df_59e2d256
b12abb6f58a0        gcr.io/google_containers/pause:2.0   "/pause"            3 hours ago         Up 3 hours                              k8s_POD.6059dfa2_systemd_default_72e636e0-5a53-11e6-9506-fa163e07a2df_e8cc8ba6
70122ebd06ca        fedora/nginx                         "/usr/sbin/nginx"   3 hours ago         Up 3 hours                              k8s_glusterfs.6c8afde2_glusterfs_default_e37129dd-5a4f-11e6-9506-fa163e07a2df_812ff0c5
2dc1b9887f46        gcr.io/google_containers/pause:2.0   "/pause"            3 hours ago         Up 3 hours                              k8s_POD.6059dfa2_glusterfs_default_e37129dd-5a4f-11e6-9506-fa163e07a2df_0d2786c6

# machinectl 
MACHINE                          CLASS     SERVICE
2dc1b9887f4651272236c657669ebb19 container docker 
b12abb6f58a0981759393f98e62ed112 container docker

Both of the containers that were registered are pause containers.
# machinectl status 2dc1b9887f4651272236c657669ebb19
2dc1b9887f4651272236c657669ebb19(32646331623938383766343635313237)
           Since: Thu 2016-08-04 14:29:54 UTC; 3h 35min ago
          Leader: 62649 (pause)
         Service: docker; class container
            Root: /var/mnt/overlay/devicemapper/mnt/d7a3b2298dbac708254ae80c66a3
         Address: 172.17.0.2
                  fe80::42:acff:fe11:2
            Unit: docker-2dc1b9887f4651272236c657669ebb1961c7e23401bfe6774fcfebd
                  └─62649 /pause
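The mismatch above can be spotted mechanically by diffing the IDs Docker reports against the IDs systemd-machined knows about. A minimal sketch with the sample IDs from this report hard-coded; in real use you would feed it `docker ps -q` and the MACHINE column of `machinectl list --no-legend`, after normalizing both to the same length (machinectl shows 32 hex characters, `docker ps` shows 12, e.g. via `cut -c1-12`):

```shell
# Container IDs as Docker reports them (sample data from this bug).
docker_ids='6dccfff97e63
b12abb6f58a0
70122ebd06ca
2dc1b9887f46'

# Machine IDs as systemd-machined reports them, truncated to 12 chars.
machine_ids='2dc1b9887f46
b12abb6f58a0'

# grep -F treats each line of the pattern as a fixed string; -x matches
# whole lines; -v inverts, leaving only the unregistered containers.
unregistered=$(printf '%s\n' "$docker_ids" | grep -Fvx "$machine_ids")
echo "$unregistered"
```

For the sample data this prints the two non-pause containers (`6dccfff97e63` and `70122ebd06ca`), matching the symptom: only the pause containers were registered.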

Version-Release number of selected component (if applicable):
oci-register-machine-0-1.7.git31bbcd2.el7.x86_64
atomic host 7.2.6

How reproducible:
always

Comment 2 Daniel Walsh 2016-08-20 08:29:01 UTC
We are dropping oci-register-machine from RHEL7 for now. There is a bug in the Linux kernel that does not allow us to run docker in the host namespace. oci-register-machine does not work in this state, because systemd will not see the mount points inside the container, since it is not in docker's namespace.

As far as this bug is concerned, oci-register-machine is only going to register the first container run; all other containers are joining the initial container.

Mrunal, do you think the other containers even call into oci-register-machine? This will also cause oci-systemd-hook to not run properly, since it will not set up the containers correctly.
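For context, hooks like oci-register-machine are wired into the container via the OCI runtime spec's `hooks` section. A rough illustration of the shape (the path and args here are illustrative assumptions; the actual packaging and hook injection on RHEL may differ):

```json
{
  "hooks": {
    "prestart": [
      {
        "path": "/usr/libexec/oci/hooks.d/oci-register-machine",
        "args": ["oci-register-machine", "prestart"]
      }
    ],
    "poststop": [
      {
        "path": "/usr/libexec/oci/hooks.d/oci-register-machine",
        "args": ["oci-register-machine", "poststop"]
      }
    ]
  }
}
```

Per the OCI spec, each hook binary is executed in the runtime's namespace and receives the container state as JSON on stdin, which is why a hook running in a different mount namespace than the daemon can fail to resolve the container's rootfs.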

Comment 3 Mrunal Patel 2016-09-19 16:34:07 UTC
The hooks should be called for each container. I suspect that the fix for https://bugzilla.redhat.com/show_bug.cgi?id=1364237 isn't present in this setup.
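If the hooks are being called per container, each invocation receives that container's state JSON on stdin. A hypothetical stand-in for what a hook sees, using a container ID from this report (field extraction via `sed` is for illustration only; the real oci-register-machine parses the JSON properly):

```shell
# Sample OCI state JSON, as a runtime would pipe it to a hook's stdin.
state='{"ociVersion":"1.0.0","id":"6dccfff97e63","pid":62649}'

# Pull out the container id and the leader pid the hook would register
# with systemd-machined.
id=$(printf '%s' "$state" | sed -n 's/.*"id":"\([^"]*\)".*/\1/p')
pid=$(printf '%s' "$state" | sed -n 's/.*"pid":\([0-9]*\).*/\1/p')

echo "register machine $id, leader pid $pid"
```

If only the pause containers show up in `machinectl`, either the runtime is not invoking the hook for the other containers, or the hook exits early for them, which is what the fix referenced above addresses.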

Comment 4 Qian Cai 2017-01-26 15:03:57 UTC
Dan, do you think we should close this one since we are dropping oci-register-machine?

Comment 5 Daniel Walsh 2017-01-26 15:18:02 UTC
Actually I misspoke.  We were disabling the RHEL7 oci-register-machine by default, but we still want to support it.

I think the oci-register-machine should be working now.

Comment 6 Daniel Walsh 2017-01-26 15:18:48 UTC
I am going to mark this as fixed in the current release. Please check whether it is still broken.

Comment 7 Qian Cai 2017-01-26 15:43:06 UTC
Yes, this is working now as of version 7.3.2.