Bug 1552613 - Failed to start pods consuming Config Maps as volumes at OCP 3.4.1.44.38
Summary: Failed to start pods consuming Config Maps as volumes at OCP 3.4.1.44.38
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 3.4.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 3.4.z
Assignee: Jan Safranek
QA Contact: Wenqi He
URL:
Whiteboard:
Keywords:
Depends On:
Blocks:
Reported: 2018-03-07 12:44 UTC by hgomes
Modified: 2018-04-30 05:26 UTC
CC: 8 users

Clone Of:
Last Closed: 2018-04-30 05:26:47 UTC


Attachments


External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2018:1237 None None None 2018-04-30 05:26 UTC

Description hgomes 2018-03-07 12:44:12 UTC
[Environment]
OCP v3.4.1.44.38
Deployed on OpenStack.


# Replicate the issue.
Create a ConfigMap and a DeploymentConfig using the attached template (oc new-app -f redis.yaml). The template is derived from the 3scale template (https://github.com/3scale/3scale-amp-openshift-templates/blob/master/amp/amp.yml).

# Expected result.
The pod is created with the ConfigMap mounted on path /etc/redis.conf.

# Actual result.
The pod fails with a RunContainerError; older nodes are unaffected.

Using the oc command I can get this output:
$ oc get pod --show-all=false -o wide
NAME                     READY     STATUS              RESTARTS   AGE       IP            NODE
backend-redis-2-deploy   1/1       Running             0          6m        10.129.6.23   ocp-node-jsq3i4v7.infra.paas.mef.gov.it
backend-redis-2-i64ns    0/1       RunContainerError   0          6m        10.130.4.14   ocp-node-hi763t90.infra.paas.mef.gov.it

The node log shows these errors:

Feb 20 15:16:11   atomic-openshift-node: E0220 15:16:11.339588  130540 kubelet.go:1247] failed to mkdir:/var/lib/origin/openshift.local.volumes/pods/3b486110-1647-11e8-a059-fa163e79fd96/volumes/kubernetes.io~configmap/redis-config/redis.conf
Feb 20 15:16:11   atomic-openshift-node: E0220 15:16:11.339615  130540 docker_manager.go:2265] container start failed: RunContainerError: GenerateRunContainerOptions: mkdir /var/lib/origin/openshift.local.volumes/pods/3b486110-1647-11e8-a059-fa163e79fd96/volumes/kubernetes.io~configmap/redis-config/redis.conf: not a directory
Feb 20 15:16:11   atomic-openshift-node: E0220 15:16:11.339671  130540 pod_workers.go:184] Error syncing pod 3b486110-1647-11e8-a059-fa163e79fd96, skipping: failed to "StartContainer" for "backend-redis" with RunContainerError: "GenerateRunContainerOptions: mkdir /var/lib/origin/openshift.local.volumes/pods/3b486110-1647-11e8-a059-fa163e79fd96/volumes/kubernetes.io~configmap/redis-config/redis.conf: not a directory"
Feb 20 15:16:11   atomic-openshift-node: I0220 15:16:11.339953  130540 request.go:544] Request Body: "{\"count\":18,\"lastTimestamp\":\"2018-02-20T14:16:11Z\"}"
Feb 20 15:16:11   atomic-openshift-node: I0220 15:16:11.340012  130540 round_trippers.go:299] curl -k -v -XPATCH  -H "Accept: application/vnd.kubernetes.protobuf,application/json" -H "Content-Type: application/strategic-merge-patch+json" -H "User-Agent: openshift/v1.4.0+776c994 (linux/amd64) kubernetes/a9e9cf3" https://ocp-web.infra.paas.mef.gov.it:8443/api/v1/namespaces/test/events/backend-redis-2-i64ns.15150e07aae1b5fa
Feb 20 15:16:11   atomic-openshift-node: I0220 15:16:11.340160  130540 server.go:608] Event(api.ObjectReference{Kind:"Pod", Namespace:"test", Name:"backend-redis-2-i64ns", UID:"3b486110-1647-11e8-a059-fa163e79fd96", APIVersion:"v1", ResourceVersion:"17451146", FieldPath:""}): type: 'Warning' reason: 'FailedSync' Error syncing pod, skipping: failed to "StartContainer" for "backend-redis" with RunContainerError: "GenerateRunContainerOptions: mkdir /var/lib/origin/openshift.local.volumes/pods/3b486110-1647-11e8-a059-fa163e79fd96/volumes/kubernetes.io~configmap/redis-config/redis.conf: not a directory"


Notes:
It works fine on OCP 3.4.1.44.17.

Comment 2 Seth Jennings 2018-03-07 17:04:10 UTC
This is likely due to an interaction between the redis.conf volume mount using subPath
https://github.com/3scale/3scale-amp-openshift-templates/blob/master/amp/amp.yml#L333

and this commit which was backported to 3.4
https://github.com/openshift/origin/pull/13895

to fix this bz
https://bugzilla.redhat.com/show_bug.cgi?id=1445526

Paul, could you take a look?

Comment 3 Seth Jennings 2018-03-07 17:10:56 UTC
The template seems weird though.

          volumeMounts:
          - name: redis-config
            mountPath: /etc/redis.conf
            subPath: redis.conf
        volumes:
...
        - name: redis-config
          configMap:
            name: redis-config
            items:
            - key: redis.conf
              path: redis.conf

It could be an issue with the template that this fix just exposed.  It seems like a hack to mount a file into /etc without mounting over the top of /etc.
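For comparison, the usual way to avoid the single-file-into-/etc hack is to mount the ConfigMap at its own directory and point the application at the file inside it. A minimal sketch reusing the names from the snippet above (the /etc/redis mount path is an assumption, and redis would then need to be configured to read /etc/redis/redis.conf):

```yaml
# Sketch: mount the whole ConfigMap at a dedicated directory instead of
# using subPath to place a single file directly in /etc.
          volumeMounts:
          - name: redis-config
            mountPath: /etc/redis        # directory mount, no subPath
        volumes:
        - name: redis-config
          configMap:
            name: redis-config
            items:
            - key: redis.conf
              path: redis.conf           # appears as /etc/redis/redis.conf
```

This avoids the single-file mount point that the kubelet was treating as a directory in the errors above, at the cost of changing the path the container reads.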

Comment 4 Seth Jennings 2018-03-07 21:13:02 UTC
I do agree that https://github.com/kubernetes/kubernetes/pull/45623 should fix this.

Comment 9 Jan Safranek 2018-03-16 14:42:06 UTC
3.4 PR: https://github.com/openshift/ose/pull/1138

Comment 13 Wenqi He 2018-04-17 03:36:17 UTC
Tested on the version below:
openshift v3.4.1.44.53
kubernetes v1.4.0+776c994

Following Jan's comment #8, the pod is running well:
# oc get pods
NAME                        READY     STATUS    RESTARTS   AGE
testpod                     1/1       Running   0          8s
# oc describe pods
Name:			testpod
Namespace:		default
Security Policy:	anyuid
Node:			ip-172-18-9-99.ec2.internal/172
Start Time:		Mon, 16 Apr 2018 23:33:18 -0400
Labels:			<none>
Status:			Running
IP:			10.128.0.14
Controllers:		<none>
Containers:
  backend-redis:
    Container ID:	docker://f5c48290466c8966257cf13101304a03db3f337552c2a25ca50aa68a3fdbf60e
    Image:		busybox
    Image ID:		docker-pullable://docker.io/busybox@sha256:58ac43b2cc92c687a32c8be6278e50a063579655fe3090125dcb2af0ff9e1a64
    Port:		
    Command:
      sh
      -c
      cat /etc/redis.conf; sleep 3600
    State:		Running
      Started:		Mon, 16 Apr 2018 23:33:23 -0400
    Ready:		True
    Restart Count:	0
    Volume Mounts:
      /etc/redis.conf from redis-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-fqd3m (ro)
    Environment Variables:	<none>
Conditions:
  Type		Status
  Initialized 	True 
  Ready 	True 
  PodScheduled 	True 
Volumes:
  redis-config:
    Type:	ConfigMap (a volume populated by a ConfigMap)
    Name:	redis-config
  default-token-fqd3m:
    Type:	Secret (a volume populated by a Secret)
    SecretName:	default-token-fqd3m
QoS Class:	BestEffort
Tolerations:	<none>
Events:
  FirstSeen	LastSeen	Count	From					SubobjectPath			Type		Reason	Message
  ---------	--------	-----	----					-------------			--------	------	-------
  16s		16s		1	{default-scheduler }							Normal		Scheduled	Successfully assigned testpod to ip-172-18-9-99.ec2.internal
  15s		15s		1	{kubelet ip-172-18-9-99.ec2.internal}	spec.containers{backend-redis}	Normal		Pulling	pulling image "busybox"
  11s		11s		1	{kubelet ip-172-18-9-99.ec2.internal}	spec.containers{backend-redis}	Normal		Pulled	Successfully pulled image "busybox"
  11s		11s		1	{kubelet ip-172-18-9-99.ec2.internal}	spec.containers{backend-redis}	Normal		Created	Created container with docker id f5c48290466c; Security:[seccomp=unconfined]
  11s		11s		1	{kubelet ip-172-18-9-99.ec2.internal}	spec.containers{backend-redis}	Normal		Started	Started container with docker id f5c48290466c

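For reference, a standalone pod spec consistent with the describe output above might look like this. The image, command, and volume names are taken from that output; the subPath line mirrors the template quoted in comment 3, and a redis-config ConfigMap with a redis.conf key is assumed to already exist in the namespace:

```yaml
# Minimal verification pod: mounts a single ConfigMap key as /etc/redis.conf
# via subPath, matching the "Volume Mounts" section of the describe output.
apiVersion: v1
kind: Pod
metadata:
  name: testpod
spec:
  containers:
  - name: backend-redis
    image: busybox
    command: ["sh", "-c", "cat /etc/redis.conf; sleep 3600"]
    volumeMounts:
    - name: redis-config
      mountPath: /etc/redis.conf
      subPath: redis.conf
  volumes:
  - name: redis-config
    configMap:
      name: redis-config
```

On the fixed build the container should print the ConfigMap contents and keep running; on the affected build this same spec triggered the "not a directory" RunContainerError.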
Comment 17 errata-xmlrpc 2018-04-30 05:26:47 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:1237

