Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1428009

Summary: [3.5] Fix controller panic in creating pod event
Product: OpenShift Container Platform
Reporter: Eric Paris <eparis>
Component: Storage
Assignee: Bradley Childs <bchilds>
Status: CLOSED ERRATA
QA Contact: Chao Yang <chaoyang>
Severity: high
Docs Contact:
Priority: unspecified
Version: 3.5.0
CC: aos-bugs, aos-storage-staff, bchilds, eparis, hchen, hekumar, jhou, tdawson
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Story Points: ---
Clone Of: 1427227
Environment:
Last Closed: 2017-04-12 19:14:18 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1427227
Bug Blocks:

Comment 1 Troy Dawson 2017-03-01 22:50:14 UTC
This has been merged into OCP and is in OCP v3.5.0.37 or newer.

Comment 3 Chao Yang 2017-03-02 04:08:29 UTC
The error messages from the controller manager are now recorded as pod events, as shown below in the output of `oc describe pods mypod2`:
[root@ip-172-18-0-196 ~]# oc describe pods mypod2
Name:			mypod2
Namespace:		079ey
Security Policy:	anyuid
Node:			ip-172-18-13-228.ec2.internal/172.18.13.228
Start Time:		Wed, 01 Mar 2017 21:54:38 -0500
Labels:			<none>
Status:			Pending
IP:			
Controllers:		<none>
Containers:
  dynamic:
    Container ID:	
    Image:		aosqe/hello-openshift
    Image ID:		
    Port:		80/TCP
    State:		Waiting
      Reason:		ContainerCreating
    Ready:		False
    Restart Count:	0
    Volume Mounts:
      /mnt/aws from dynamic (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-f6q0z (ro)
    Environment Variables:	<none>
Conditions:
  Type		Status
  Initialized 	True 
  Ready 	False 
  PodScheduled 	True 
Volumes:
  dynamic:
    Type:	PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:	dynamic-pvc2-079ey
    ReadOnly:	false
  default-token-f6q0z:
    Type:	Secret (a volume populated by a Secret)
    SecretName:	default-token-f6q0z
QoS Class:	BestEffort
Tolerations:	<none>
Events:
  FirstSeen	LastSeen	Count	From					SubObjectPath	Type		Reason		Message
  ---------	--------	-----	----					-------------	--------	------		-------
  5m		5m		1	{default-scheduler }					Normal		Scheduled	Successfully assigned mypod2 to ip-172-18-13-228.ec2.internal
  5m		1m		10	{controller-manager }					Warning		FailedMount	Failed to attach volume "pvc-83be8025-fef3-11e6-91c9-0e6f67fe61f4" on node "ip-172-18-13-228.ec2.internal" with: Too many EBS volumes attached to node ip-172-18-13-228.ec2.internal.
  3m		1m		2	{kubelet ip-172-18-13-228.ec2.internal}			Warning		FailedMount	Unable to mount volumes for pod "mypod2_079ey(93a29c94-fef3-11e6-91c9-0e6f67fe61f4)": timeout expired waiting for volumes to attach/mount for pod "079ey"/"mypod2". list of unattached/unmounted volumes=[dynamic]
  3m		1m		2	{kubelet ip-172-18-13-228.ec2.internal}			Warning		FailedSync	Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "079ey"/"mypod2". list of unattached/unmounted volumes=[dynamic]

Testing per the steps in https://bugzilla.redhat.com/show_bug.cgi?id=1397693, no error message like "invalid memory address or nil pointer dereference" appears either.

Tested on openshift v3.5.0.37.
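For context: the panic fixed here was a nil pointer dereference hit while the controller built a pod event message (such as the FailedMount warnings above). A minimal Go sketch of the nil-guard pattern a fix of this class typically uses follows; the type and function names are illustrative only, not the actual Kubernetes code:

```go
package main

import "fmt"

// VolumeSpec loosely stands in for the volume metadata the controller
// references when formatting an event message. Illustrative only.
type VolumeSpec struct {
	Name string
}

// describeVolume returns a safe string for event messages even when the
// spec is nil. Without the guard, formatting the message for a pod whose
// volume spec was nil would dereference a nil pointer and panic with
// "invalid memory address or nil pointer dereference".
func describeVolume(spec *VolumeSpec) string {
	if spec == nil {
		return "<unknown volume>"
	}
	return spec.Name
}

func main() {
	// Nil spec: recorded as a placeholder instead of crashing the controller.
	fmt.Println(describeVolume(nil))
	// Normal case: the volume name flows into the event message as usual.
	fmt.Println(describeVolume(&VolumeSpec{Name: "pvc-83be8025"}))
}
```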

Comment 5 errata-xmlrpc 2017-04-12 19:14:18 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:0884