Bug 1700294 - Newly created image-registry pod stays in ContainerCreating state because no PVC is available
Summary: Newly created image-registry pod stays in ContainerCreating state because no PVC is available
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Image Registry
Version: 4.1.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.1.0
Assignee: Alexey Gladkov
QA Contact: Wenjing Zheng
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2019-04-16 09:25 UTC by Wenjing Zheng
Modified: 2019-06-04 10:47 UTC
CC: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: the ReadWriteOnce access mode was accepted when the deployment had 1 replica. Consequence: when the user increased the number of replicas, the deployment could not use a PVC with the ReadWriteOnce access mode. Fix: always require the ReadWriteMany access mode. Result: the deployment can use the PVC with any number of replicas. (An example claim is sketched after this header block.)
Clone Of:
Environment:
Last Closed: 2019-06-04 10:47:37 UTC
Target Upstream Version:
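
For illustration, a claim that satisfies the fixed operator's check requests the ReadWriteMany access mode. A minimal sketch: the claim name registry-pvc comes from comment 3, while the size and the availability of an RWX-capable storage backend are assumptions, not values from this bug.

# Sketch of an RWX claim the fixed operator accepts; size is assumed.
oc create -n openshift-image-registry -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registry-pvc
spec:
  accessModes:
    - ReadWriteMany   # required by the fix, regardless of replica count
  resources:
    requests:
      storage: 100Gi  # assumed size
EOF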




Links:
Red Hat Product Errata RHBA-2019:0758 (last updated 2019-06-04 10:47:43 UTC)

Description Wenjing Zheng 2019-04-16 09:25:35 UTC
Description of problem:
When the registry storage backend uses a PVC and a change to the image-registry config triggers a new deployment, the newly created registry pod cannot start because the PVC is not available, failing with the errors below:
Events:
  Type     Reason              Age    From                     Message
  ----     ------              ----   ----                     -------
  Normal   Scheduled           3m15s  default-scheduler        Successfully assigned openshift-image-registry/image-registry-f6d697586-46vpv to compute-0
  Warning  FailedAttachVolume  3m15s  attachdetach-controller  Multi-Attach error for volume "pvc-d9e871b6-601a-11e9-861f-0050569bf97a" Volume is already used by pod(s) image-registry-5bdc5cdb79-hj62t
  Warning  FailedMount         72s    kubelet, compute-0       Unable to mount volumes for pod "image-registry-f6d697586-46vpv_openshift-image-registry(11020e55-6028-11e9-8891-0050569b0d3c)": timeout expired waiting for volumes to attach or mount for pod "openshift-image-registry"/"image-registry-f6d697586-46vpv". list of unmounted volumes=[registry-storage]. list of unattached volumes=[registry-storage registry-tls registry-certificates registry-token-rj8q7]
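
The Multi-Attach error is the expected symptom when the rolling deployment schedules the replacement pod on a different node while the old pod still holds a ReadWriteOnce volume. A quick check of the claim's access mode (a sketch; the claim name registry-pvc is taken from comment 3):

# Print the access modes on the registry claim.
oc get pvc registry-pvc -n openshift-image-registry -o jsonpath='{.spec.accessModes}'
# A ReadWriteOnce-only result explains the Multi-Attach failure whenever the
# new pod lands on a node other than the one already holding the volume.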


Version-Release number of selected component (if applicable):
4.0.0-0.nightly-2019-04-10-182914

How reproducible:
Always

Steps to Reproduce:
1. Configure the registry storage to use a PVC.
2. Make a change to the image-registry config that triggers a new deployment (a reproduction sketch follows this list).
3. Check the newly created pod.
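
A reproduction sketch under stated assumptions: spec.storage.pvc.claim and spec.replicas are fields of the imageregistry operator config, but the exact patch payloads below, including the log-level field used to trigger a rollout, are illustrative rather than taken from this report.

# 1. Point the registry at an existing PVC.
oc patch configs.imageregistry.operator.openshift.io cluster --type=merge \
  -p '{"spec":{"storage":{"pvc":{"claim":"registry-pvc"}}}}'
# 2. Make any spec change that triggers a new rollout (field name assumed).
oc patch configs.imageregistry.operator.openshift.io cluster --type=merge \
  -p '{"spec":{"logging":2}}'
# 3. Watch the replacement pod; with an RWO claim it hangs in ContainerCreating.
oc get pods -n openshift-image-registry -w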

Actual results:
The new pod is stuck in the ContainerCreating state:
$ oc get pods
NAME                                               READY   STATUS              RESTARTS   AGE
cluster-image-registry-operator-7d8ddc5d64-qz2t6   1/1     Running             0          27h
image-registry-5bdc5cdb79-hj62t                    1/1     Running             0          83m
image-registry-f6d697586-46vpv                     0/1     ContainerCreating   0          5m31s


Expected results:
The pod should be running.

Additional info:

Comment 3 Wenjing Zheng 2019-04-23 07:28:14 UTC
The validation error below appears if an RWO PVC is set in the registry config:
E0423 07:13:11.731723       1 controller.go:235] unable to sync: unable to sync storage configuration: PVC registry-pvc does not contain the necessary access mode (ReadWriteMany), requeuing
I0423 07:13:12.392246       1 controller.go:193] object changed: *v1.Config, Name=cluster (status=true)

Version: 4.1.0-0.nightly-2019-04-22-005054
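
The stricter validation follows from scaling: with more than one replica the pods can be scheduled on different nodes, and only a ReadWriteMany volume can be attached to all of them. A sketch of the scale-up that motivates the check (spec.replicas is a field of the operator config; the value 2 is an assumption):

# With replicas > 1 the registry pods may land on different nodes, which is
# why the operator now rejects claims that lack ReadWriteMany up front.
oc patch configs.imageregistry.operator.openshift.io cluster --type=merge \
  -p '{"spec":{"replicas":2}}'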

Comment 5 errata-xmlrpc 2019-06-04 10:47:37 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758

