Bug 1386674 - [3.2] installer fails to deploy docker-registry.
Summary: [3.2] installer fails to deploy docker-registry.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.2.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Scott Dodson
QA Contact: Johnny Liu
URL:
Whiteboard:
Duplicates: 1387212
Depends On:
Blocks:
Reported: 2016-10-19 12:26 UTC by Johnny Liu
Modified: 2016-10-21 19:55 UTC
CC List: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-10-21 19:29:56 UTC
Target Upstream Version:
Flags: sdodson: needinfo-


Attachments:

Description Johnny Liu 2016-10-19 12:26:53 UTC
Description of problem:
After the installation finished, the docker-registry pod does not start.
# oc get po
NAME                       READY     STATUS             RESTARTS   AGE
docker-registry-3-deploy   1/1       Running            0          20s
docker-registry-3-k7dvt    0/1       CrashLoopBackOff   1          15s


# oc describe po docker-registry-3-k7dvt
...
Events:
  FirstSeen	LastSeen	Count	From					SubobjectPath			Type		Reason		Message
  ---------	--------	-----	----					-------------			--------	------		-------
  25s		25s		1	{default-scheduler }							Normal		Scheduled	Successfully assigned docker-registry-3-k7dvt to ip-172-18-7-200.ec2.internal
  23s		23s		1	{kubelet ip-172-18-7-200.ec2.internal}	spec.containers{registry}	Normal		Started		Started container with docker id ae180146bb55
  23s		23s		1	{kubelet ip-172-18-7-200.ec2.internal}	spec.containers{registry}	Normal		Created		Created container with docker id ae180146bb55
  22s		22s		1	{kubelet ip-172-18-7-200.ec2.internal}	spec.containers{registry}	Normal		Started		Started container with docker id a5aec089bf48
  22s		22s		1	{kubelet ip-172-18-7-200.ec2.internal}	spec.containers{registry}	Normal		Created		Created container with docker id a5aec089bf48
  21s		17s		3	{kubelet ip-172-18-7-200.ec2.internal}					Warning		FailedSync	Error syncing pod, skipping: failed to "StartContainer" for "registry" with CrashLoopBackOff: "Back-off 10s restarting failed container=registry pod=docker-registry-3-k7dvt_default(be58637a-95db-11e6-8ddd-0e12f384e872)"

  24s	4s	3	{kubelet ip-172-18-7-200.ec2.internal}	spec.containers{registry}	Normal	Pulled		Container image "registry.xxxx/openshift3/ose-docker-registry:v3.2.1.17" already present on machine
  3s	3s	1	{kubelet ip-172-18-7-200.ec2.internal}	spec.containers{registry}	Normal	Created		Created container with docker id 2e918bfec4d1
  3s	3s	1	{kubelet ip-172-18-7-200.ec2.internal}	spec.containers{registry}	Normal	Started		Started container with docker id 2e918bfec4d1
  21s	1s	5	{kubelet ip-172-18-7-200.ec2.internal}	spec.containers{registry}	Warning	BackOff		Back-off restarting failed docker container
  2s	1s	2	{kubelet ip-172-18-7-200.ec2.internal}					Warning	FailedSync	Error syncing pod, skipping: failed to "StartContainer" for "registry" with CrashLoopBackOff: "Back-off 20s restarting failed container=registry pod=docker-registry-3-k7dvt_default(be58637a-95db-11e6-8ddd-0e12f384e872)"


# oc logs docker-registry-3-k7dvt
time="2016-10-19T05:10:23.373226018-04:00" level=info msg="version=v2.1.0+unknown" 
panic: unable to configure storage middleware (openshift): no storage middleware registered with name: openshift

goroutine 1 [running]:
github.com/docker/distribution/registry/handlers.NewApp(0x7f5af0c8fe58, 0x225a720, 0xc2080eec80, 0x7f5af0c8fe58)
	/builddir/build/BUILD/atomic-openshift-git-0.6d01b60/_thirdpartyhacks/src/github.com/docker/distribution/registry/handlers/app.go:134 +0xd68
github.com/openshift/origin/pkg/cmd/dockerregistry.Execute(0x7f5af0c82818, 0xc20802a788)
	/builddir/build/BUILD/atomic-openshift-git-0.6d01b60/_build/src/github.com/openshift/origin/pkg/cmd/dockerregistry/dockerregistry.go:60 +0x4d0
main.main()
	/builddir/build/BUILD/atomic-openshift-git-0.6d01b60/_build/src/github.com/openshift/origin/cmd/dockerregistry/main.go:51 +0x3ea

goroutine 5 [chan receive]:
github.com/golang/glog.(*loggingT).flushDaemon(0x225b180)
	/builddir/build/BUILD/atomic-openshift-git-0.6d01b60/_thirdpartyhacks/src/github.com/golang/glog/glog.go:879 +0x78
created by github.com/golang/glog.init·1
	/builddir/build/BUILD/atomic-openshift-git-0.6d01b60/_thirdpartyhacks/src/github.com/golang/glog/glog.go:410 +0x2a7

goroutine 17 [syscall, locked to thread]:
runtime.goexit()
	/usr/lib/golang/src/runtime/asm_amd64.s:2232 +0x1

goroutine 13 [syscall]:
os/signal.loop()
	/usr/lib/golang/src/os/signal/signal_unix.go:21 +0x1f
created by os/signal.init·1
	/usr/lib/golang/src/os/signal/signal_unix.go:27 +0x35

goroutine 20 [runnable]:
github.com/docker/distribution/registry/handlers.func·009()
	/builddir/build/BUILD/atomic-openshift-git-0.6d01b60/_thirdpartyhacks/src/github.com/docker/distribution/registry/handlers/app.go:964
created by github.com/docker/distribution/registry/handlers.startUploadPurger
	/builddir/build/BUILD/atomic-openshift-git-0.6d01b60/_thirdpartyhacks/src/github.com/docker/distribution/registry/handlers/app.go:975 +0x942




# cat config.yml 
version: 0.1
log:
  level: debug
http:
  addr: :5000
storage:
  cache:
    blobdescriptor: inmemory
  filesystem:
    rootdirectory: /registry
  delete:
    enabled: true
auth:
  openshift:
    realm: openshift
    
    # tokenrealm is a base URL to use for the token-granting registry endpoint.
    # If unspecified, the scheme and host for the token redirect are determined from the incoming request.
    # If specified, a scheme and host must be chosen that all registry clients can resolve and access:
    #
    # tokenrealm: https://example.com:5000
middleware:
  registry:
    - name: openshift
  repository:
    - name: openshift
      options:
        acceptschema2: false
        pullthrough: true
        enforcequota: false
        projectcachettl: 1m
        blobrepositorycachettl: 10m
  storage:
    - name: openshift



Going into the docker-registry container, /config.yml is as follows:
sh-4.2# cat config.yml 
version: 0.1
log:
  level: debug
http:
  addr: :5000
storage:
  cache:
    blobdescriptor: inmemory
  filesystem:
    rootdirectory: /registry
  delete:
    enabled: true
auth:
  openshift:
    realm: openshift
    
    # tokenrealm is a base URL to use for the token-granting registry endpoint.
    # If unspecified, the scheme and host for the token redirect are determined from the incoming request.
    # If specified, a scheme and host must be chosen that all registry clients can resolve and access:
    #
    # tokenrealm: https://example.com:5000
middleware:
  registry:
    - name: openshift
  repository:
    - name: openshift
      options:
        acceptschema2: false
        pullthrough: true
        enforcequota: false
        projectcachettl: 1m
        blobrepositorycachettl: 10m
  storage:
    - name: openshift



Version-Release number of selected component (if applicable):
AtomicOpenShift-errata/3.2/2016-10-18.1
openshift3/ose-docker-registry:v3.2.1.17 (72cbf1ff926a)
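
A quick way to confirm which registry build a node is carrying (a sketch; it assumes the docker CLI is available on the node and reuses the redacted registry host from above) is to compare the local image id against the one listed here:

# docker images registry.xxxx/openshift3/ose-docker-registry
# docker inspect -f '{{.Id}}' registry.xxxx/openshift3/ose-docker-registry:v3.2.1.17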

How reproducible:
Always

Steps to Reproduce:
1. Install OCP 3.2 with the installer using the AtomicOpenShift-errata/3.2/2016-10-18.1 puddle.
2. After the installation finishes, check the docker-registry pod with "oc get po".

Actual results:
The docker-registry pod goes into CrashLoopBackOff with the panic shown above.

Expected results:
The docker-registry pod deploys and runs.

Additional info:

Comment 1 Johnny Liu 2016-10-20 16:03:23 UTC
It seems this issue also happens with the released ose-docker-registry images shipped in errata 25199.

@scott, I think we should rebuild this image to fix this bug and ship it ASAP, before customers encounter it and complain.

Comment 2 Scott Dodson 2016-10-20 16:15:25 UTC
Looks like the 2.4.1 rebase config file ended up in the image; looking into it.
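
One way to check which config file actually shipped in the image, without deploying it (a sketch; overriding the entrypoint with cat is an assumption about the image contents):

# docker run --rm --entrypoint cat registry.xxxx/openshift3/ose-docker-registry:v3.2.1.17 /config.yml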

Comment 8 Scott Dodson 2016-10-20 17:19:16 UTC
Workaround: edit the registry deployment config to set the image tag to ':v3.2.1.15'. There are no fixes in v3.2.1.17, so downgrading should not be a problem.
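
For reference, one possible way to apply that workaround (a sketch; it assumes the default "docker-registry" deployment config with a container named "registry" in the "default" project, and the redacted registry host from the report):

# oc patch dc/docker-registry -n default -p '{"spec":{"template":{"spec":{"containers":[{"name":"registry","image":"registry.xxxx/openshift3/ose-docker-registry:v3.2.1.15"}]}}}}'
# oc deploy docker-registry -n default --latest

The second command only matters if the config change does not trigger a new deployment on its own.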

Comment 12 Scott Dodson 2016-10-21 01:53:35 UTC
We have changed the v3.2.1.17 tag to point at a working image. If you're running v3.2.1.17 and your registry is failing to start, please re-pull the image and ensure the image id is 1ef87a90d83a.
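
A sketch of that check on an affected node (assumptions: the docker CLI is available on the node, the redacted registry host from the report is used, and deleting the failing pod lets it be recreated with the re-pulled image):

# docker pull registry.xxxx/openshift3/ose-docker-registry:v3.2.1.17
# docker inspect -f '{{.Id}}' registry.xxxx/openshift3/ose-docker-registry:v3.2.1.17
# oc delete pod -l deploymentconfig=docker-registry -n default

The inspect output should contain the id 1ef87a90d83a (the bad build was 72cbf1ff926a).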

Comment 14 Josep 'Pep' Turro Mauri 2016-10-21 10:53:07 UTC
*** Bug 1387212 has been marked as a duplicate of this bug. ***

Comment 15 Scott Dodson 2016-10-21 19:29:56 UTC
Closing this bug as the problematic image has been removed and multiple parties have verified the fix.

