Bug 1312822 - When using S3 as storage, the integrated docker-registry pod can't run
Status: CLOSED NOTABUG
Product: OpenShift Origin
Classification: Red Hat
Component: Image Registry
Version: 3.x
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Assigned To: Michail Kargakis
QA Contact: Wei Sun
Reported: 2016-02-29 05:29 EST by zhou ying
Modified: 2016-03-16 11:50 EDT
CC: 3 users
Doc Type: Bug Fix
Type: Bug
Last Closed: 2016-03-16 11:50:56 EDT

Description zhou ying 2016-02-29 05:29:38 EST
Description of problem:
When using S3 as storage, the integrated docker-registry pod can't run and the registry deployment fails.

Version-Release number of selected component (if applicable):
openshift v1.1.3-319-g160c20f
kubernetes v1.2.0-alpha.7-703-gbc4550d
etcd 2.2.5

How reproducible:
always


Steps to Reproduce:
1. Use the config file to create the secret:
 `oc secrets new registry config.yml`
cat config.yml 
version: 0.1
log:
  level: debug
http:
  addr: :5000
storage:
  cache:
    layerinfo: inmemory
  s3:
    accesskey: XXXXXXXXX
    secretkey: YYYYYYYYYYYYYYYY 
    region: US
    bucket: s3://openshift-qe-registry-testing-bucket1/
    encrypt: true
    secure: true
    v4auth: true
    rootdirectory: /registry
auth:
  openshift:
    realm: openshift
middleware:
  repository:
    - name: openshift

2. Update the integrated docker-registry's dc to use S3 as storage:
  `oc env dc/docker-registry  REGISTRY_CONFIGURATION_PATH=/config/config.yml`
  `oc volume dc/docker-registry --add --name=config -m /config --type=secret --secret-name=registry`

3. Wait for the docker-registry to redeploy; the sketch below shows one way to watch it.
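
A minimal sketch for watching the rollout and pulling the failing pod's logs with standard oc commands (the pod name below is an example from this report and will differ per deployment):
 `oc get pods -w`
 `oc logs docker-registry-4-tcb4r`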

Actual results:
3. The registry deployment fails because the docker-registry pod can't run:
[root@ip-172-18-3-168 amd64]# oc get pods --config=openshift.local.config/master/admin.kubeconfig
NAME                       READY     STATUS             RESTARTS   AGE
docker-registry-1-0jn48    1/1       Running            0          51m
docker-registry-3-deploy   0/1       Error              0          37m
docker-registry-4-deploy   1/1       Running            0          6m
docker-registry-4-tcb4r    0/1       CrashLoopBackOff   5          5m


[root@ip-172-18-3-168 amd64]# oc describe pod docker-registry-4-tcb4r --config=openshift.local.config/master/admin.kubeconfig
Name:        docker-registry-4-tcb4r
Namespace:    default
Image(s):    openshift/origin-docker-registry:v1.1.3
Node:        ip-172-18-3-168.ec2.internal/172.18.3.168
Start Time:    Mon, 29 Feb 2016 04:34:41 -0500
Labels:        deployment=docker-registry-4,deploymentconfig=docker-registry,docker-registry=default
Status:        Running
Reason:        
Message:    
IP:        172.17.0.4
Controllers:    ReplicationController/docker-registry-4
Containers:
  registry:
    Container ID:    docker://8d5801bbb79ff64d1cdaa9b52df9711416c76aa05eb9117c0744b2fa09c53b4a
    Image:        openshift/origin-docker-registry:v1.1.3
    Image ID:        docker://f5cda3324e05764b6e07f7e42a6c94b83740eea127be542887d134550888fa57
    Port:        5000/TCP
    QoS Tier:
      cpu:        BestEffort
      memory:        BestEffort
    State:        Waiting
      Reason:        CrashLoopBackOff
    Last State:        Terminated
      Reason:        Error
      Exit Code:    2
      Started:        Mon, 29 Feb 2016 04:40:54 -0500
      Finished:        Mon, 29 Feb 2016 04:40:54 -0500
    Ready:        False
    Restart Count:    6
    Liveness:        http-get http://:5000/healthz delay=10s timeout=5s period=10s #success=1 #failure=3
    Readiness:        http-get http://:5000/healthz delay=0s timeout=5s period=10s #success=1 #failure=3
    Environment Variables:
Volumes:
  registry-storage:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:    
  config:
    Type:    Secret (a secret that should populate this volume)
    SecretName:    registry
  default-token-a6n2g:
    Type:    Secret (a secret that should populate this volume)
    SecretName:    default-token-a6n2g
Events:
  FirstSeen    LastSeen    Count    From                    SubobjectPath            Type        Reason        Message
  ---------    --------    -----    ----                    -------------            --------    ------        -------
  8m        8m        1    {default-scheduler }                            Normal        Scheduled    Successfully assigned docker-registry-4-tcb4r to ip-172-18-3-168.ec2.internal
  8m        8m        1    {kubelet ip-172-18-3-168.ec2.internal}    spec.containers{registry}    Normal        Created        Created container with docker id 42d4229b0962
  8m        8m        1    {kubelet ip-172-18-3-168.ec2.internal}    spec.containers{registry}    Normal        Started        Started container with docker id 42d4229b0962
  8m        8m        1    {kubelet ip-172-18-3-168.ec2.internal}    spec.containers{registry}    Normal        Created        Created container with docker id 0317481d9966
  8m        8m        1    {kubelet ip-172-18-3-168.ec2.internal}    spec.containers{registry}    Normal        Started        Started container with docker id 0317481d9966
  8m        8m        3    {kubelet ip-172-18-3-168.ec2.internal}                    Warning        FailedSync    Error syncing pod, skipping: failed to "StartContainer" for "registry" with CrashLoopBackOff: "Back-off 10s restarting failed container=registry pod=docker-registry-4-tcb4r_default(a8c874c5-dec7-11e5-94e4-0ea75da7eeb5)"
  8m    10s    42    {kubelet ip-172-18-3-168.ec2.internal}    spec.containers{registry}    Warning    BackOff    Back-off restarting failed docker container


[root@ip-172-18-3-168 amd64]# docker logs 8d5801bbb79f
time="2016-02-29T09:40:54.572892424Z" level=info msg="version=v2.1.0+unknown" 
panic: Invalid region provided: { { 0}   false false         { 0}  { 0}    }

goroutine 1 [running]:
github.com/docker/distribution/registry/handlers.NewApp(0x7f013ffc96b8, 0x1c4e280, 0xc20822fb80, 0x7f013ffc96b8)
    /go/src/github.com/openshift/origin/Godeps/_workspace/src/github.com/docker/distribution/registry/handlers/app.go:105 +0x3e1
github.com/openshift/origin/pkg/cmd/dockerregistry.Execute(0x7f013ffbd6f8, 0xc20803e390)
    /go/src/github.com/openshift/origin/pkg/cmd/dockerregistry/dockerregistry.go:55 +0x4d0
main.main()
    /go/src/github.com/openshift/origin/cmd/dockerregistry/main.go:43 +0x30a

goroutine 5 [chan receive]:
github.com/golang/glog.(*loggingT).flushDaemon(0x1c4ec00)
    /go/src/github.com/openshift/origin/Godeps/_workspace/src/github.com/golang/glog/glog.go:879 +0x78
created by github.com/golang/glog.init·1
    /go/src/github.com/openshift/origin/Godeps/_workspace/src/github.com/golang/glog/glog.go:410 +0x2a7

goroutine 15 [syscall]:
os/signal.loop()
    /usr/lib/golang/src/os/signal/signal_unix.go:21 +0x1f
created by os/signal.init·1
    /usr/lib/golang/src/os/signal/signal_unix.go:27 +0x35

Expected results:
The integrated docker-registry should run successfully.



Additional info:
Using the s3cmd tool, the same accesskey and secretkey can access the S3 bucket (see the invocation sketched below).
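
For reference, a hedged s3cmd invocation that exercises the same credentials and bucket (--access_key and --secret_key are standard s3cmd options):
 `s3cmd --access_key=XXXXXXXXX --secret_key=YYYYYYYYYYYYYYYY ls s3://openshift-qe-registry-testing-bucket1/`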
Comment 2 Michal Minar 2016-02-29 09:35:30 EST
You specified an invalid region. To see the supported ones, check out [1]. I don't see this documented anywhere. Which guide did you follow? Would it be sufficient to update it with the region info?

I don't consider panicking on an invalid configuration to be wrong.

[1] http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html
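
For illustration only, a corrected storage section as a sketch: it assumes the bucket lives in us-east-1 (substitute the bucket's actual region) and that the s3 driver expects the bare bucket name rather than an s3:// URL:

storage:
  cache:
    layerinfo: inmemory
  s3:
    accesskey: XXXXXXXXX
    secretkey: YYYYYYYYYYYYYYYY
    region: us-east-1                                # a valid AWS region code, not "US"
    bucket: openshift-qe-registry-testing-bucket1    # bare bucket name, no s3:// prefix
    encrypt: true
    secure: true
    v4auth: true
    rootdirectory: /registry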
Comment 3 zhou ying 2016-03-01 04:20:27 EST
Hi,
  I followed https://github.com/openshift/openshift-ansible/blob/master/playbooks/adhoc/s3_registry/s3_registry.j2.
When using the s3cmd tool with the default region (US), I can access the S3 bucket successfully.
Comment 4 Michal Minar 2016-03-01 09:00:36 EST
In the help of the s3cmd tool [1], I see:

    Region to create bucket in. As of now the regions are: us-east-1, us-west-1, us-west-2, eu-west-1, eu-central-1, ap-northeast-1, ap-southeast-1, ap-southeast-2, sa-east-1

Where do we refer to the `US` value as the one to use?

[1] https://github.com/s3tools/s3cmd/blob/d10f40954141bcf800f7f2e7d647cbd68c6979af/s3cmd#L2547
Comment 5 zhou ying 2016-03-02 00:28:20 EST
Hi Michal,
   Using 'us-east-1' as the region, the pod can run, but when pushing an image to the registry I met this error:
Received unexpected HTTP status: 500 Internal Server Error

The logs from container:
time="2016-03-02T05:07:55.24200209Z" level=debug msg="s3.GetContent(\"/docker/registry/v2/repositories/zhouy/test/_layers/sha256/a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4/link\")" go.version=go1.4.2 http.request.host="172.30.149.71:5000" http.request.id=33e1c683-ff39-4d8d-96de-581f7cb2db54 http.request.method=HEAD http.request.remoteaddr="172.18.9.22:35795" http.request.uri="/v2/zhouy/test/blobs/sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4" http.request.useragent="docker/1.9.1 go/go1.4.2 kernel/3.10.0-229.7.2.el7.x86_64 os/linux arch/amd64" instance.id=87f5d491-049a-44a3-b349-8bd8ad12f2e8 trace.duration=29.689949ms trace.file="/go/src/github.com/openshift/origin/Godeps/_workspace/src/github.com/docker/distribution/registry/storage/driver/base/base.go" trace.func="github.com/docker/distribution/registry/storage/driver/base.(*Base).GetContent" trace.id=3367a3e4-e2c1-4073-86bd-08fa1f2e2c87 trace.line=82 vars.digest="sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4" vars.name="zhouy/test" 
time="2016-03-02T05:07:55.242165259Z" level=error msg="response completed with error" err.code=UNKNOWN err.detail="s3: The request signature we calculated does not match the signature you provided. Check your key and signing method." err.message="unknown error" go.version=go1.4.2 http.request.host="172.30.149.71:5000" http.request.id=33e1c683-ff39-4d8d-96de-581f7cb2db54 http.request.method=HEAD http.request.remoteaddr="172.18.9.22:35795" http.request.uri="/v2/zhouy/test/blobs/sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4" http.request.useragent="docker/1.9.1 go/go1.4.2 kernel/3.10.0-229.7.2.el7.x86_64 os/linux arch/amd64" http.response.contenttype="application/json; charset=utf-8" http.response.duration=50.250752ms http.response.status=500 http.response.written=397 instance.id=87f5d491-049a-44a3-b349-8bd8ad12f2e8 vars.digest="sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4" vars.name="zhouy/test"
Comment 6 Michail Kargakis 2016-03-16 11:50:56 EDT
It seems this is not a bug and it is superseded by https://bugzilla.redhat.com/show_bug.cgi?id=1315948; I am closing this in favor of that bug.
