Bug 1298849 - The re-encrypt route termination does not work as expected
Status: CLOSED CURRENTRELEASE
Product: OpenShift Container Platform
Classification: Red Hat
Component: Routing
Version: 3.1.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Assigned To: Maru Newby
QA Contact: zhaozhanqi
Keywords: Regression
Depends On:
Blocks:
Reported: 2016-01-15 04:26 EST by Meng Bo
Modified: 2016-01-29 15:58 EST (History)
CC List: 7 users

Doc Type: Bug Fix
Last Closed: 2016-01-29 15:58:15 EST
Type: Bug


Attachments
haproxy stats (274.80 KB, image/png)
2016-01-15 15:03 EST, Paul Weil

Description Meng Bo 2016-01-15 04:26:42 EST
Description of problem:
Create a pod, a secure service, and a re-encrypt route, then try to access the backend via the re-encrypt route. The request returns a 503 page.

Version-Release number of selected component (if applicable):
openshift v3.1.1.3
kubernetes v1.1.0-origin-1107-g4c8e6f4
haproxy-router image id: c022a86a526e

How reproducible:
always

Steps to Reproduce:
1. Modify the SCC to add the user to the privileged group
2. Create the pod, service, and re-encrypt route:
$ oc create -f https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/routing/reencrypt/list_for_reencrypt.json
3. Try to access it via the route after all the pods are ready

Actual results:
$ curl --resolve www.example2.com:443:10.14.6.137 https://www.example2.com/ -k
<html><body><h1>503 Service Unavailable</h1>
No server is available to handle this request.
</body></html>


Expected results:
Should be able to access the pod via the re-encrypt route.


Additional info:
pod,svc,route info:
# oc get po,svc,route
NAME                   READY              STATUS        RESTARTS      AGE
hello-nginx-docker     1/1                Running       0             44m
hello-nginx-docker-2   1/1                Running       0             44m
NAME                   CLUSTER_IP         EXTERNAL_IP   PORT(S)       SELECTOR                  AGE
hello-nginx            172.30.63.140      <none>        27443/TCP     name=hello-nginx-docker   44m
NAME                   HOST/PORT          PATH          SERVICE       LABELS                    INSECURE POLICY   TLS TERMINATION
route-reencrypt        www.example2.com                 hello-nginx                                               reencrypt



The following info was gathered from the router pod:
# cat os_reencrypt.map  
www.example2.com bmengp1_route-reencrypt

# cat haproxy.config
backend be_secure_bmengp1_route-reencrypt
  mode http
  option redispatch
  balance leastconn
  timeout check 5000ms
  cookie OPENSHIFT_REENCRYPT_bmengp1_route-reencrypt_SERVERID insert indirect nocache httponly secure

  server 10.1.2.167:443 10.1.2.167:443 ssl check inter 5000ms verify required ca-file /var/lib/containers/router/cacerts/bmengp1_route-reencrypt.pem cookie 10.1.2.167:443

  server 10.1.3.139:443 10.1.3.139:443 ssl check inter 5000ms verify required ca-file /var/lib/containers/router/cacerts/bmengp1_route-reencrypt.pem cookie 10.1.3.139:443

# curl https://10.1.2.167:443 -k
Hello World
# curl https://10.1.3.139:443 -k
Hello Test 222

# curl https://172.30.63.140:27443 -k
Hello Test 222
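The `verify required ca-file` directives in the haproxy backend above mean the router validates each pod's serving certificate against the route's destination CA bundle during the health-check handshake. A minimal offline sketch of that trust decision, using a throwaway CA and server cert generated on the fly (all filenames here are illustrative, not the actual router files):

```shell
# Toy CA plus a server cert it signs, to emulate the router's verify
# check without a live backend.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 2 -subj "/CN=Test CA" \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" 2>/dev/null
openssl req -newkey rsa:2048 -nodes -subj "/CN=www.example2.com" \
  -keyout "$dir/server.key" -out "$dir/server.csr" 2>/dev/null
openssl x509 -req -in "$dir/server.csr" -CA "$dir/ca.pem" \
  -CAkey "$dir/ca.key" -CAcreateserial -days 2 \
  -out "$dir/server.pem" 2>/dev/null
# This is roughly the trust decision haproxy makes during the handshake:
openssl verify -CAfile "$dir/ca.pem" "$dir/server.pem"
```

If `openssl verify` reports OK, the cert chains to the CA; with an expired or mismatched cert it reports an error instead, which is what the router's check surfaces as a down backend.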
Comment 1 Paul Weil 2016-01-15 15:03 EST
Created attachment 1115270: haproxy stats
Comment 2 Paul Weil 2016-01-15 15:07:53 EST
Hi Bo.  We had a recent issue where the certificates expired in the hello-nginx-docker project.  That was likely causing your backends to be disabled when the haproxy verify check was running.

I did a couple of things to test. First, I pulled one of the certs used in the pod definition to double-check its validity:

[vagrant@localhost ~]$ openssl x509 -in test.pem -noout -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 2 (0x2)
    Signature Algorithm: sha1WithRSAEncryption
        Issuer: C=US, ST=SC, L=Default City, O=Default Company Ltd, OU=Test CA, CN=www.exampleca.com/emailAddress=example@example.com
        Validity
            Not Before: Jan 13 13:34:02 2015 GMT
            Not After : Jan 13 13:34:02 2016 GMT
        Subject: CN=www.example2.com, ST=SC, C=SU/emailAddress=example@example.com, O=Example2, OU=Example2


Next, I rebuilt the pweil/hello-nginx-docker container after refreshing the chain cert. After that I ran your JSON file and looked at the stats URL (attached). The bmeng/hello-nginx-docker pod is marked as down but the pweil/hello-nginx-docker pod is not.

Then I ran your JSON with two of my containers, edited one of the index.html files via oc exec, and was able to successfully test a re-encrypt route round-robin between backends: https://gist.github.com/pweil-/60ea7d95adf6048b01d3

I've pushed the latest pweil/hello-nginx-docker container to Docker Hub if you want to test with it, and updated the chain cert on GitHub if you want to build your own version.
Comment 3 Paul Weil 2016-01-15 15:20:53 EST
Just for completeness: the reason the backend was marked down in haproxy was "Layer6 invalid response: SSL handshake failure", which led to looking at the pod certs that nginx was using.
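Since the root cause was an expired serving certificate, openssl's `-checkend` flag is a quick way to catch this class of failure before haproxy does. A sketch against a throwaway one-day self-signed cert (the filenames are illustrative; point the same commands at the pod's real cert):

```shell
# Create a self-signed cert valid for 1 day, purely for illustration.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=www.example2.com" \
  -keyout "$dir/demo.key" -out "$dir/demo.pem" 2>/dev/null
# Print the validity window (Not Before / Not After).
openssl x509 -in "$dir/demo.pem" -noout -dates
# -checkend N succeeds (exit 0) if the cert is still valid N seconds from now.
openssl x509 -in "$dir/demo.pem" -noout -checkend 0 \
  && echo "cert currently valid"
openssl x509 -in "$dir/demo.pem" -noout -checkend 172800 \
  || echo "cert expires within 2 days"
```

The same `-dates` check works against a live endpoint by piping `openssl s_client -connect <pod-ip>:443` into `openssl x509 -noout -dates`.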
Comment 4 Meng Bo 2016-01-18 02:21:25 EST
Paul, thanks very much.

It works well after pulling the latest image from pweil/hello-nginx-docker:

# curl --resolve www.example2.com:443:10.14.6.143 https://www.example2.com/ -k
Hello World


I checked my old haproxy status page; it shows the same error you described.
