Bug 1738690 - gcp e2e failing: [sig-auth] [Feature:NodeAuthenticator] The kubelet's main port 10250 should reject requests with no credentials [Suite:openshift/conformance/parallel] [Suite:k8s]
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Node
Version: 4.2.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.2.0
Assignee: Seth Jennings
QA Contact: Sunil Choudhary
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-08-07 21:16 UTC by Cesar Wong
Modified: 2019-10-16 06:35 UTC
CC: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-10-16 06:35:15 UTC
Target Upstream Version:
Embargoed:




Links:
Github openshift/origin pull 23577 (status: closed): Bug 1738690: UPSTREAM: 76637: Add missing node.address != "" condition in tests. Last updated: 2020-05-19 22:24:12 UTC
Red Hat Product Errata RHBA-2019:2922. Last updated: 2019-10-16 06:35:25 UTC

Description Cesar Wong 2019-08-07 21:16:34 UTC
Description of problem:
The following e2e test fails on GCP:
[sig-auth] [Feature:NodeAuthenticator] The kubelet's main port 10250 should reject requests with no credentials [Suite:openshift/conformance/parallel] [Suite:k8s]

Version-Release number of selected component (if applicable):
4.2

How reproducible:
Always

Steps to Reproduce:
Run the e2e test against a GCP cluster.

Actual results:
Test fails

Expected results:
Test succeeds

Additional info:

This test assumes that nodes report external addresses. On a GCP 4.2 cluster those address entries are empty, so the test builds a URL with no host (https://:10250/metrics) and curl fails with RC=6 (could not resolve host):
https://github.com/openshift/origin/blob/c1e9f01c0d94615ad729e786ed9eb6e063834cd6/vendor/k8s.io/kubernetes/test/e2e/auth/node_authn.go#L45
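
For context, curl exit code 6 is CURLE_COULDNT_RESOLVE_HOST. A minimal sketch of how the empty address produces that failure, assuming the test interpolates the node address directly into the kubelet URL (the variable names here are illustrative, not the test's actual code):

package main

import "fmt"

func main() {
	// On a GCP 4.2 cluster the node's ExternalIP entry comes back empty.
	nodeIP := ""
	// Interpolating an empty host yields "https://:10250/metrics".
	url := fmt.Sprintf("https://%s:%d/metrics", nodeIP, 10250)
	fmt.Println(url)
	// curl against a host-less URL exits with code 6
	// (CURLE_COULDNT_RESOLVE_HOST), matching the "rc: 6" in the log below.
}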

Failure log:
Aug  6 19:12:48.905: INFO: Fetching cloud provider for "gce"
I0806 19:12:48.905647   15786 gce.go:877] Using DefaultTokenSource &oauth2.reuseTokenSource{new:jwt.jwtSource{ctx:(*context.emptyCtx)(0xc0000ec018), conf:(*jwt.Config)(0xc0022cdd80)}, mu:sync.Mutex{state:0, sema:0x0}, t:(*oauth2.Token)(nil)}
I0806 19:12:48.966167   15786 gce.go:877] Using DefaultTokenSource &oauth2.reuseTokenSource{new:jwt.jwtSource{ctx:(*context.emptyCtx)(0xc0000ec018), conf:(*jwt.Config)(0xc0033caf80)}, mu:sync.Mutex{state:0, sema:0x0}, t:(*oauth2.Token)(nil)}
I0806 19:12:48.987833   15786 gce.go:877] Using DefaultTokenSource &oauth2.reuseTokenSource{new:jwt.jwtSource{ctx:(*context.emptyCtx)(0xc0000ec018), conf:(*jwt.Config)(0xc0033cb180)}, mu:sync.Mutex{state:0, sema:0x0}, t:(*oauth2.Token)(nil)}
W0806 19:12:49.013044   15786 gce.go:475] No network name or URL specified.
Aug  6 19:12:49.733: INFO: lookupDiskImageSources: gcloud error with [[]string{"instance-groups", "list-instances", "", "--format=get(instance)"}]; err:exit status 1
Aug  6 19:12:49.733: INFO:  > ERROR: (gcloud.compute.instance-groups.list-instances) could not parse resource []
Aug  6 19:12:49.733: INFO:  > 
Aug  6 19:12:49.733: INFO: Cluster image sources lookup failed: exit status 1

Aug  6 19:12:49.733: INFO: >>> kubeConfig: /tmp/cluster/admin.kubeconfig
Aug  6 19:12:49.735: INFO: Waiting up to 30m0s for all (but 100) nodes to be schedulable
Aug  6 19:12:49.764: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug  6 19:12:49.789: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Aug  6 19:12:49.789: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Aug  6 19:12:49.789: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Aug  6 19:12:49.797: INFO: e2e test version: v1.14.0+8e63b6d
Aug  6 19:12:49.798: INFO: kube-apiserver version: v1.14.0+8e63b6d
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:73
[BeforeEach] [sig-auth] [Feature:NodeAuthenticator]
  /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Aug  6 19:12:49.800: INFO: >>> kubeConfig: /tmp/cluster/admin.kubeconfig
STEP: Building a namespace api object, basename node-authn
Aug  6 19:12:49.884: INFO: About to run a Kube e2e test, ensuring namespace is privileged
Aug  6 19:12:49.998: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-auth] [Feature:NodeAuthenticator]
  /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/auth/node_authn.go:37
[It] The kubelet's main port 10250 should reject requests with no credentials [Suite:openshift/conformance/parallel] [Suite:k8s]
  /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/auth/node_authn.go:56
Aug  6 19:12:54.081: INFO: Running '/usr/bin/kubectl --server=https://api.cewong.installer-dev.cesarwong.com:6443 --kubeconfig=/tmp/cluster/admin.kubeconfig exec --namespace=e2e-node-authn-7837 test-node-authn-h7c5r -- /bin/sh -x -c curl -sIk -o /dev/null -w '%{http_code}' https://:10250/metrics'
Aug  6 19:12:54.468: INFO: rc: 6
Aug  6 19:12:54.468: INFO: stdout: 
Aug  6 19:12:54.468: INFO: Unexpected error occurred: error running &{/usr/bin/kubectl [kubectl --server=https://api.cewong.installer-dev.cesarwong.com:6443 --kubeconfig=/tmp/cluster/admin.kubeconfig exec --namespace=e2e-node-authn-7837 test-node-authn-h7c5r -- /bin/sh -x -c curl -sIk -o /dev/null -w '%{http_code}' https://:10250/metrics] []  <nil> 000 + curl -sIk -o /dev/null -w %{http_code} https://:10250/metrics
command terminated with exit code 6
 [] <nil> 0xc003826e40 exit status 6 <nil> <nil> true [0xc000a1bbe8 0xc000a1bc00 0xc000a1bc18] [0xc000a1bbe8 0xc000a1bc00 0xc000a1bc18] [0xc000a1bbf8 0xc000a1bc10] [0x93f720 0x93f720] 0xc0023928a0 <nil>}:
Command stdout:
000
stderr:
+ curl -sIk -o /dev/null -w %{http_code} https://:10250/metrics
command terminated with exit code 6

error:
exit status 6

[AfterEach] [sig-auth] [Feature:NodeAuthenticator]
  /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Collecting events from namespace "e2e-node-authn-7837".
STEP: Found 4 events.
Aug  6 19:12:54.477: INFO: At 2019-08-06 19:12:50 +0000 UTC - event for test-node-authn-h7c5r: {default-scheduler } Scheduled: Successfully assigned e2e-node-authn-7837/test-node-authn-h7c5r to cewong-tdbqf-w-c-87n5p.c.openshift-dev-installer.internal
Aug  6 19:12:54.477: INFO: At 2019-08-06 19:12:52 +0000 UTC - event for test-node-authn-h7c5r: {kubelet cewong-tdbqf-w-c-87n5p.c.openshift-dev-installer.internal} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/hostexec:1.1" already present on machine
Aug  6 19:12:54.477: INFO: At 2019-08-06 19:12:53 +0000 UTC - event for test-node-authn-h7c5r: {kubelet cewong-tdbqf-w-c-87n5p.c.openshift-dev-installer.internal} Created: Created container test-node-authn
Aug  6 19:12:54.477: INFO: At 2019-08-06 19:12:53 +0000 UTC - event for test-node-authn-h7c5r: {kubelet cewong-tdbqf-w-c-87n5p.c.openshift-dev-installer.internal} Started: Started container test-node-authn
Aug  6 19:12:54.487: INFO: POD                    NODE                                                       PHASE    GRACE  CONDITIONS
Aug  6 19:12:54.487: INFO: test-node-authn-h7c5r  cewong-tdbqf-w-c-87n5p.c.openshift-dev-installer.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-06 19:12:50 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-06 19:12:53 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-06 19:12:53 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-06 19:12:50 +0000 UTC  }]
Aug  6 19:12:54.487: INFO: 
Aug  6 19:12:54.499: INFO: skipping dumping cluster info - cluster too large
Aug  6 19:12:54.499: INFO: Waiting up to 3m0s for all (but 100) nodes to be ready
STEP: Destroying namespace "e2e-node-authn-7837" for this suite.
Aug  6 19:13:42.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  6 19:13:44.559: INFO: namespace e2e-node-authn-7837 deletion completed in 50.052591143s
Aug  6 19:13:44.568: INFO: Running AfterSuite actions on all nodes
Aug  6 19:13:44.568: INFO: Running AfterSuite actions on node 1
fail [k8s.io/kubernetes/test/e2e/framework/util.go:3443]: Unexpected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/usr/bin/kubectl [kubectl --server=https://api.cewong.installer-dev.cesarwong.com:6443 --kubeconfig=/tmp/cluster/admin.kubeconfig exec --namespace=e2e-node-authn-7837 test-node-authn-h7c5r -- /bin/sh -x -c curl -sIk -o /dev/null -w '%{http_code}' https://:10250/metrics] []  <nil> 000 + curl -sIk -o /dev/null -w %{http_code} https://:10250/metrics\ncommand terminated with exit code 6\n [] <nil> 0xc003826e40 exit status 6 <nil> <nil> true [0xc000a1bbe8 0xc000a1bc00 0xc000a1bc18] [0xc000a1bbe8 0xc000a1bc00 0xc000a1bc18] [0xc000a1bbf8 0xc000a1bc10] [0x93f720 0x93f720] 0xc0023928a0 <nil>}:\nCommand stdout:\n000\nstderr:\n+ curl -sIk -o /dev/null -w %{http_code} https://:10250/metrics\ncommand terminated with exit code 6\n\nerror:\nexit status 6\n",
        },
        Code: 6,
    }
    error running &{/usr/bin/kubectl [kubectl --server=https://api.cewong.installer-dev.cesarwong.com:6443 --kubeconfig=/tmp/cluster/admin.kubeconfig exec --namespace=e2e-node-authn-7837 test-node-authn-h7c5r -- /bin/sh -x -c curl -sIk -o /dev/null -w '%{http_code}' https://:10250/metrics] []  <nil> 000 + curl -sIk -o /dev/null -w %{http_code} https://:10250/metrics
    command terminated with exit code 6
     [] <nil> 0xc003826e40 exit status 6 <nil> <nil> true [0xc000a1bbe8 0xc000a1bc00 0xc000a1bc18] [0xc000a1bbe8 0xc000a1bc00 0xc000a1bc18] [0xc000a1bbf8 0xc000a1bc10] [0x93f720 0x93f720] 0xc0023928a0 <nil>}:
    Command stdout:
    000
    stderr:
    + curl -sIk -o /dev/null -w %{http_code} https://:10250/metrics
    command terminated with exit code 6
    
    error:
    exit status 6
    
occurred

failed: (57s) 2019-08-06T19:13:44 "[sig-auth] [Feature:NodeAuthenticator] The kubelet's main port 10250 should reject requests with no credentials [Suite:openshift/conformance/parallel] [Suite:k8s]"

Comment 1 Seth Jennings 2019-08-08 18:17:05 UTC
This is fixed by upstream https://github.com/kubernetes/kubernetes/pull/76637
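
The backport title ("Add missing node.address != "" condition in tests") suggests the fix skips empty address entries when collecting node IPs, so the test falls back to a usable address instead of building a host-less URL. A hedged sketch of that guard; the helper name and surrounding structure are assumptions, not the upstream diff:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// getNodeAddresses returns the node's addresses of the given type,
// skipping empty entries. The addr.Address != "" check is the condition
// the fix adds; without it an empty ExternalIP is returned as a usable
// host and ends up in the curl URL.
func getNodeAddresses(node *v1.Node, addressType v1.NodeAddressType) []string {
	var ips []string
	for _, addr := range node.Status.Addresses {
		if addr.Type == addressType && addr.Address != "" {
			ips = append(ips, addr.Address)
		}
	}
	return ips
}

func main() {
	// A node shaped like the GCP 4.2 case: empty ExternalIP, real InternalIP.
	node := &v1.Node{Status: v1.NodeStatus{Addresses: []v1.NodeAddress{
		{Type: v1.NodeExternalIP, Address: ""},
		{Type: v1.NodeInternalIP, Address: "10.0.0.5"},
	}}}
	fmt.Println(getNodeAddresses(node, v1.NodeExternalIP)) // [] -> caller falls back
	fmt.Println(getNodeAddresses(node, v1.NodeInternalIP)) // [10.0.0.5]
}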

Comment 4 errata-xmlrpc 2019-10-16 06:35:15 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2922

