Bug 1695828 - [DOCS] Can't request audit logs from "nodes" (proxy broken).
Summary: [DOCS] Can't request audit logs from "nodes" (proxy broken).
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Documentation
Version: 4.1.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 4.1.0
Assignee: Michael Burke
QA Contact: zhou ying
Docs Contact: Vikram Goyal
URL:
Whiteboard:
Depends On:
Blocks: 1664187
 
Reported: 2019-04-03 19:06 UTC by Eric Rich
Modified: 2019-05-10 15:58 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-05-10 15:58:15 UTC
Target Upstream Version:
Embargoed:



Description Eric Rich 2019-04-03 19:06:54 UTC
Description of problem: 

Trying to pull audit logs from a cluster does not seem to work! 

> oc get --raw /api/v1/nodes/$node/proxy/logs/audit

Fundamentally, we don't seem to be able to hit anything behind the /proxy endpoint (such requests return 404). 
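
For context, a quick way to check whether anything at all is reachable behind the node proxy is to list the kubelet's log directory endpoint before asking for a specific file. This is a diagnostic sketch, not from the original report, and assumes the kubelet exposes its /logs/ directory listing through the same proxy path:

> $ node=$(oc get nodes -l node-role.kubernetes.io/master -o name | head -1 | awk -F/ '{print $2}')
> $ # List the log directories the kubelet serves; a 404 here points at the proxy itself
> $ oc get --raw /api/v1/nodes/$node/proxy/logs/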

Version-Release number of selected component (if applicable):

> $ oc version
> Client Version: version.Info{Major:"4", Minor:"0+", GitVersion:"v4.0.22", GitCommit:"d14915559e", GitTreeState:"", BuildDate:"2019-03-14T21:55:38Z", GoVersion:"", Compiler:"", Platform:""}
> Server Version: version.Info{Major:"1", Minor:"12+", GitVersion:"v1.12.4+0ba401e", GitCommit:"0ba401e", GitTreeState:"clean", BuildDate:"2019-03-31T22:28:12Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}

> $ openshift-install version
> openshift-install v4.0.22-201903311754-dirty
> built from commit 977c4db80a8005fd0fd0cea26996a455d526201f

> $ oc get clusterversion
> NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
> version   4.0.0-0.nightly-2019-04-02-081046   True        False         28h     Cluster version is 4.0.0-0.nightly-2019-04-02-081046


How reproducible: 100% 


Steps to Reproduce:
1. $ for node in $(oc get nodes -l node-role.kubernetes.io/master -o name | awk -F/ '{print $2}'); do echo $node; oc get --raw /api/v1/nodes/$node/proxy/logs/audit; done
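
For comparison, a possible alternative way to fetch the same logs is the `oc adm node-logs` subcommand; this is only a sketch, assuming that subcommand and its --role/--path flags exist in the client version in use:

> $ # Fetch the kube-apiserver audit log from every master node in one call
> $ oc adm node-logs --role=master --path=kube-apiserver/audit.log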


Actual results:

> Error from server (NotFound): the server could not find the requested resource
 
This error is seen on the client when the request for the audit logs (or any other log) is made. 


> E0403 18:55:34.658711       1 status.go:64] apiserver received an error that is not an metav1.Status: &field.Error{Type:"FieldValueInvalid", Field:"metadata.name", BadValue:"kube:admin", Detail:"may not contain \":\""}

This is seen when tailing the logs of all of the openshift-apiserver pods at the time the request is made. 
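
For reference, one way to surface that error across the API server pods (a sketch, assuming the pods live in the openshift-apiserver namespace and the message text is stable):

> $ for pod in $(oc get pods -n openshift-apiserver -o name); do
>     echo $pod
>     # Look for the "not an metav1.Status" error emitted when the proxy request fails
>     oc logs -n openshift-apiserver $pod | grep 'is not an metav1.Status'
>   done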


Expected results:

The user should get back a stream of the audit logs, pulled from /var/log/kube-apiserver/audit.log on the host. 
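
While the proxy path is broken, a possible way to confirm the file is actually present on the host is to read it directly through a debug pod; this is a workaround sketch, assuming `oc debug node/<node>` access to the masters:

> $ for node in $(oc get nodes -l node-role.kubernetes.io/master -o name | awk -F/ '{print $2}'); do
>     echo $node
>     # chroot into the host filesystem and sample the audit log directly
>     oc debug node/$node -- chroot /host tail -n 5 /var/log/kube-apiserver/audit.log
>   done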


Additional info:

We do this in CI so it should just work! 
> https://github.com/openshift/release/blob/master/ci-operator/templates/openshift/installer/cluster-launch-installer-e2e.yaml#L404

Comment 4 Michael Burke 2019-04-24 20:00:26 UTC
Xingxing -- Please review the PR: https://github.com/openshift/openshift-docs/pull/14480

Comment 5 Xingxing Xia 2019-04-25 01:50:38 UTC
Ying, please help review, thx

Comment 6 zhou ying 2019-04-26 02:19:26 UTC
Commented on the PR. Changed the status to track the issue.

Comment 7 zhou ying 2019-05-10 06:37:27 UTC
Found some typos; commented on the PR.

