Bug 1270474 - [clusterinfra_public_136]Can't record kube-proxy birthCry event when start openshift
Status: CLOSED CURRENTRELEASE
Product: OpenShift Origin
Classification: Red Hat
Component: Containers
Version: 3.x
Hardware/OS: Unspecified
Priority: medium
Severity: medium
Assigned To: Paul Weil
chaoyang
Depends On:
Blocks:
Reported: 2015-10-10 05:05 EDT by DeShuai Ma
Modified: 2015-11-23 16:14 EST (History)
3 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-11-23 16:14:35 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description DeShuai Ma 2015-10-10 05:05:54 EDT
Description of problem:
When starting openshift, the kube-proxy birthCry event is not recorded.

Version-Release number of selected component (if applicable):
openshift v1.0.6-328-gdf1f19e
kubernetes v1.1.0-alpha.1-653-g86b4e77

How reproducible:
Always

Steps to Reproduce:
1. Start openshift
[fedora@ip-172-18-2-238 sample-app]$ sudo /data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/openshift start --public-master=ec2-54-88-24-75.compute-1.amazonaws.com --images='openshift/origin-${component}:latest' --loglevel=5&> logs/openshift.log &
[1] 30551

2. Check the events
[fedora@ip-172-18-2-238 db-templates]$ oc get event
FIRSTSEEN   LASTSEEN   COUNT     NAME                       KIND                    SUBOBJECT                           REASON             SOURCE                      MESSAGE
52m         52m        1         ip-172-18-2-238            Node                                                        NodeReady          {kubelet ip-172-18-2-238}   Node ip-172-18-2-238 status is now: NodeReady
52m         52m        1         ip-172-18-2-238            Node                                                        Starting           {kubelet ip-172-18-2-238}   Starting kubelet.
51m         51m        1         ip-172-18-2-238            Node                                                        RegisteredNode     {controllermanager }        Node ip-172-18-2-238 event: Registered Node ip-172-18-2-238 in NodeController

Actual results:
2. No kube-proxy birthCry event ("Starting kube-proxy.") is recorded.

Expected results:
2. The event list should contain the kube-proxy birthCry event.

Additional info:
For comparison, the events from pure Kubernetes:
[root@dma bin]# ./kubectl get event
FIRSTSEEN   LASTSEEN   COUNT     NAME        KIND      SUBOBJECT   REASON           SOURCE                 MESSAGE
6s          6s         1         dma         Node                  Starting         {kube-proxy dma}       Starting kube-proxy.
2s          2s         1         127.0.0.1   Node                  Starting         {kubelet 127.0.0.1}    Starting kubelet.
2s          2s         1         127.0.0.1   Node                  NodeReady        {kubelet 127.0.0.1}    Node 127.0.0.1 status is now: NodeReady
1s          1s         1         127.0.0.1   Node                  RegisteredNode   {controllermanager }   Node 127.0.0.1 event: Registered Node 127.0.0.1 in NodeController
Comment 1 Avesh Agarwal 2015-10-16 08:21:29 EDT
I debugged the issue and am able to reproduce it. OpenShift uses a different entry point to start its proxy: RunProxy in ./pkg/cmd/server/kubernetes/node.go, which does not emit the birthCry event.

The Kubernetes proxy, by contrast, is started with NewProxyCommand and takes the same execution path as a pure kube-proxy.

You can get the proxy's birthCry event if you start the proxy on the node as follows:

[root@localhost origin]# _output/local/bin/linux/amd64/openshift start kubernetes proxy --logtostderr=true --v=10 --master=https://192.168.122.217:8443 --kubeconfig=./openshift.local.config/node-192.168.122.217/node.kubeconfig
Comment 2 Avesh Agarwal 2015-10-16 13:21:18 EDT
I have sent a patch to origin to address this:

https://github.com/openshift/origin/pull/5165
Comment 3 Paul Weil 2015-10-19 11:21:02 EDT
I believe this may be resolved in the rebase by the use of NewProxyServerDefault, which provides a recorder. I will double-check after it lands.
Comment 4 Paul Weil 2015-10-20 12:23:02 EDT
The PR linked above has been submitted to the merge queue.
Comment 5 DeShuai Ma 2015-10-20 21:52:08 EDT
[fedora@ip-172-18-12-2 sample-app]$ openshift version
openshift v1.0.6-803-g7c12c7a
kubernetes v1.2.0-alpha.1-1107-g4c8e6f4
etcd 2.1.2

1. Start openshift and check the kube-proxy events.
[fedora@ip-172-18-12-2 sample-app]$ oc get event
FIRSTSEEN   LASTSEEN   COUNT     NAME             KIND      SUBOBJECT   REASON           SOURCE                        MESSAGE
23s         23s        1         ip-172-18-12-2   Node                  Starting         {kube-proxy ip-172-18-12-2}   Starting kube-proxy.
19s         19s        1         ip-172-18-12-2   Node                  Starting         {kubelet ip-172-18-12-2}      Starting kubelet.
19s         19s        1         ip-172-18-12-2   Node                  NodeReady        {kubelet ip-172-18-12-2}      Node ip-172-18-12-2 status is now: NodeReady
18s         18s        1         ip-172-18-12-2   Node                  RegisteredNode   {controllermanager }          Node ip-172-18-12-2 event: Registered Node ip-172-18-12-2 in NodeController
