Description of problem:
On a dual-NIC setup, killing the phc2sys process generates an additional event for the master interface.

Version-Release number of selected component (if applicable):
Client Version: 4.8.0
Server Version: 4.11.0-0.nightly-2022-05-25-193227
Kubernetes Version: v1.23.3+ad897c4

How reproducible:
Always

Steps to Reproduce:
1. Log in to the host.
2. Follow the cloud-event and linuxptp daemon logs.
3. Kill the phc2sys process.

Actual results:
Events are generated for both the realtime clock and the master interface.

Expected results:
Events should be generated only for the realtime clock.

Additional info:

phc2sys[9382.307]: [ptp4l.1.config] CLOCK_REALTIME phc offset 8 s2 freq -87153 delay 499
E0606 21:32:58.021528 224694 daemon.go:445] cmdRun() error waiting for phc2sys: signal: killed
I0606 21:32:58.021588 224694 daemon.go:334] phc2sys[1654551178]:[ptp4l.1.config] PTP_PROCESS_STATUS:0
time="2022-06-06T21:32:58Z" level=info msg=" publishing event for (profile bc2) ens1fx/master with last state LOCKED and current clock state FREERUN and offset -9999999999999999 for ( Max/Min Threshold 100/-100 )"
time="2022-06-06T21:32:58Z" level=debug msg="event sent {\n \"id\": \"032a2dec-7177-4972-bd0f-fc332e773ff4\",\n \"type\": \"event.sync.ptp-status.ptp-state-change\",\n \"source\": \"/cluster/cnfde4.ptp.lab.eng.bos.redhat.com/ptp/ens1fx/master\",\n \"dataContentType\": \"application/json\",\n \"time\": \"2022-06-06T21:32:58.022065112Z\",\n \"data\": {\n \"version\": \"v1\",\n \"values\": [\n {\n \"resource\": \"/sync/ptp-status/lock-state\",\n \"dataType\": \"notification\",\n \"valueType\": \"enumeration\",\n \"value\": \"FREERUN\"\n },\n {\n \"resource\": \"/sync/ptp-status/lock-state\",\n \"dataType\": \"metric\",\n \"valueType\": \"decimal64.3\",\n \"value\": \"-1e+16\"\n }\n ]\n }\n }"
time="2022-06-06T21:32:58Z" level=info msg=" publishing event for (profile bc2) CLOCK_REALTIME with last state LOCKED and current clock state FREERUN and offset -9999999999999999 for ( Max/Min Threshold 100/-100 )"
time="2022-06-06T21:32:58Z" level=debug msg="event sent {\n \"id\": \"1edd0566-df86-4ec4-9077-23e8972cb334\",\n \"type\": \"event.sync.sync-status.os-clock-sync-state-change\",\n \"source\": \"/cluster/cnfde4.ptp.lab.eng.bos.redhat.com/ptp/CLOCK_REALTIME\",\n \"dataContentType\": \"application/json\",\n \"time\": \"2022-06-06T21:32:58.022911658Z\",\n \"data\": {\n \"version\": \"v1\",\n \"values\": [\n {\n \"resource\": \"/sync/sync-status/os-clock-sync-state\",\n \"dataType\": \"notification\",\n \"valueType\": \"enumeration\",\n \"value\": \"FREERUN\"\n },\n {\n \"resource\": \"/sync/sync-status/os-clock-sync-state\",\n \"dataType\": \"metric\",\n \"valueType\": \"decimal64.3\",\n \"value\": \"-1e+16\"\n }\n ]\n }\n }"
time="2022-06-06T21:32:58Z" level=info msg=" publishing event for ( profile bc2) ens1fx/master with last state FREERUN and current clock state LOCKED and offset -5 for ( Max/Min Threshold 100/-100 )"
time="2022-06-06T21:32:58Z" level=debug msg="event sent {\n \"id\": \"032a2dec-7177-4972-bd0f-fc332e773ff4\",\n \"type\": \"event.sync.ptp-status.ptp-state-change\",\n \"source\": \"/cluster/cnfde4.ptp.lab.eng.bos.redhat.com/ptp/ens1fx/master\",\n \"dataContentType\": \"application/json\",\n \"time\": \"2022-06-06T21:32:58.037504204Z\",\n \"data\": {\n \"version\": \"v1\",\n \"values\": [\n {\n \"resource\": \"/sync/ptp-status/lock-state\",\n \"dataType\": \"notification\",\n \"valueType\": \"enumeration\",\n \"value\": \"LOCKED\"\n },\n {\n \"resource\": \"/sync/ptp-status/lock-state\",\n \"dataType\": \"metric\",\n \"valueType\": \"decimal64.3\",\n \"value\": \"-5\"\n }\n ]\n }\n }"
time="2022-06-06T21:32:59Z" level=error msg="error reading socket input, retrying"
time="2022-06-06T21:32:59Z" level=info msg=" publishing event for ( profile bc2) CLOCK_REALTIME with last state FREERUN and current clock state LOCKED and offset -5 for ( Max/Min Threshold 100/-100 )"
time="2022-06-06T21:32:59Z" level=debug msg="event sent {\n \"id\": \"1edd0566-df86-4ec4-9077-23e8972cb334\",\n \"type\": \"event.sync.sync-status.os-clock-sync-state-change\",\n \"source\": \"/cluster/cnfde4.ptp.lab.eng.bos.redhat.com/ptp/CLOCK_REALTIME\",\n \"dataContentType\": \"application/json\",\n \"time\": \"2022-06-06T21:32:59.15369364Z\",\n \"data\": {\n \"version\": \"v1\",\n \"values\": [\n {\n \"resource\": \"/sync/sync-status/os-clock-sync-state\",\n \"dataType\": \"notification\",\n \"valueType\": \"enumeration\",\n \"value\": \"LOCKED\"\n },\n {\n \"resource\": \"/sync/sync-status/os-clock-sync-state\",\n \"dataType\": \"metric\",\n \"valueType\": \"decimal64.3\",\n \"value\": \"-5\"\n }\n ]\n }\n }"
time="2022-06-06T21:33:00Z" level=error msg="failed to send(TO): /cluster/node/cnfde4.ptp.lab.eng.bos.redhat.com/sync/ptp-status/lock-state result context deadline exceeded "
time="2022-06-06T21:33:00Z" level=debug msg="posting event status FAILED to publisher /cluster/node/cnfde4.ptp.lab.eng.bos.redhat.com/sync/ptp-status/lock-state"
time="2022-06-06T21:33:00Z" level=error msg="failed to send(TO): /cluster/node/cnfde4.ptp.lab.eng.bos.redhat.com/sync/sync-status/os-clock-sync-state result context deadline exceeded "
time="2022-06-06T21:33:00Z" level=debug msg="posting event status FAILED to publisher /cluster/node/cnfde4.ptp.lab.eng.bos.redhat.com/sync/sync-status/os-clock-sync-state"
time="2022-06-06T21:33:00Z" level=error msg="failed to send(TO): /cluster/node/cnfde4.ptp.lab.eng.bos.redhat.com/sync/ptp-status/lock-state result context deadline exceeded "
time="2022-06-06T21:33:00Z" level=debug msg="posting event status FAILED to publisher /cluster/node/cnfde4.ptp.lab.eng.bos.redhat.com/sync/ptp-status/lock-state"
time="2022-06-06T21:33:01Z" level=error msg="failed to send(TO): /cluster/node/cnfde4.ptp.lab.eng.bos.redhat.com/sync/sync-status/os-clock-sync-state result context deadline exceeded "
time="2022-06-06T21:33:01Z" level=debug msg="posting event status FAILED to publisher /cluster/node/cnfde4.ptp.lab.eng.bos.redhat.com/sync/sync-status/os-clock-sync-state"
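For context, the expected behavior can be sketched as a mapping from the terminated linuxptp process to the clock resources it disciplines: phc2sys synchronizes only the OS clock (CLOCK_REALTIME) from the PHC, while ptp4l disciplines the PHC on the PTP interface, so a phc2sys exit should move only the realtime clock to FREERUN. This is a hypothetical illustration of that expectation, not the daemon's actual code; the names affectedResources and onProcessExit are invented for the sketch.

```go
package main

import "fmt"

// Hypothetical mapping: which sync resources each linuxptp process disciplines.
// phc2sys syncs CLOCK_REALTIME from the PHC; ptp4l syncs the PHC on the
// PTP interface (ens1fx/master in the logs above). Names are illustrative.
var affectedResources = map[string][]string{
	"phc2sys": {"CLOCK_REALTIME"},
	"ptp4l":   {"ens1fx/master"},
}

// onProcessExit returns the resources that should transition to FREERUN
// when the given process dies.
func onProcessExit(process string) []string {
	return affectedResources[process]
}

func main() {
	// Per the expected results, killing phc2sys should yield a FREERUN
	// event only for CLOCK_REALTIME, not for the master interface.
	for _, r := range onProcessExit("phc2sys") {
		fmt.Printf("publishing FREERUN event for %s\n", r)
	}
}
```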
obochan, could you help verify this bug? 06-24 is code freeze day.
The issue wasn't reproducible on the latest build:

[obochan@obochan ~]$ oc version
Client Version: 4.10.18
Server Version: 4.11.0-fc.3
Kubernetes Version: v1.24.0+284d62a
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: OpenShift Container Platform 4.11.0 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:5069