Is there a way to run the scenario again to produce the relevant log files?

1. Enable verbose logging of the scheduler:

oc patch kubescheduler cluster --type='merge' -p "$(cat <<- EOF
spec:
  logLevel: TraceAll
EOF
)"

2. Give the cluster a few minutes to propagate the config change.

3. Run the scenario.

4. Collect the log files of the scheduler:

for pod in $(oc get -n openshift-kube-scheduler pods -o name | grep kube-scheduler | grep -v guard) ; do
  oc logs -n openshift-kube-scheduler $pod kube-scheduler > ${pod/pod\/}.log
done

5. Ensure that the log files contain the relevant information:

grep "score=" openshift-kube-scheduler-xxxx-master-y.log
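A few optional helpers that may be useful around steps 1, 2, and 5 (a sketch, not verified against this exact cluster; the jsonpath expression assumes the standard kubescheduler operator API, where logLevel defaults to Normal):

# Confirm the config change was accepted (assumes .spec.logLevel per the operator API):
oc get kubescheduler cluster -o jsonpath='{.spec.logLevel}{"\n"}'

# Watch the scheduler pods roll over before running the scenario:
oc get pods -n openshift-kube-scheduler -w

# Afterwards, revert to the default log level to keep log volume down:
oc patch kubescheduler cluster --type='merge' -p "$(cat <<- EOF
spec:
  logLevel: Normal
EOF
)"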
Hi Guy, could you give it a try and help reproduce this bug? Thanks