Description of problem:
During a cluster upgrade from OCP 4.3.33 to 4.4.19, the upgrade appears to freeze for no apparent reason. Most operators are on version 4.4.19 and no operators are in a Degraded state.

Actual results:
The cluster upgrade does not appear to be progressing, and very little information is available for debugging.

Expected results:
The upgrade would complete, or at least the source of the problem would be clear.
For example, the current operator status is:

$ oc get co
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.4.19    True        False         False      1d
cloud-credential                           4.4.19    True        False         False      6d
cluster-autoscaler                         4.4.19    True        False         False      6d
console                                    4.4.19    True        False         False      1d
csi-snapshot-controller                    4.4.19    True        False         False      2d
dns                                        4.3.33    True        False         False      4h53m
etcd                                       4.4.19    True        False         False      1d
image-registry                             4.4.19    True        False         False      1s
ingress                                    4.4.19    True        False         False      6d
insights                                   4.4.19    True        False         False      14d
kube-apiserver                             4.4.19    True        False         False      1d
kube-controller-manager                    4.4.19    True        False         False      6d
kube-scheduler                             4.4.19    True        False         False      2d
kube-storage-version-migrator              4.4.19    True        False         False      2d
machine-api                                4.4.19    True        False         False      6d
machine-config                             4.3.33    True        False         False      53m
marketplace                                4.4.19    True        False         False      1d
monitoring                                 4.4.19    True        False         False      2d
network                                    4.3.33    True        False         False      6d
node-tuning                                4.4.19    True        False         False      3d
openshift-apiserver                        4.4.19    True        False         False      1d
openshift-controller-manager               4.4.19    True        False         False      2d
openshift-samples                          4.4.19    True        False         False      6d
operator-lifecycle-manager                 4.4.19    True        False         False      6d
operator-lifecycle-manager-catalog         4.4.19    True        False         False      6d
operator-lifecycle-manager-packageserver   4.4.19    True        False         False      1d
service-ca                                 4.4.19    True        False         False      1d
service-catalog-apiserver                  4.4.19    True        False         False      1d
service-catalog-controller-manager         4.4.19    True        False         False      1d
storage                                    4.4.19    True        False         False      6d
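In output this long, the operators still on the old version (dns, machine-config, and network, all at 4.3.33) are easy to miss. A minimal sketch of filtering for them, run here over a small pasted subset of the output above rather than against a live cluster (on a real cluster the same filter would be piped from `oc get co` directly):

```shell
# Sample subset of the `oc get co` output above, saved to a file for illustration.
cat <<'EOF' > co.txt
NAME             VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
dns              4.3.33    True        False         False      4h53m
etcd             4.4.19    True        False         False      1d
machine-config   4.3.33    True        False         False      53m
network          4.3.33    True        False         False      6d
EOF

# Print operators whose VERSION column does not match the target version,
# skipping the header row. On a live cluster:
#   oc get co | awk -v v=4.4.19 'NR > 1 && $2 != v {print $1, $2}'
awk -v v=4.4.19 'NR > 1 && $2 != v {print $1, $2}' co.txt
```

This prints the three lagging operators (dns, machine-config, network) with their versions.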
This is a 4.4 issue, so it should not block 4.6 GA. I haven't looked at the must-gather yet, but grepping for 'Result of work' should show which manifests are sticking.
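The grep mentioned above can be sketched as follows. The cluster-version operator logs its sync outcomes in 'Result of work' lines; the log file below is a fabricated, illustrative fragment (the path and message text are assumptions, not taken from this cluster's must-gather):

```shell
# Illustrative CVO log fragment; the file name and messages are hypothetical.
cat <<'EOF' > cvo.log
I0914 12:00:01.000000       1 sync_worker.go:555] Running sync 4.4.19 (force=false) on generation 3
I0914 12:05:01.000000       1 task_graph.go:611] Result of work: [Could not update deployment "openshift-dns-operator/dns-operator"]
EOF

# In a real must-gather, the CVO pod logs live under the gathered
# namespaces/openshift-cluster-version/ directory; grep them the same way.
grep 'Result of work' cvo.log
```

The matching lines name the manifests the CVO is retrying, which is usually enough to identify the stuck operator.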