https://openshift-gce-devel.appspot.com/build/origin-ci-test/pr-logs/pull/22858/pull-ci-openshift-origin-master-e2e-aws/8784#openshift-tests-k8sio-sig-node-mount-propagation-should-propagate-mounts-to-the-host-suiteopenshiftconformanceparallel-suitek8s

The test is disabled; it requires SSH (which we can't do anymore), so the test needs to be updated.

    fail [k8s.io/kubernetes/test/e2e/node/mount_propagation.go:143]: Unexpected error:
    failed running "sudo mkdir \"/var/lib/kubelet/mount-propagation-2229\"/host; sudo mount -t tmpfs e2e-mount-propagation-host \"/var/lib/kubelet/mount-propagation-2229\"/host; echo host > \"/var/lib/kubelet/mount-propagation-2229\"/host/file":
    error getting SSH client to core@:22: dial tcp :22: connect: connection refused (exit code 0, stderr )
    occurred
The whole point of the test is to check that mounts get propagated to the *host*. We could do dirty tricks with a privileged pod + nsenter, but it would be better to fix SSH access to nodes; many tests use it. The test framework allows for KUBE_SSH_BASTION and similar env variables. This worked for me with the ssh bastion from https://github.com/eparis/ssh-bastion:

    KUBE_SSH_USER=core KUBE_SSH_KEY_PATH=~/.ssh/libra.pem KUBE_SSH_BASTION=a4fff603182c211e998df0697a41eb04-67285076.us-east-2.elb.amazonaws.com:22 go run hack/e2e.go -- --test --test_args="--ginkgo.focus=Mount.propagation -kubeconfig=$KUBECONFIG -host https://api.jsafrane-dev.devcluster.openshift.com:6443" --check-version-skew=false

Can openshift-tests install the bastion and set a couple of env variables? Does e2e use a "well-known" SSH key? I can see that KUBE_SSH_KEY_PATH is set to /tmp/cluster/ssh-privatekey, so is the bastion setup the only missing part?
Handing this over to Jan.
Handing over to the owners of ci-operator/templates/openshift/installer/cluster-launch-installer-e2e.yaml (I hope I got the right component). Some e2e tests need SSH access to hosts. There are traces of SSH setup in cluster-launch-installer-e2e: https://github.com/openshift/release/blob/ebeb6e337f0e6a9ce8bb4299834096038d3213fd/ci-operator/templates/openshift/installer/cluster-launch-installer-e2e.yaml#L143; we just need to complete it so that openshift-tests gets all the env variables and the bastion set up.
Sorry, as a matter of policy DPTP does not own the content of tests in that manner. Such an approach would not scale. Please feel free to make the changes you feel are appropriate and tag reviewers from the set of developers who have written and edited the template.
ssh bastion setup before tests start: https://github.com/openshift/release/pull/4161
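For context, the bastion approach is just a standard SSH jump host: the e2e framework dials the bastion and tunnels to the node's private address. A minimal client-side sketch, reusing the bastion address and key from the earlier comment; the node address pattern is an assumption (private subnet IPs):

```
# ~/.ssh/config sketch -- node Host pattern is illustrative, not from the bug
Host bastion
    HostName a4fff603182c211e998df0697a41eb04-67285076.us-east-2.elb.amazonaws.com
    User core
    IdentityFile ~/.ssh/libra.pem

Host 10.0.*.*
    User core
    ProxyJump bastion
    IdentityFile ~/.ssh/libra.pem
```

KUBE_SSH_BASTION makes the test framework do the equivalent of this ProxyJump internally, which is why only the bastion needs a public endpoint.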
This is not 4.2 blocker
Clayton does not like ssh bastion, so we're back at the beginning. Mount propagation test must be redesigned upstream not to use ssh.
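A sketch of the redesign, along the lines of the privileged pod + nsenter idea mentioned earlier. This is an assumption about how the upstream fix could look, not the actual implementation; the pod name and image are illustrative (the image must ship nsenter from util-linux):

```yaml
# Hypothetical host-exec pod: hostPID + privileged lets us enter the
# host mount namespace via PID 1, replacing the SSH step of the test.
apiVersion: v1
kind: Pod
metadata:
  name: host-exec
spec:
  hostPID: true
  containers:
  - name: host-exec
    image: registry.fedoraproject.org/fedora:latest  # assumption: provides nsenter
    command: ["sleep", "3600"]
    securityContext:
      privileged: true
```

The test could then run its host-side commands with something like `kubectl exec host-exec -- nsenter -t 1 -m -- sh -c 'mkdir .../host; mount -t tmpfs e2e-mount-propagation-host .../host'` instead of SSH.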
Upstream PR: https://github.com/kubernetes/kubernetes/pull/82424
"[k8s.io] [sig-node] Mount propagation should propagate mounts to the host [Suite:openshift/conformance/parallel] [Suite:k8s]", the test passed in https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/logs/release-openshift-origin-installer-e2e-aws-4.3/1042 with 4.3.0-0.ci-2019-10-21-234502
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:0062