Bug 1669080
| Field | Value |
|---|---|
| Summary | CNS installation failed with "Unable to add device: Device /dev/vsda not found." |
| Product | OpenShift Container Platform |
| Component | Installer |
| Installer sub component | openshift-ansible |
| Status | CLOSED ERRATA |
| Severity | high |
| Priority | urgent |
| Version | 3.11.0 |
| Target Release | 3.11.z |
| Hardware | Unspecified |
| OS | Unspecified |
| Reporter | Qin Ping (piqin) |
| Assignee | Jose A. Rivera (jarrpa) |
| QA Contact | Qin Ping (piqin) |
| CC | akostadi, jarrpa, maschmid, ndevos, piqin, vrutkovs |
| Doc Type | No Doc Update |
| Type | Bug |
| Last Closed | 2019-03-14 02:17:59 UTC |
| Bug Blocks | 1668316, 1668335 |
Description
Qin Ping, 2019-01-24 09:44:15 UTC
Does the problem only occur when the specified glusterfs device is a symlink?

I believe this PR should resolve things: https://github.com/openshift/openshift-ansible/pull/11068

What version of OCS are you installing? Anything before ocs-3.11.1 should work with the PR from comment #2; ocs-3.11.1 and newer are expected to work with openshift-ansible-3.11.72.

Qin Ping is reporting 3.11.72 in the description. Past failures I see show it happens with v3.11.59.

(In reply to Niels de Vos from comment #3)
> What version of OCS are you installing? Anything before ocs-3.11.1 should
> work with the PR from comment #2, ocs-3.11.1 and newer are expected to work
> with openshift-ansible-3.11.72.

The inventory has this:

```
openshift_storage_glusterfs_image=registry.access.redhat.com/rhgs3/rhgs-server-rhel7
```

And https://access.redhat.com/containers/?tab=overview#/registry.access.redhat.com/rhgs3/rhgs-server-rhel7 currently lists 3.11.0-6 as the version. This means that https://github.com/openshift/openshift-ansible/pull/11068 is expected to address the problem. You would need to provide the :3.11.0-6 tag to openshift_storage_glusterfs_image, as (the working) :latest is not yet released and available from registry.access.redhat.com.

Agree.

Workaround: if you have an openshift-ansible version that consumes the HOST_DEV_DIR variable in the template, you can set it to "/dev" when it gets processed. That is what https://github.com/openshift/openshift-ansible/pull/11068/files does too.

Moving to ON_QA, as the referenced PR is in openshift-ansible-3.11.74-1 and later.

Still got the same error.
```
# oc version
oc v3.11.74
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO

openshift v3.11.74
kubernetes v1.11.0+d4cacc0

# oc exec glusterfs-storage-8pw87 -- rpm -qa | grep gluster
python2-gluster-3.12.2-25.el7rhgs.x86_64
glusterfs-server-3.12.2-25.el7rhgs.x86_64
gluster-block-0.2.1-28.el7rhgs.x86_64
glusterfs-api-3.12.2-25.el7rhgs.x86_64
glusterfs-cli-3.12.2-25.el7rhgs.x86_64
glusterfs-fuse-3.12.2-25.el7rhgs.x86_64
glusterfs-geo-replication-3.12.2-25.el7rhgs.x86_64
glusterfs-libs-3.12.2-25.el7rhgs.x86_64
glusterfs-3.12.2-25.el7rhgs.x86_64
glusterfs-client-xlators-3.12.2-25.el7rhgs.x86_64
```

```
[glusterfs]
host-1 ansible_user=root ansible_ssh_user=root glusterfs_devices="['/dev/vsda']"
host-2 ansible_user=root ansible_ssh_user=root glusterfs_devices="['/dev/vsda']"
host-3 ansible_user=root ansible_ssh_user=root glusterfs_devices="['/dev/vsda']"
```

Please reproduce the issue and grab the output of "oc logs <heketi_pod>".

Sorry for the last comment. I re-installed the CNS today and it succeeded, so I am marking this as verified.

Verified with: openshift-ansible-3.11.74-1.git.0.cde4c69.el7.noarch.rpm

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0407
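The symlink theory discussed in the comments can be checked directly on each glusterfs host before installation. A minimal sketch, assuming a POSIX shell on the host; `check_device` is a hypothetical helper for illustration, not part of openshift-ansible or heketi:

```shell
# Report whether a configured glusterfs device path is a symlink and, if so,
# where it points. The comments suggest that before the fix in openshift-ansible
# PR 11068 (which sets HOST_DEV_DIR to /dev in the template), a symlinked path
# such as /dev/vsda could be missing inside the glusterfs pod, producing
# "Unable to add device: Device /dev/vsda not found."
check_device() {
    dev="$1"
    if [ -L "$dev" ]; then
        # readlink -f follows the symlink chain to the canonical path
        echo "$dev is a symlink -> $(readlink -f "$dev")"
    elif [ -e "$dev" ]; then
        echo "$dev exists and is not a symlink"
    else
        echo "$dev does not exist"
    fi
}
```

If the path resolves through a symlink, one option is to point `glusterfs_devices` at the resolved target; the more robust fix, per the comments, is an openshift-ansible build that includes the referenced PR.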