Bug 1660280
| Summary: | Block device creation fails "Create Block Volume Failed:failed to configure on xxx" in OCS 3.11.1 OCP 3.11.51 | | |
| --- | --- | --- | --- |
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Neha Berry <nberry> |
| Component: | rhgs-server-container | Assignee: | Niels de Vos <ndevos> |
| Status: | CLOSED ERRATA | QA Contact: | Neha Berry <nberry> |
| Severity: | urgent | Docs Contact: | |
| Priority: | urgent | | |
| Version: | ocs-3.11 | CC: | amukherj, asambast, bgoyal, dapark, hchiramm, jarrpa, jmulligan, knarra, kramdoss, madam, mszczewski, mtaru, nberry, ndevos, nick, nschuetz, pdwyer, pkarampu, pprakash, prasanna.kalever, puebele, rhs-bugs, sankarshan, sarora, suprasad, tparsons, vbellur, vinug, xiubli |
| Target Milestone: | --- | Keywords: | Regression, TestBlocker, ZStream |
| Target Release: | OCS 3.11.1 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | ocs/rhgs-server-rhel7:3.11.1-5 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-02-07 04:12:47 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1662312 | | |
| Bug Blocks: | 1644160 | | |
Description Neha Berry 2018-12-18 03:30:37 UTC
Doing a PoC now with a customer and we are hitting this issue exactly. Please advise...

The rhgs-server container image needs to get the updated version of update-params.sh that configures the /dev rbind-mount. The change has been posted upstream as PR#115. The current version of the script is at https://github.com/gluster/gluster-containers/blob/45497f475a9ff008e35dc7da8bbd43e77ecdbcc2/CentOS/update-params.sh

The changes to the daemonset explained in comment #15 and comment #16 will be included in cns-deploy through bug 1653571 and pushed into openshift-ansible (bug 1662312).

Still getting this error on the recently released v3.11.59:

```
[kubeexec] ERROR 2019/01/10 20:15:48 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [bash -c "set -o pipefail && gluster-block delete vol_f022f1e90cb33b0e76fb4faa4295ed69/blockvol_fd50002df6d075d5c290b2bff0bd5e4e --json |tee /dev/stderr"] on glusterfs-storage-wxhlv: Err[command terminated with exit code 2]:
Stdout [{ "RESULT": "FAIL", "errCode": 2, "errMsg": "block vol_f022f1e90cb33b0e76fb4faa4295ed69\/blockvol_fd50002df6d075d5c290b2bff0bd5e4e doesn't exist" } ]:
Stderr [{ "RESULT": "FAIL", "errCode": 2, "errMsg": "block vol_f022f1e90cb33b0e76fb4faa4295ed69\/blockvol_fd50002df6d075d5c290b2bff0bd5e4e doesn't exist" } ]
```

(In reply to Nicholas Nachefski from comment #35)
> still getting this error on the recently released v3.11.59.
> [...]

That may be a non-fatal error from when it tries to clean up after a create volume error. Was there an earlier error in the logs for a create command?

Hi, the block volume creation is successful in OCP 3.11.67-1 and OCS 3.11.1 (the latest available builds).

This looks like it might be present as an issue in 3.11.43 as well, so it may have been introduced earlier than originally thought.

(In reply to Mark Szczewski from comment #39)
> This looks like it might be present as an issue in 3.11.43 as well, so it
> may have been introduced earlier than originally thought.

The customer made a mistake when reporting the issue with 3.11.43. They were not running a complete teardown and used the 3.11.59 playbooks to do the install, so the issue would still have been present. 3.11.43 does not show this issue!

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:0287
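For context on the /dev rbind-mount mentioned in the fix above: the actual change lives in update-params.sh upstream (PR#115, linked above) and is not reproduced here. The shell sketch below only illustrates the general technique of recursively bind-mounting /dev with shared propagation; the target path is a placeholder, not the path the shipped script uses.

```sh
# Illustrative sketch only -- not the actual update-params.sh change.
# A recursive bind mount ("rbind") of /dev carries its submounts along, and
# shared propagation keeps device nodes and mounts created later (for example
# by gluster-block) visible on both sides of the mount.
mount --rbind /dev /target/dev      # /target/dev is a placeholder path
mount --make-rshared /target/dev    # propagate future mount events recursively
```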
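One way to follow up on the question above (whether an earlier create command failed before the delete cleanup ran) is to search the heketi pod logs for gluster-block create/delete entries. A rough sketch, assuming the default OCS object names (deploymentconfig heketi-storage in the glusterfs project); adjust the names to your installation:

```sh
# Assumed names: project "glusterfs", deploymentconfig "heketi-storage".
oc logs dc/heketi-storage -n glusterfs \
  | grep -E 'gluster-block (create|delete)' \
  | head -n 20
```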
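Since the last comments turn on which image the glusterfs pods were actually running, checking the deployed image tag against the "Fixed In Version" field above can rule out a stale install. A sketch assuming the default daemonset name glusterfs-storage in the glusterfs project; adjust to your environment:

```sh
# Assumed names: project "glusterfs", daemonset "glusterfs-storage".
oc get daemonset glusterfs-storage -n glusterfs \
  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
# Expect rhgs-server-rhel7:3.11.1-5 or later (the "Fixed In Version" above).
```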