Bug 2211592
Summary: | [ODF 4.12] [GSS] unknown parameter name "FORCE_OSD_REMOVAL" | |
---|---|---|---
Product: | [Red Hat Storage] Red Hat OpenShift Data Foundation | Reporter: | Malay Kumar parida <mparida>
Component: | ocs-operator | Assignee: | Malay Kumar parida <mparida>
Status: | CLOSED ERRATA | QA Contact: | Elad <ebenahar>
Severity: | medium | Docs Contact: |
Priority: | unspecified | |
Version: | 4.12 | CC: | amanzane, ebenahar, ikave, kramdoss, mparida, muagarwa, ocs-bugs, odf-bz-bot, sostapov
Target Milestone: | --- | Keywords: | Automation
Target Release: | ODF 4.12.5 | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | 4.12.5-1 | Doc Type: | No Doc Update
Doc Text: | | Story Points: | ---
Clone Of: | 2143944 | Environment: |
Last Closed: | 2023-07-26 16:57:57 UTC | Type: | ---
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | 2143944, 2211595 | |
Bug Blocks: | | |
Description
Malay Kumar parida  2023-06-01 07:45:31 UTC
I tested the BZ on a vSphere cluster with OCP 4.10 and ODF 4.9.10 (lower than 4.9.11). I performed the following steps:

1. Checked the ocs-osd-removal job command, which resulted in the expected error:

$ oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=0 -p FORCE_OSD_REMOVAL=false | oc create -n openshift-storage -f -
error: unknown parameter name "FORCE_OSD_REMOVAL"
error: no objects passed to create

2. Upgraded ODF from 4.9 to 4.10.

3. Checked the ocs-osd-removal job command again, which showed the expected output:

$ oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=0 -p FORCE_OSD_REMOVAL=false | oc create -n openshift-storage -f -
job.batch/ocs-osd-removal-job created

$ oc get jobs ocs-osd-removal-job
NAME                  COMPLETIONS   DURATION   AGE
ocs-osd-removal-job   1/1           32s        136m

4. Upgraded OCP from 4.10 to 4.11.

5. Upgraded ODF from 4.10 to 4.11.

6. Checked the ocs-osd-removal job command again, which showed the expected output:

$ oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=0 -p FORCE_OSD_REMOVAL=false | oc create -n openshift-storage -f -
Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "operator" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "operator" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "operator" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "operator" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
job.batch/ocs-osd-removal-job created

$ oc get jobs ocs-osd-removal-job
NAME                  COMPLETIONS   DURATION   AGE
ocs-osd-removal-job   1/1           7s         22s

7. Upgraded OCP from 4.11 to 4.12.

8. Upgraded ODF from 4.11 to 4.12.

9. Checked the ocs-osd-removal job command again, which showed the expected output:

$ oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=0 -p FORCE_OSD_REMOVAL=false | oc create -n openshift-storage -f -
job.batch/ocs-osd-removal-job created

$ oc get jobs ocs-osd-removal-job
NAME                  COMPLETIONS   DURATION   AGE
ocs-osd-removal-job   1/1           7s         12s

Versions:

OC version:
Client Version: 4.10.24
Server Version: 4.12.0-0.nightly-2023-07-15-021657
Kubernetes Version: v1.25.11+1485cc9

OCS version:
ocs-operator.v4.12.5-rhodf   OpenShift Container Storage   4.12.5-rhodf   ocs-operator.v4.11.9   Succeeded

Cluster version:
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.12.0-0.nightly-2023-07-15-021657   True        False         51m     Cluster version is 4.12.0-0.nightly-2023-07-15-021657

Rook version:
rook: v4.12.5-0.bc1e9806c3281090b58872e303e947ff5437c078
go: go1.18.10

Ceph version:
ceph version 16.2.10-172.el8cp (00a157ecd158911ece116ae43095de793ed9f389) pacific (stable)

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Red Hat OpenShift Data Foundation 4.12.5 security and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:4287
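A minimal triage sketch, not part of the original verification: the "unknown parameter name" error means the parameter is simply absent from the ocs-osd-removal Template on the affected build, so its parameter list can be inspected directly before piping the processed output into oc create. This assumes the ocs-osd-removal Template in the openshift-storage namespace used in the steps above.

$ oc -n openshift-storage get template ocs-osd-removal -o jsonpath='{.parameters[*].name}'

On a build that contains the fix, the output should list FORCE_OSD_REMOVAL alongside FAILED_OSD_IDS; on an affected build the name is missing, which is why oc process rejects -p FORCE_OSD_REMOVAL=false.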