Bug 2060697
| Summary: | [AWS] partitionNumber cannot work for specifying Partition number | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Huali Liu <huliu> |
| Component: | Cloud Compute | Assignee: | Joel Speed <jspeed> |
| Cloud Compute sub component: | Other Providers | QA Contact: | Huali Liu <huliu> |
| Status: | CLOSED ERRATA | Docs Contact: | |
| Severity: | medium | | |
| Priority: | medium | | |
| Version: | 4.11 | | |
| Target Milestone: | --- | | |
| Target Release: | 4.11.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2022-08-10 10:52:11 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Attachments: | | | |
Description
Huali Liu, 2022-03-04 02:22:44 UTC
More to do here before this is able to go through QE; the vendor directories in MAPA and MAO still need to be updated.

Waiting for an available nightly build to verify the bug.

Hi Joel,
I tried to verify this bug on 4.11.0-0.nightly-2022-03-08-191358, but there are two problems with it. Please take a look. Thanks!
1. "partitionNumber" gets removed unless I:
Disable the CVO: oc scale deployment -n openshift-cluster-version cluster-version-operator --replicas 0
Disable the MAO: oc scale deployment -n openshift-machine-api machine-api-operator --replicas 0
Delete the mutatingwebhookconfiguration: oc delete mutatingwebhookconfiguration machine-api
With those disabled, "partitionNumber" is no longer removed. Other fields, for example

    group:
      name: partitionpg

don't need this workaround.
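The disappearing field is consistent with the machine-api mutating webhook dropping properties it does not recognize. A toy Python sketch of that kind of pruning (the allowed-field set here is an assumption for illustration, not the actual webhook schema):

```python
# Toy sketch of schema-based field pruning; ALLOWED_GROUP_FIELDS is an
# assumption for illustration, not the real machine-api webhook schema.
ALLOWED_GROUP_FIELDS = {"name"}

def prune_group(group: dict) -> dict:
    """Drop any key the (assumed) schema does not list, as a webhook might."""
    return {k: v for k, v in group.items() if k in ALLOWED_GROUP_FIELDS}

pruned = prune_group({"name": "partitionpg", "partitionNumber": 3})
print(pruned)  # the nested partitionNumber is silently dropped
```

This also matches the observation that known fields like group.name survive while the unrecognized nested field does not.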
2. "partitionNumber" still doesn't work. I created 3 MachineSets:

first time

    placement:
      availabilityZone: us-east-2b
      region: us-east-2
    group:
      name: partitionpg
      partitionNumber: 3

The machine was created successfully, but on the AWS console "Partition number" shows 1. It should be 3.
second time

    placement:
      availabilityZone: us-east-2b
      region: us-east-2
    group:
      name: partitionpg
      partitionNumber: 3

The machine was created successfully, but on the AWS console "Partition number" shows 2. It should be 3.
third time

    placement:
      availabilityZone: us-east-2b
      region: us-east-2
    group:
      name: partitionpg
      partitionNumber: 8

The machine was created successfully, but on the AWS console "Partition number" shows 3. The machine creation should have failed instead, since partitionpg only has 7 partitions.
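The missing check Huali expected (machine creation failing when partitionNumber exceeds the group's partition count) could be sketched as below; the function name and error strings are hypothetical, not the actual MAO webhook code. AWS caps partition placement groups at 7 partitions.

```python
AWS_MAX_PARTITIONS = 7  # EC2 limit for partitions per partition placement group

def validate_partition_number(partition_number: int, group_partition_count: int) -> list:
    """Return validation errors; a hypothetical sketch, not the real webhook."""
    errors = []
    if not 1 <= partition_number <= AWS_MAX_PARTITIONS:
        errors.append(
            f"partitionNumber {partition_number} is outside the AWS range 1..{AWS_MAX_PARTITIONS}"
        )
    elif partition_number > group_partition_count:
        errors.append(
            f"partitionNumber {partition_number} exceeds the group's {group_partition_count} partitions"
        )
    return errors

# The third test case above (partitionNumber: 8 against a 7-partition group)
# would be rejected by a check like this instead of silently succeeding.
print(validate_partition_number(8, 7))
```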
I just tested this with cluster bot and the partitionNumber field is working for me with the latest nightly. The partition number in your example is not at the right level: it needs to be on the same level as group, not within group :) The other validations you've mentioned haven't been implemented yet.

Hi Joel,
I'm very sorry for my mistake. I changed partitionNumber to be on the same level as group, and it worked. The other validations are not within the scope of this bug, so I'm moving this to Verified.
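The field-level difference Joel points out can be illustrated with plain dictionaries (a sketch of why the nested copy had no effect; the controller's actual field handling may differ):

```python
# Provider-spec fragments from the bug, as dicts; only the field placement differs.
wrong = {"group": {"name": "partitionpg", "partitionNumber": 3}}   # nested inside group
right = {"group": {"name": "partitionpg"}, "partitionNumber": 3}   # sibling of group

def effective_partition_number(provider_spec: dict):
    # A consumer that reads only the top-level key never sees the nested copy,
    # so AWS auto-assigns partition numbers (1, 2, 3, ...) as observed above.
    return provider_spec.get("partitionNumber")

print(effective_partition_number(wrong), effective_partition_number(right))  # None 3
```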
    liuhuali@Lius-MacBook-Pro huali-test % oc get clusterversion
    NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
    version   4.11.0-0.nightly-2022-03-08-191358   True        False         9h      Cluster version is 4.11.0-0.nightly-2022-03-08-191358
Steps:
1. Access the AWS console and manually create a partition placement group with "Number of partitions" set to 5.
2. Create the AWSPlacementGroup:
    liuhuali@Lius-MacBook-Pro huali-test % oc get awsplacementgroup partitionpg2 -o yaml
    apiVersion: machine.openshift.io/v1
    kind: AWSPlacementGroup
    metadata:
      creationTimestamp: "2022-03-10T05:30:46Z"
      generation: 1
      name: partitionpg2
      namespace: openshift-machine-api
      resourceVersion: "113597"
      uid: d98ea863-7493-4750-bea2-d56a4ef11492
    spec:
      credentialsSecret:
        name: aws-cloud-credentials
      managementSpec:
        managed:
          groupType: Partition
          partition:
            count: 5
        managementState: Unmanaged
3. Reference the awsplacementgroup in the MachineSet:

    placement:
      availabilityZone: us-east-2b
      region: us-east-2
    group:
      name: partitionpg2
    partitionNumber: 3
    liuhuali@Lius-MacBook-Pro huali-test % oc get machine
    NAME                          PHASE     TYPE        REGION      ZONE         AGE
    huliu-aws411-k2t9g-a2-9ngvp   Running   m6i.large   us-east-2   us-east-2b   11m

4. Check on the AWS console: "Partition number" shows 3.
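As an alternative to eyeballing the console, the partition assignment can also be read from `aws ec2 describe-instances` output. A stdlib-only sketch that parses that JSON (the response shape follows the EC2 API; the sample instance ID below is fabricated for illustration):

```python
import json

def partition_numbers(describe_instances_json: str) -> dict:
    """Map InstanceId -> Placement.PartitionNumber from describe-instances JSON."""
    data = json.loads(describe_instances_json)
    out = {}
    for reservation in data.get("Reservations", []):
        for inst in reservation.get("Instances", []):
            out[inst["InstanceId"]] = inst.get("Placement", {}).get("PartitionNumber")
    return out

# Illustrative sample response (instance ID made up):
sample = json.dumps({
    "Reservations": [{
        "Instances": [{
            "InstanceId": "i-0123456789abcdef0",
            "Placement": {"GroupName": "partitionpg2", "PartitionNumber": 3},
        }]
    }]
})
print(partition_numbers(sample))  # {'i-0123456789abcdef0': 3}
```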
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: OpenShift Container Platform 4.11.0 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:5069