Bug 2189431
| Summary: | [FaaS- Migration] After Provider Migration, OSD and MON Volumes attached to provider are empty names- Migration script failed to add correct Volume ID NAME | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat OpenShift Data Foundation | Reporter: | suchita <sgatfane> |
| Component: | odf-managed-service | Assignee: | Ritesh Chikatwar <rchikatw> |
| Status: | VERIFIED | QA Contact: | suchita <sgatfane> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.13 | CC: | kramdoss, odf-bz-bot, rchikatw, resoni, sgatfane |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
The script fetches the EBS volume tag key using `aws ec2 describe-volumes --volume-id $volumeID --filters Name=tag:kubernetes.io/created-for/pvc/namespace,Values=openshift-storage --region $region --query "Volumes[*].Tags" | jq .[] | jq -r '.[] | select(.Value == "owned") | .Key'`, which is then used to set the Name tag. If the default output format in `aws configure` is not set to json, the command fails. We added the `--output` flag to each command that fetches tags; that should solve the issue.

PR: https://github.com/rchikatw/odf-managed-service-migration/pull/34

The root cause of this bug is similar to https://bugzilla.redhat.com/show_bug.cgi?id=2189409

Suchita, PR https://github.com/rchikatw/odf-managed-service-migration/pull/34 is merged; please take the latest changes and verify the migration.

Migration completed successfully: post-migration volume screenshots and more details are available in the doc here [1].

[1] https://docs.google.com/document/d/13FeKH1zImP8pMqDDlpapNDkpk3iVmBg2FcRIHLPN7Xc/edit#bookmark=id.nlhp7bd5aewx

The Name values for the OSD and Mon volumes are updated correctly, hence marking this as verified. Verified with a ROSA 4.12 and ODF 4.12.3-12 FaaS provider migration setup.
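For reference, a minimal sketch of the corrected tag lookup, assuming the script iterates over each OSD/Mon volume ID. The region and volume ID values are illustrative, and the only functional change relative to the pipeline quoted above is the explicit `--output json`, so the jq step no longer depends on the default output format configured in `aws configure`.

```bash
#!/usr/bin/env bash
# Sketch only: force JSON output from the AWS CLI so the jq pipeline does not
# break when the profile's default output format is text or table.
set -euo pipefail

region="us-east-2"                 # illustrative value
volumeID="vol-02c134c1f2a17f43a"   # illustrative value, taken from the log below

# Fetch the tag key whose value is "owned" for this volume
# (the key that is later used when setting the Name tag).
tagKey=$(aws ec2 describe-volumes \
  --volume-ids "$volumeID" \
  --filters Name=tag:kubernetes.io/created-for/pvc/namespace,Values=openshift-storage \
  --region "$region" \
  --output json \
  --query "Volumes[*].Tags" \
  | jq -r '.[][] | select(.Value == "owned") | .Key')

echo "owned tag key for ${volumeID}: ${tagKey}"
```

Per the comment above, the same `--output json` addition would apply to every describe-volumes call in the script that fetches tags.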
Created attachment 1959740 [details]
AWS Console page of Volumes for OSD and Mons

Description of problem:
After provider migration, the OSD and MON volumes attached to the provider have empty names; the migration script failed to set the correct volume Name tag.

Version-Release number of selected component (if applicable):

$ oc get csv -n fusion-storage
NAME                                      DISPLAY                       VERSION             REPLACES                                  PHASE
managed-fusion-agent.v2.0.11              Managed Fusion Agent          2.0.11                                                        Succeeded
observability-operator.v0.0.20            Observability Operator        0.0.20              observability-operator.v0.0.19            Succeeded
ocs-operator.v4.13.0-168.stable           OpenShift Container Storage   4.13.0-168.stable                                             Succeeded
ose-prometheus-operator.4.10.0            Prometheus Operator           4.10.0                                                        Succeeded
route-monitor-operator.v0.1.500-6152b76   Route Monitor Operator        0.1.500-6152b76     route-monitor-operator.v0.1.498-e33e391   Pending

clusterVersion:
NAME      VERSION
version   4.12.13

How reproducible:
2/2

Steps to Reproduce:
1. Install an Appliance mode cluster
2. Install the FaaS agent provider
3. Run the migrate.sh script to migrate the cluster: ./migrate.sh -provider <oldClusterID> <newClusterID> -d -dev (repo checked out after PR#33 was merged)
4. Wait for the provider migration to complete
5. Check the OSD and Mon volume Name tags from the CLI or the AWS console (see the CLI sketch after the log excerpt below)

Actual results:
The provider migration script failed to update the Name tag on the provider's OSD and Mon volumes. The Name column is empty; see the attached AWS console screenshot.

Expected results:
The names of the OSD and Mon volumes should be updated correctly.

Additional info:
Found the following errors in the migration script output for the provider:
----------------------------------------------
...
deployment.apps/ocs-operator scaled
deployment.apps/ocs-provider-server scaled
Storage Provider endpoint: a3c12482a85cd4ec7abcca054045dc99-1658346607.us-east-2.elb.amazonaws.com:50051
Migration of provider is completed!
Cluster ID found, getting kubeconfig
Update EBS volume tags started
Updating tags and storageClass for volume Id vol-02c134c1f2a17f43a
parse error: Invalid numeric literal at line 1, column 43
parse error: Invalid numeric literal at line 1, column 43
VOLUMEMODIFICATION modifying 150 False 50 gp2 0 2023-04-25T04:58:26.000Z 3000 False 50 125 gp3 vol-02c134c1f2a17f43a
Updating tags and storageClass for volume Id vol-0036e3478ce7dc63d
parse error: Invalid numeric literal at line 1, column 5
parse error: Invalid numeric literal at line 1, column 5
VOLUMEMODIFICATION modifying 150 False 50 gp2 0 2023-04-25T04:58:31.000Z 3000 False 50 125 gp3 vol-0036e3478ce7dc63d
Updating tags and storageClass for volume Id vol-054b0d3e24f5cde93
parse error: Invalid numeric literal at line 1, column 43
parse error: Invalid numeric literal at line 1, column 43
VOLUMEMODIFICATION modifying 12288 False 4096 gp2 0 2023-04-25T04:58:37.000Z 12000 False 4096 250 gp3 vol-054b0d3e24f5cde93
Updating tags and storageClass for volume Id vol-0340b99770e483fc2
parse error: Invalid numeric literal at line 1, column 34
parse error: Invalid numeric literal at line 1, column 34
VOLUMEMODIFICATION modifying 12288 False 4096 gp2 0 2023-04-25T04:58:43.000Z 12000 False 4096 250 gp3 vol-0340b99770e483fc2
Updating tags and storageClass for volume Id vol-0f335fe2462de68a6
parse error: Invalid numeric literal at line 1, column 40
parse error: Invalid numeric literal at line 1, column 40
VOLUMEMODIFICATION modifying 12288 False 4096 gp2 0 2023-04-25T04:58:49.000Z 12000 False 4096 250 gp3 vol-0f335fe2462de68a6
Updating tags and storageClass for volume Id vol-07cee571d61185aac
parse error: Invalid numeric literal at line 1, column 43
parse error: Invalid numeric literal at line 1, column 43
VOLUMEMODIFICATION modifying 150 False 50 gp2 0 2023-04-25T04:58:54.000Z 3000 False 50 125 gp3 vol-07cee571d61185aac
Finished Updating EBS volume tags
Deleting the old/backup cluster
Deletion of Service is started
I: Service "2OtmXbK6rf058KWghkjO7lEhsX4" will start uninstalling now
Cluster ID found, getting kubeconfig
waiting for service to be deleted current state is deleting service
storageconsumer.ocs.openshift.io/storageconsumer-2a4ec76d-d72d-4faf-a443-d5a1f6abdc40 patched (no change)
storageconsumer.ocs.openshift.io/storageconsumer-6981ffc5-edf1-4170-b519-f0ed4beb3baf patched (no change)
storageconsumer.ocs.openshift.io "storageconsumer-2a4ec76d-d72d-4faf-a443-d5a1f6abdc40" deleted
storageconsumer.ocs.openshift.io "storageconsumer-6981ffc5-edf1-4170-b519-f0ed4beb3baf" deleted
storagesystem.odf.openshift.io/ocs-storagecluster-storagesystem patched
storagecluster.ocs.openshift.io/ocs-storagecluster patched
Deletion of Old Provider Service cluster is started, the service will be deleted soon.
Run the following commands in new terminal, sequentially/parellel to migrate the consumers.
...
-------------------------------------------------------------------------------------
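For step 5 of the reproduction steps, a minimal sketch of a CLI check that could confirm the Name tags after migration, assuming the same openshift-storage PVC-namespace tag filter the migration script uses; the region value is illustrative. After a successful run, every OSD and Mon volume should report a non-empty Name.

```bash
#!/usr/bin/env bash
# Sketch only: list the OSD/Mon EBS volumes and their Name tags post-migration.
set -euo pipefail

region="us-east-2"   # illustrative value

# Print VolumeId and Name tag for volumes created for PVCs in openshift-storage;
# an empty Name field reproduces the issue reported above.
aws ec2 describe-volumes \
  --region "$region" \
  --filters Name=tag:kubernetes.io/created-for/pvc/namespace,Values=openshift-storage \
  --output json \
  --query 'Volumes[*].{VolumeId: VolumeId, Name: Tags[?Key==`Name`] | [0].Value}'
```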