Bug 2124207 - Snapshot is not getting into ready status and topolvm-node is in CrashLoopBackOff
Summary: Snapshot is not getting into ready status and topolvm-node is in CrashLoopBackOff
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: topolvm
Version: 4.12
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: ODF 4.12.0
Assignee: N Balachandran
QA Contact: Shay Rozen
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-09-05 09:19 UTC by Shay Rozen
Modified: 2023-08-09 17:03 UTC (History)
11 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-01-31 00:19:40 UTC
Embargoed:


Attachments (Terms of Use)
must gather. (247.32 KB, application/gzip)
2022-09-05 09:19 UTC, Shay Rozen


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2023:0551 0 None None None 2023-01-31 00:19:59 UTC

Description Shay Rozen 2022-09-05 09:19:26 UTC
Created attachment 1909597 [details]
must gather.

Description of problem (please be detailed as possible and provide log
snippets):
Running snapshot regression on 4.12, snapshots fail to reach Ready status, and when the resources are deleted topolvm-node goes into CrashLoopBackOff.


Version of all relevant components (if applicable):
odf 4.12.0-20

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
A snapshot can be created but never becomes ready, and after topolvm-node crashes it is no longer possible to work with the product.

Is there any workaround available to the best of your knowledge?
No

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
1

Is this issue reproducible?
Yes, 2/2 attempts.

Can this issue be reproduced from the UI?
NR

If this is a regression, please provide more details to justify this:
Yes

Steps to Reproduce:
1. Create a PVC and a pod.
2. Create a snapshot of the PVC.
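
The two steps can be expressed as manifests. This is an illustrative sketch: the PVC name, size, and storage class name are placeholders (the attached pod is omitted); only the odf-lvm-vg1 volume snapshot class name comes from the report below.

```yaml
# Hypothetical resource names/size; storageClassName is an assumption and
# should match the topolvm storage class actually in use.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-test
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
  storageClassName: odf-lvm-vg1
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: pvc-test-snapshot
spec:
  volumeSnapshotClassName: odf-lvm-vg1   # snapshot class from the report
  source:
    persistentVolumeClaimName: pvc-test
```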



Actual results:
Snapshot is not ready to use
Status:
  Bound Volume Snapshot Content Name:  snapcontent-66c974a2-5d6c-4541-892b-305fa3d1cd63
  Error:
    Message:     Failed to check and update snapshot content: failed to take snapshot of the volume cc6b87b6-8403-4149-af4e-e899f386d2ee: "rpc error: code = Internal desc = strconv.ParseUint: parsing \"-1\": invalid syntax"
    Time:        2022-09-05T09:03:47Z
  Ready To Use:  false

And while deleting the resources, topolvm-node crashes.

Expected results:
Snapshot should be ready to use and topolvm-node shouldn't crash when deleting the resource.

Additional info:
Name:         pvc-test-b07e77749c174a4fa285b86ba304227-snapshot-2e586e9e29b2400b958adcbaebcc01af
Namespace:    namespace-test-ede710c4797f42818ecc8c0dc
Labels:       <none>
Annotations:  <none>
API Version:  snapshot.storage.k8s.io/v1
Kind:         VolumeSnapshot
Metadata:
  Creation Timestamp:  2022-09-05T09:03:46Z
  Finalizers:
    snapshot.storage.kubernetes.io/volumesnapshot-as-source-protection
    snapshot.storage.kubernetes.io/volumesnapshot-bound-protection
  Generation:  1
  Managed Fields:
    API Version:  snapshot.storage.k8s.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .:
          v:"snapshot.storage.kubernetes.io/volumesnapshot-as-source-protection":
          v:"snapshot.storage.kubernetes.io/volumesnapshot-bound-protection":
    Manager:      Go-http-client
    Operation:    Update
    Time:         2022-09-05T09:03:46Z
    API Version:  snapshot.storage.k8s.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:spec:
        .:
        f:source:
          .:
          f:persistentVolumeClaimName:
        f:volumeSnapshotClassName:
    Manager:      kubectl-create
    Operation:    Update
    Time:         2022-09-05T09:03:46Z
    API Version:  snapshot.storage.k8s.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        .:
        f:boundVolumeSnapshotContentName:
        f:error:
          .:
          f:message:
          f:time:
        f:readyToUse:
    Manager:         Go-http-client
    Operation:       Update
    Subresource:     status
    Time:            2022-09-05T09:03:47Z
  Resource Version:  12430243
  UID:               66c974a2-5d6c-4541-892b-305fa3d1cd63
Spec:
  Source:
    Persistent Volume Claim Name:  pvc-test-b07e77749c174a4fa285b86ba304227
  Volume Snapshot Class Name:      odf-lvm-vg1
Status:
  Bound Volume Snapshot Content Name:  snapcontent-66c974a2-5d6c-4541-892b-305fa3d1cd63
  Error:
    Message:     Failed to check and update snapshot content: failed to take snapshot of the volume cc6b87b6-8403-4149-af4e-e899f386d2ee: "rpc error: code = Internal desc = strconv.ParseUint: parsing \"-1\": invalid syntax"
    Time:        2022-09-05T09:03:47Z
  Ready To Use:  false
Events:
  Type    Reason            Age   From                 Message
  ----    ------            ----  ----                 -------
  Normal  CreatingSnapshot  110s  snapshot-controller  Waiting for a snapshot namespace-test-ede710c4797f42818ecc8c0dc/pvc-test-b07e77749c174a4fa285b86ba304227-snapshot-2e586e9e29b2400b958adcbaebcc01af to be created by the CSI driver.

12:05:36 - MainThread - ocs_ci.ocs.ocp - ERROR  - Wait for VolumeSnapshot resource pvc-test-b07e77749c174a4fa285b86ba304227-snapshot-2e586e9e29b2400b958adcbaebcc01af at column READYTOUSE to reach desired condition true failed, last actual status was false

Comment 6 Shay Rozen 2022-09-13 13:39:53 UTC
Hi Nithya. Can you paste the link to the PR?

Comment 11 Shay Rozen 2022-09-21 11:07:35 UTC
Hi Nithya. Shouldn't it be on_qa now? The fix was merged 8 days ago.

Comment 23 errata-xmlrpc 2023-01-31 00:19:40 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat OpenShift Data Foundation 4.12.0 enhancement and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:0551

