Bug 2176422 - Wrong error message when trying to upload a DV when the PVC already exists
Summary: Wrong error message when trying to upload a DV when the PVC already exists
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Storage
Version: 4.13.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.13.0
Assignee: Álvaro Romero
QA Contact: dalia
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-03-08 11:03 UTC by dalia
Modified: 2023-09-19 04:34 UTC (History)
CC: 2 users

Fixed In Version: v4.13.0.rhel9-1930
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-05-18 02:58:16 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github kubevirt kubevirt pull 9445 0 None Merged Improve error handling in image-upload when uploading a DV with already existing PVC 2023-03-22 12:48:46 UTC
Github kubevirt kubevirt pull 9492 0 None Merged [release-0.59] Improve error handling in image-upload when uploading a DV with already existing PVC 2023-04-03 03:03:34 UTC
Red Hat Issue Tracker CNV-26659 0 None None None 2023-03-08 11:05:08 UTC
Red Hat Product Errata RHSA-2023:3205 0 None None None 2023-05-18 02:58:25 UTC

Description dalia 2023-03-08 11:03:42 UTC
Description of problem:
When trying to upload a DV/PVC via virtctl, and the PVC already exists but the DV was garbage collected, the wrong error message is printed to the screen.

Version-Release number of selected component (if applicable):
4.13

How reproducible:
100%

Steps to Reproduce:
1. Create a DV and wait for the upload to succeed:

virtctl image-upload dv dv-name --size=18Gi --image-path=./Fedora... --block-volume --access-mode=ReadWriteOnce --storage-class=ocs-storagecluster-ceph-rbd --insecure

2. Run the same upload command again:


Actual results:
error message:
No DataVolume is associated with the existing PVC default/dv-name


Expected results:
error message:
PVC dv-name already successfully imported/cloned/updated

Comment 1 Álvaro Romero 2023-03-16 13:07:20 UTC
Hey @dafrank 

I created this PR[1] to improve the error handling so it matches the expected results, but I think we first need some reviews to see if the team is happy with the change.

I think we can't simply change the current error message to match the expected result since we don't know if the PVC exists due to an already garbage-collected DV or if it was just created with the same name.

In the PR, I'm checking if the PVC contains the DeleteAfterCompletion annotation to determine if it was created by a DV or not. However, I'm also not 100% happy with that because the PVC could have been created by a completely different DV with the same name. I still think it's a good solution though. 

[1] https://github.com/kubevirt/kubevirt/pull/9445
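The annotation check described above can be sketched roughly as follows. This is a minimal illustration, not the actual PR code: the helper name and the stand-in annotations map are assumptions, and the annotation key is the CDI DeleteAfterCompletion annotation the comment refers to.

```go
package main

import "fmt"

// Annotation CDI sets on PVCs it creates for DataVolumes that may be
// garbage-collected after completion (key as used in CDI).
const deleteAfterCompletion = "cdi.kubevirt.io/storage.deleteAfterCompletion"

// dvGarbageCollected reports whether an existing PVC looks like the
// leftover of a garbage-collected DataVolume rather than an unrelated
// PVC that merely shares the name. pvcAnnotations stands in for the
// PVC's ObjectMeta.Annotations map.
func dvGarbageCollected(pvcAnnotations map[string]string) bool {
	return pvcAnnotations[deleteAfterCompletion] == "true"
}

func main() {
	pvc := map[string]string{deleteAfterCompletion: "true"}
	if dvGarbageCollected(pvc) {
		fmt.Println("DataVolume already garbage-collected: Assuming PVC is successfully populated")
	} else {
		fmt.Println("No DataVolume is associated with the existing PVC")
	}
}
```

As the comment notes, this heuristic cannot distinguish a garbage-collected DV from a different, since-deleted DV that happened to use the same name; it only tells whether some DV created the PVC.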

Comment 2 Yan Du 2023-03-22 13:26:33 UTC
Alvaro, could you please cherrypick the main branch PR?

Comment 3 Álvaro Romero 2023-03-22 13:36:16 UTC
Sure, thanks for noticing.

Comment 4 Yan Du 2023-04-12 10:20:05 UTC
Test on CNV-v4.13.0.rhel9-2036

$ virtctl image-upload dv dv-name --size=18Gi --image-path=./Fedora...  --block-volume --access-mode=ReadWriteOnce --storage-class=ocs-storagecluster-ceph-rbd --insecure
DataVolume already garbage-collected: Assuming PVC default/dv-name is successfully populated

Comment 6 errata-xmlrpc 2023-05-18 02:58:16 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Virtualization 4.13.0 Images security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:3205

Comment 7 Red Hat Bugzilla 2023-09-19 04:34:21 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days

