Bug 2216038 - Datavolumes take a long time to be created, jwt token expires [NEEDINFO]
Summary: Datavolumes take a long time to be created, jwt token expires
Keywords:
Status: ASSIGNED
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Storage
Version: 4.10.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.14.0
Assignee: Michael Henriksen
QA Contact: Natalie Gavrielov
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-06-19 20:54 UTC by Sean Haselden
Modified: 2023-08-04 19:06 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Target Upstream Version:
Embargoed:
yadu: needinfo? (mhenriks)
alitke: needinfo? (mhenriks)


Attachments


Links
System ID: Red Hat Issue Tracker CNV-30083
Private: 0 | Priority: None | Status: None | Summary: None
Last Updated: 2023-06-19 20:56:49 UTC

Description Sean Haselden 2023-06-19 20:54:59 UTC
Description of problem:


Frequently, when creating virtual machines, we see them stuck in Pending forever. During investigation we saw DataVolumes that fail to be created and are stuck forever with no status update. Looking at the logs of the cdi-deployment pod in the openshift-cnv namespace, we see entries like this:
```
{"level":"error","ts":<timestamp>,"logger":"controller.datavolume-controller","msg":"Reconciler error", "name":<dv name>,"namespace":<namespace>,"error":"error verifying token: square/go-jose/jwt: validation failed, token is expired (exp)","errorVerbose":"square/go-jose/jwt validation failed, token is expired\nerror verifying token\nkubevirt.io/containerized-data-importer/pkg/controller/.validateCloneTokenDW <snip>"}
```
After some investigation we found the clone token in the DV annotations:
```
metadata:
  annotations:
    cdi.kubevirt.io/storage.clone.token: <jwt>
```
The JWT is in fact expired; it is only valid for 5 minutes.
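
The expiry can be confirmed offline by decoding the token's payload (no signature verification) and reading the iat/exp claims. A minimal sketch using only the Go standard library; the token string is a placeholder for the annotation value above:
```
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"strings"
	"time"
)

func main() {
	token := "<jwt>" // placeholder: paste the cdi.kubevirt.io/storage.clone.token value
	parts := strings.Split(token, ".")
	if len(parts) != 3 {
		panic("not a JWT")
	}
	// JWT payloads are base64url-encoded without padding.
	payload, err := base64.RawURLEncoding.DecodeString(parts[1])
	if err != nil {
		panic(err)
	}
	var claims struct {
		Iat int64 `json:"iat"` // issued-at, seconds since epoch
		Exp int64 `json:"exp"` // expiry, seconds since epoch
	}
	if err := json.Unmarshal(payload, &claims); err != nil {
		panic(err)
	}
	fmt.Println("issued: ", time.Unix(claims.Iat, 0).UTC())
	fmt.Println("expires:", time.Unix(claims.Exp, 0).UTC()) // exp - iat is 5 minutes here
	fmt.Println("expired:", time.Now().After(time.Unix(claims.Exp, 0)))
}
```
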
The status of the DV is just empty like this:
```
status: {}
```


This does not reproduce every time.

In the initial set of logs it took a little more than 8 minutes from when the DV was created (as can be seen in its creationTimestamp) until the first log line: 2023-06-05T06:44:58Z -> 1685948026.3881886 (2023-06-05T06:53:46Z).
Regarding the time sync issue suggested, I verified there is no difference in the time between different nodes in the cluster, and they are all connected to the same NTP server.
The logs from this time are already attached, including the datavolume yaml.

dv-expire.tar.gz/dv.yaml:  creationTimestamp: "2023-06-05T06:44:58Z"
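
For reference, converting the log's epoch ts back to UTC and diffing it against the creationTimestamp above; a quick stdlib sketch with the values copied from the attachments:
```
package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from dv.yaml and the first cdi-deployment log line below.
	created, _ := time.Parse(time.RFC3339, "2023-06-05T06:44:58Z")
	firstLog := time.Unix(0, int64(1685948026.3881886*1e9))

	fmt.Println(firstLog.UTC().Format(time.RFC3339)) // 2023-06-05T06:53:46Z
	fmt.Println(firstLog.Sub(created))               // ~8m48s, well past the 5m token lifetime
}
```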

===
cdi-extended.tar.gz/cdi-deployment.log
===


{"level":"info","ts":1685948026.3881886,"logger":"controller.datavolume-controller","msg":"Initializing transfer","Datavolume":"mongodb/affected-vm-1-rootdisk"}
{"level":"error","ts":1685948026.3953767,"logger":"controller.datavolume-controller","msg":"Reconciler error","name":"affected-vm-1-rootdisk","namespace":"mongodb","error":"error verifying token: square/go-jose/jwt: validation failed, token is expired (exp)","errorVerbose":"square/go-jose/jwt: validation failed, token is expired (exp)\nerror verifying token\nkubevirt.io/containerized-data-importer/pkg/controller.validateCloneTokenDV\n\t/remote-source/app/pkg/controller/util.go:876\nkubevirt.io/containerized-data-importer/pkg/controller.(*DatavolumeReconciler).initTransfer\n\t/remote-source/app/pkg/controller/datavolume-controller.go:1156\nkubevirt.io/containerized-data-importer/pkg/controller.(*DatavolumeReconciler).doCrossNamespaceClone\n\t/remote-source/app/pkg/controller/datavolume-controller.go:896\nkubevirt.io/containerized-data-importer/pkg/controller.(*DatavolumeReconciler).reconcileSmartClonePvc\n\t/remote-source/app/pkg/


The customer then managed to reproduce this issue in a pre-prod online environment by creating a few hundred VMs. 100 of the VMs have a DataVolume configuration that doesn't work: it tries to copy the PVC from a different storage class. The rest of the VMs are completely regular and should be created normally, yet there are 292 DataVolumes that took more than 5 minutes to be acknowledged and are stuck in limbo since their JWT expired. An example of the kind of cross-namespace clone DV involved is sketched below.
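
For context, the error stack above goes through doCrossNamespaceClone, i.e. the clone path that stamps the short-lived cdi.kubevirt.io/storage.clone.token annotation. A sketch of such a DataVolume built with the CDI Go API types; the source PVC namespace/name and the storage class are hypothetical, while the target name/namespace are taken from the attached logs:
```
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"

	cdiv1 "kubevirt.io/containerized-data-importer-api/pkg/apis/core/v1beta1"
)

func main() {
	targetSC := "other-sc" // hypothetical: not the storage class of the source PVC
	dv := cdiv1.DataVolume{
		TypeMeta:   metav1.TypeMeta{APIVersion: "cdi.kubevirt.io/v1beta1", Kind: "DataVolume"},
		ObjectMeta: metav1.ObjectMeta{Name: "affected-vm-1-rootdisk", Namespace: "mongodb"},
		Spec: cdiv1.DataVolumeSpec{
			// Cloning a PVC that lives in another namespace is what requires
			// the 5-minute cdi.kubevirt.io/storage.clone.token annotation.
			Source: &cdiv1.DataVolumeSource{
				PVC: &cdiv1.DataVolumeSourcePVC{Namespace: "golden-images", Name: "rhel-root"}, // hypothetical
			},
			Storage: &cdiv1.StorageSpec{StorageClassName: &targetSC},
		},
	}
	out, _ := yaml.Marshal(dv)
	fmt.Print(string(out))
}
```
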

Attached to the case are a CNV must-gather as well as an OpenShift must-gather from the reproduction.

