Bug 1897351 - [Tracking bug 1910288] Capacity limit usability concerns
Summary: [Tracking bug 1910288] Capacity limit usability concerns
Keywords:
Status: CLOSED COMPLETED
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: ceph
Version: 4.5
Hardware: Unspecified
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Scott Ostapovicz
QA Contact: Elad
URL:
Whiteboard:
Duplicates: 1893528 (view as bug list)
Depends On: 1910288 1910272 1910289
Blocks: 1810525
 
Reported: 2020-11-12 20:48 UTC by Jenifer Abrams
Modified: 2023-08-09 16:37 UTC
CC List: 32 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-05-24 06:29:33 UTC
Embargoed:


Attachments
must_gather for comment#28 (11.52 MB, application/gzip)
2021-01-08 13:41 UTC, krishnaram Karthick
no flags

Description Jenifer Abrams 2020-11-12 20:48:44 UTC
Description of problem (please be as detailed as possible and provide log snippets):

Our team has significant concerns about OCS usability regarding the 85% capacity limit that could easily lead to critical customer situations. Our concerns are twofold: 1) the ability (or lack thereof) to recover from read-only mode, and 2) how a cluster administrator would keep track of available storage to prevent hitting this limit.

To our knowledge, once this 85% limit is reached, it is extremely difficult to recover the OCS cluster, since deleting PVCs requires a write operation, which would be blocked (as per: https://bugzilla.redhat.com/show_bug.cgi?id=1867593#c13). Note that in this case we are not talking about etcd or any cluster components using OCS, just provisioning OCS PVCs for normal workloads (pods/VMs). This is further exacerbated by the fact that uneven allocation may occur, leading to a small subset of OSDs filling much earlier than the rest and triggering the 85% limit even though the storage cluster still has much more storage available, as described in Bug 1896959.
We have unsuccessfully attempted to recover a cluster after hitting the limit by: 1) adding more storage -- this does not help PVCs already provisioned on the full OSD(s), 2) deleting PVCs and manually removing finalizers -- unresponsive due to the read-only lock, 3) deleting PVCs from the UI -- they are no longer shown in the UI, but the pods were still running and could not be killed.

The second concern is how an administrator would keep track of available capacity. The Ceph alerts are based on *usage*, however OCS will easily overprovision PVCs with *requested* capacity well beyond this limit. While there are some proposals to provide thick provisioning options, the default thinp mode leads to a situation where PVC usage can grow over time up to the requested capacity. We need some API/mechanism that keeps track of total *requests* and potentially also factors in the capacity of the most-filled OSD. In what scenario would it be a good idea to provision thinp PVCs with total requests higher than the OCS limit, without at least an option to easily recover (i.e. PVC deletions) once the limit is reached?

Related bugs/logs:
BZ 1896959 - PG autoscaler did not respond to storage pool consuming space
BZ 1867593 - Clone of large DV remains stuck at a certain point in progress


Version of all relevant components (if applicable):
$ oc -n openshift-storage get csv
NAME                  DISPLAY                       VERSION   REPLACES   PHASE
ocs-operator.v4.5.2   OpenShift Container Storage   4.5.2                Succeeded


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
This has caused delays in performance work for a critical CNV/OCS customer engagement where we are testing deployment storms, because the cluster is nearly impossible to recover once the limit is reached on an OSD.

Is there any workaround available to the best of your knowledge?
It appears an administrator would need to manually keep track of total PVC requests. However, there is still the problem of uneven distribution, meaning an admin would need to get into the low-level Ceph tools to view per-OSD usage, and potentially have to kill/stop workloads that happened to grow too much on a too-full OSD.

Looking at the OCP console, I don’t see a view of per OSD usage/capacity, or an easy way to get a total of PVC request/capacity sizes. 
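
For the per-OSD view, the closest thing today seems to be the rook-ceph toolbox; a minimal sketch, assuming the toolbox deployment is enabled in openshift-storage:

$ oc -n openshift-storage rsh deploy/rook-ceph-tools
sh-4.4# ceph osd df        # the %USE column gives per-OSD utilization
sh-4.4# ceph df detail     # per-pool stored vs. available capacity

This still only reports *used* capacity, not requested capacity, so it does not address the overcommit-tracking concern.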

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
1 - very simple to hit this issue

Is this issue reproducible?
Yes, provision PVCs with total requests beyond the ceph limit and grow the storage space per thinp PVC.

Can this issue reproduce from the UI?
Yes -- any time thinp PVCs are provisioned with total requested storage beyond the limit, there is a risk of hitting this.

If this is a regression, please provide more details to justify this:
Not a regression

Steps to Reproduce:
1. Provision PVCs where total request capacity is beyond the limit
2. Start filling the storage for each PVC
Example testcase:
Create many VMs using DataVolumes of size 40Gi each, well beyond the storage limit: http://perf148h.perf.lab.eng.bos.redhat.com/share/storm_rhel_template_blk.yaml
The DataVolume clones will all hang during CloneInProgress as the thinp PVCs start to grow.

Actual results:
Cluster became read-only and had to be reinstalled. 

Expected results:
Users can clearly tell *before* thinp PVCs grow how much capacity is available in terms of current requests, and users have a self-service recovery method once the 85% limit is reached, e.g. deleting PVCs or adding more physical storage to the cluster.

Comment 5 Dan Kenigsberg 2020-11-12 21:42:35 UTC
Our intended customer has a fixed-sized cluster running on a single rack, exposing all available local storage in advance. If they run into this situation, I don't think that adding physical capacity is an option - they would not want to add 3 out-of-rack nodes to revive their cluster. They would prefer to delete a couple of PVs and restore sanity.

Comment 7 Scott Ostapovicz 2020-11-16 13:26:14 UTC
Need to confirm if this is related to PG updates.  If so then this may be a regression. See ticket https://bugzilla.redhat.com/show_bug.cgi?id=1782756 which is a duplicate of bug 1797918.

Comment 15 Natalie Gavrielov 2020-11-25 13:56:54 UTC
*** Bug 1893528 has been marked as a duplicate of this bug. ***

Comment 16 Chris Blum 2020-11-30 13:03:56 UTC
So the two concerns were:

1) Lack of self-service recovery of a read-only (Ceph FULL_OSD) scenario

2) Lack of tracking the cluster used-capacity and preventing reaching the full ratio in the Ceph cluster

Solutions:

1) For a self-service restore, one can follow this procedure: https://access.redhat.com/solutions/3001761 but we advise people to contact support, since this will need Toolbox interaction

2) Before the cluster reaches the full ratio, it reaches the nearfull ratio - which is a warning sign that the cluster admin should take action.
   To watch the cluster fill state, one can use the OCS Dashboard, which provides the overall available capacity. The nearfull and full ratios are calculated on a per-OSD basis, so to watch this for each particular OSD, one can use the Monitoring -> Metrics page with the query "ceph_osd_stat_bytes_used/ceph_osd_stat_bytes" - that will yield the percentage of used capacity for every single OSD in the Ceph cluster.

As a next step, I think we should make the recovery of full-ratio situations easier and document how one can watch the per-OSD capacity distribution - do we all agree?
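
For reference, the same per-OSD query can also be run from the command line instead of the Metrics page; a minimal sketch, assuming the standard thanos-querier route in openshift-monitoring is reachable and your user is allowed to query it:

$ HOST=$(oc -n openshift-monitoring get route thanos-querier -o jsonpath='{.spec.host}')
$ curl -skG "https://${HOST}/api/v1/query" \
       -H "Authorization: Bearer $(oc whoami -t)" \
       --data-urlencode 'query=ceph_osd_stat_bytes_used/ceph_osd_stat_bytes'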

Comment 17 Yaniv Kaul 2020-11-30 13:21:06 UTC
Additional items to consider (when reducing the severity):
1. This is mostly applicable to CNV - while it could happen to any OCP workloads, most of what we've seen thus far are short-lived (and generally small) PVs. Moving to thick provisioning should not only help performance a bit, but also improve this situation, as PVs won't continuously expand.
2. Compression - just a band aid, but will make it less likely (again, for some workloads) to hit this case.

Comment 18 Fabian Deutsch 2020-11-30 13:39:28 UTC
Chris, does a Prometheus alert exist for the nearfull and full ratios?
If not, let's please create one. Once it exists: can we label it to show how system-critical this alert is, and also integrate it nicely into the UI?
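
Whether such alerts already ship can be checked from the CLI; a rough sketch, assuming OCS publishes its alert definitions as PrometheusRule objects in openshift-storage:

$ oc -n openshift-storage get prometheusrules
$ oc -n openshift-storage get prometheusrules -o yaml | grep -iE 'nearfull|full' -B2 -A6

If nothing matches, a new rule built on the ceph_osd_stat_bytes_used/ceph_osd_stat_bytes ratio from comment 16 would be the natural starting point.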

Comment 20 Jenifer Abrams 2020-11-30 17:06:07 UTC
(In reply to Chris Blum from comment #16)
> So the two concerns were:
> 
> 1) Lack of self-service recovery of a read-only (Ceph FULL_OSD) scenario
> 
> 2) Lack of tracking the cluster used-capacity and preventing reaching the
> full ratio in the Ceph cluster
> 
> Solutions:
> 
> 1) For a self-service restore, one can follow this procedure:
> https://access.redhat.com/solutions/3001761 but we advice people to contact
> support, since this will need Toolbox interaction
> 
> 2) Before the cluster reaches the full ratio, it reaches the nearfull ratio
> - which is a warning sign that the cluster admin should take action.
>    To watch the cluster fill state one can use the OCS Dashboard which will
> provide the overall available capacity. The nearfull and full ratios are
> calculated one a OSD basis, thus to watch this for each particular OSD, one
> can use the Monitoring -> Metrics page with the query
> "ceph_osd_stat_bytes_used/ceph_osd_stat_bytes" - that will yield the
> percentage of used capacity for every single OSD of the Ceph cluster.
> 
> As next step I can see that we should make the recovery of full ratio
> situations easier and document how one can watch a per-OSD capacity
> distribution - do we all agree?

For 2), I agree a dashboard or documented metric view of the per-OSD capacity distribution would help. But the larger concern remains the lack of accounting for total *requested* capacity (not just used capacity).

Comment 21 Jenifer Abrams 2020-11-30 17:08:51 UTC
(In reply to Yaniv Kaul from comment #17)
> Additional items to consider (when reducing the severity):
> 1. This is mostly applicable to CNV - while it could happen to any OCP
> workloads, most of what we've seen thus far are short lived PVs (and
> generally small). Moving to thick provisioning should not only help with
> performance a bit, but also improve this situation a bit - as PVs won't
> continuously expand.

Agreed, CNV does seem the more likely use case to hit these types of problems, since VMs tend to be larger and longer-lived. A thickp option would be very nice, although I would still be concerned about the default thinp behavior.

> 2. Compression - just a band aid, but will make it less likely (again, for
> some workloads) to hit this case.

Can you expand on this a bit?

Comment 22 Scott Ostapovicz 2020-12-01 09:51:05 UTC
As noted in comment 12, this is working as designed.  There is already an RHCS ticket tracking proper reporting when capacity reaches 75%: https://bugzilla.redhat.com/show_bug.cgi?id=1848798. As such, I am going to make this ticket a tracker of that one, and hopefully the ongoing design discussion will be captured and continued elsewhere.

Comment 24 Yaniv Kaul 2020-12-03 14:47:05 UTC
Michael - I think we need an additional BZ here - when we enter read-only mode, the user cannot delete PVs to free up space. We need to support this.

Comment 25 Orit Wasserman 2020-12-03 15:11:46 UTC
After discussions with Josh and Jason:
There is a flag that allows clients to delete data when in the full state:
the flag is "CEPH_OSD_FLAG_FULL_TRY"; the OSD can still prevent the ops if the pool is flagged as full or if
you are past the failsafe ratio (97%).
For RBD, we would also need to add the "FULL_TRY" flag in the MGR's "rbd_support" tasks for image deletion.
We will need a BZ for the MGR change and a BZ for the CSI change.

Comment 27 Orit Wasserman 2020-12-23 14:40:32 UTC
(In reply to Orit Wasserman from comment #25)
> After discussions with the Josh and Jason. 
> There is a flag that allow the clients to delete data when in full state:
> The flag is "CEPH_OSD_FLAG_FULL_TRY", the OSD can still prevent the ops if
> the pool is flagged as full or if
> you are passed the failsafe ratio (97%).
> For RBD, we would also need to add "FULL_TRY" flag in the MGR's
> "rbd_support" tasks for image deletion.
> We will need a BZ for the MGR change and a BZ for the CSI change.

Related BZs:
https://bugzilla.redhat.com/show_bug.cgi?id=1810525
https://bugzilla.redhat.com/show_bug.cgi?id=1910272
https://bugzilla.redhat.com/show_bug.cgi?id=1910288
https://bugzilla.redhat.com/show_bug.cgi?id=1910289

Comment 29 krishnaram Karthick 2021-01-08 13:41:09 UTC
Created attachment 1745588 [details]
must_gather for comment#28

Comment 31 Ben England 2021-01-11 16:28:07 UTC
customer 0 is hitting this problem again, raising priority to high.

Comment 35 krishnaram Karthick 2021-01-29 16:42:24 UTC
This time I reran the test with these steps

1) Filled up the storage so full ratio is hit
2) Added OSDs - Although OSDs were added, rebalance had not started. I waited for ~30 minutes
3) Changed full ratio from 0.85 to 0.9
4) Rebalance started and proceeded, but it has been very slow. 

So the full ratio had to be changed before rebalance could happen. And since the recovery is slow, isn't it highly likely that the full ratio will be hit again if new apps write data?
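
For reference, the full-ratio change in step 3 would typically be done from the toolbox with something along these lines (a sketch, not necessarily the exact commands used):

sh-4.4# ceph osd set-full-ratio 0.9
sh-4.4# ceph osd dump | grep ratio     # confirm the new full_ratio alongside backfillfull_ratio / nearfull_ratio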

AFTER STEP 1
=============
sh-4.4#  ceph -s
  cluster:
    id:     7f90c148-e217-4ac7-a284-ee8cc76e9026
    health: HEALTH_ERR
            3 full osd(s)
            3 pool(s) full
 
  services:
    mon: 3 daemons, quorum a,b,c (age 77m)
    mgr: a(active, since 77m)
    mds: ocs-storagecluster-cephfilesystem:1 {0=ocs-storagecluster-cephfilesystem-a=up:active} 1 up:standby-replay
    osd: 3 osds: 3 up (since 76m), 3 in (since 76m)
 
  task status:
    scrub status:
        mds.ocs-storagecluster-cephfilesystem-a: idle
        mds.ocs-storagecluster-cephfilesystem-b: idle
 
  data:
    pools:   3 pools, 192 pgs
    objects: 112.20k objects, 437 GiB
    usage:   1.3 TiB used, 225 GiB / 1.5 TiB avail
    pgs:     192 active+clean
 
  io:
    client:   1.2 KiB/s rd, 2 op/s rd, 0 op/s wr


AFTER STEP 2
============
ceph -s
  cluster:
    id:     7f90c148-e217-4ac7-a284-ee8cc76e9026
    health: HEALTH_ERR
            1 OSDs or CRUSH {nodes, device-classes} have {NOUP,NODOWN,NOIN,NOOUT} flags set
            3 full osd(s)
            3 pool(s) full
            Degraded data redundancy: 140398/336615 objects degraded (41.709%), 124 pgs degraded, 112 pgs undersized
            Full OSDs blocking recovery: 126 pgs recovery_toofull
 
  services:
    mon: 3 daemons, quorum a,b,c (age 83m)
    mgr: a(active, since 83m)
    mds: ocs-storagecluster-cephfilesystem:1 {0=ocs-storagecluster-cephfilesystem-a=up:active} 1 up:standby-replay
    osd: 6 osds: 6 up (since 4m), 6 in (since 4m); 114 remapped pgs
 
  task status:
    scrub status:
        mds.ocs-storagecluster-cephfilesystem-a: idle
        mds.ocs-storagecluster-cephfilesystem-b: idle
 
  data:
    pools:   3 pools, 192 pgs
    objects: 112.20k objects, 437 GiB
    usage:   1.3 TiB used, 1.7 TiB / 3 TiB avail
    pgs:     140398/336615 objects degraded (41.709%)
             30431/336615 objects misplaced (9.040%)
             112 active+recovery_toofull+undersized+degraded+remapped
             66  active+clean
             12  active+recovery_toofull+degraded
             2   active+recovery_toofull+remapped
 
  io:
    client:   1.2 KiB/s rd, 2 op/s rd, 0 op/s wr
 
AFTER STEP 3
=============

[kramdoss@kramdoss ~]$ oc rsh -n openshift-storage rook-ceph-tools-b87566df7-cv59m
sh-4.4# 
sh-4.4# 
sh-4.4# ceph -s
  cluster:
    id:     7f90c148-e217-4ac7-a284-ee8cc76e9026
    health: HEALTH_ERR
            1 OSDs or CRUSH {nodes, device-classes} have {NOUP,NODOWN,NOIN,NOOUT} flags set
            3 full osd(s)
            3 pool(s) full
            Degraded data redundancy: 140398/336615 objects degraded (41.709%), 124 pgs degraded, 112 pgs undersized
            Full OSDs blocking recovery: 126 pgs recovery_toofull
 
  services:
    mon: 3 daemons, quorum a,b,c (age 102m)
    mgr: a(active, since 102m)
    mds: ocs-storagecluster-cephfilesystem:1 {0=ocs-storagecluster-cephfilesystem-a=up:active} 1 up:standby-replay
    osd: 6 osds: 6 up (since 23m), 6 in (since 23m); 114 remapped pgs
 
  task status:
    scrub status:
        mds.ocs-storagecluster-cephfilesystem-a: idle
        mds.ocs-storagecluster-cephfilesystem-b: idle
 
  data:
    pools:   3 pools, 192 pgs
    objects: 112.20k objects, 437 GiB
    usage:   1.3 TiB used, 1.7 TiB / 3 TiB avail
    pgs:     140398/336615 objects degraded (41.709%)
             30431/336615 objects misplaced (9.040%)
             112 active+recovery_toofull+undersized+degraded+remapped
             66  active+clean
             12  active+recovery_toofull+degraded
             2   active+recovery_toofull+remapped
 
  io:
    client:   852 B/s rd, 1 op/s rd, 0 op/s wr

AFTER STEP 4
========

ceph -s
  cluster:
    id:     7f90c148-e217-4ac7-a284-ee8cc76e9026
    health: HEALTH_WARN
            1 backfillfull osd(s)
            1 nearfull osd(s)
            3 pool(s) backfillfull
            Degraded data redundancy: 67902/453807 objects degraded (14.963%), 55 pgs degraded, 55 pgs undersized
 
  services:
    mon: 3 daemons, quorum a,b,c (age 3h)
    mgr: a(active, since 3h)
    mds: ocs-storagecluster-cephfilesystem:1 {0=ocs-storagecluster-cephfilesystem-a=up:active} 1 up:standby-replay
    osd: 6 osds: 6 up (since 2h), 6 in (since 2h); 57 remapped pgs
 
  task status:
    scrub status:
        mds.ocs-storagecluster-cephfilesystem-a: idle
        mds.ocs-storagecluster-cephfilesystem-b: idle
 
  data:
    pools:   3 pools, 192 pgs
    objects: 151.27k objects, 588 GiB
    usage:   1.7 TiB used, 1.3 TiB / 3 TiB avail
    pgs:     67902/453807 objects degraded (14.963%)
             14024/453807 objects misplaced (3.090%)
             135 active+clean
             54  active+recovery_wait+undersized+degraded+remapped
             2   active+recovery_wait+remapped
             1   active+recovering+undersized+degraded+remapped
 
  io:
    client:   852 B/s rd, 149 KiB/s wr, 1 op/s rd, 2 op/s wr
    recovery: 41 MiB/s, 10 objects/s

Comment 40 Travis Nielsen 2021-02-10 22:04:53 UTC
Is this a fair summary?

1. There are already UI/prometheus alerts when the cluster is filling or critical alerts when full.
2. When the cluster is full, there is a documented workaround to get the cluster working again, including allowing the cluster to be expanded or PVs to be deleted. However, this is a tedious/advanced workaround.
3. Ceph needs to allow deleting PVs even when the cluster is full. 
4. Thick provisioning is (currently) only possible through filling up the volume (writing 0s) at PV creation time. This is a very slow operation. Ceph is really only designed for thin provisioning.
5. Compression will help delay the time to fill a cluster for some workloads.

Long term, 3 is the obvious fix for any cluster that fills up, but this fix will land no earlier than RHCS 5.1 (OCS 4.9).

In the short term, reducing the pain of the workaround in 2 is very feasible. The following are tracking this improvement:
https://issues.redhat.com/browse/RHSTOR-1621
https://docs.google.com/document/d/13gtxf2qPPZK2Ndt84lTqb_JjUnTIub8Vj3w4qop26zA/edit

Thick provisioning has at least two approaches:
- The CSI driver would fill up the volume upon creation. 
- The application would fill up its own volume.
Neither of these seems like a reasonable approach. They will reserve the space, but they are very slow to initialize. Is there another, more realistic solution for thick provisioning?
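
For illustration only, both approaches above boil down to writing the volume end to end once so that the thin RBD image is fully allocated up front; a hypothetical example from inside a pod with a block-mode PVC attached (the device path is made up):

sh-4.4$ dd if=/dev/zero of=/dev/block-pvc bs=4M oflag=direct status=progress

which is exactly the slow, bandwidth-hungry initialization described in point 4.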

Comment 42 Scott Ostapovicz 2021-02-16 15:51:03 UTC
The ticket is intended to be a tracker for the larger issue, and as noted in the ticket, it is working as designed.  Thick provisioning is being suggested as a temporary solution to this, but that thick provisioning would be a CSI-level thing, NOT a Ceph-level thing.

Note: There is a RADOS level Ceph version of “thick provisioning” being proposed for Ceph 5.1  https://bugzilla.redhat.com/show_bug.cgi?id=1924129 , but this may not even make the RHCS 5.0 release (thus OCS 4.9 or later).  

A real solution is also being proposed for 5.1 https://bugzilla.redhat.com/show_bug.cgi?id=1910288 , where the Ceph manager would always service delete requests.  That would also be OCS 4.9 or later.  This is the ticket that should be tracked here.

Comment 43 Jenifer Abrams 2021-03-02 15:41:49 UTC
Looks like the deletion/recovery concerns are well covered by the bugs linked in this tracker and a thickp option would allow some users to ensure usage does not exceed requested capacity.  

However, when using the default thinp behavior, is there a good way to track the total size of PVC requests (not actual usage, which is already covered in the dashboard/alerting)? That way admins will know whether the storage is overcommitted and can make decisions about the potential for usage to grow to the "full" state later.

Comment 44 Niels de Vos 2021-03-02 17:14:48 UTC
(In reply to Jenifer Abrams from comment #43)
> However when using the default thinp behavior, is there a good way to track
> the total size of PVC requests? (not actual usage, which is already covered
> in the dashboard/alerting) That way admins will know if the storage is
> overcommitted or not and could make decisions about the potential to grow
> usage to "full" state later.

Both the PersistentVolumeClaims (in a namespace) and PersistentVolumes (global, not namespaced) have details about the capacity that was requested. Enumerating all PVs provisioned by a StorageClass could give you the information you are looking for. It might be non-trivial to identify which StorageClasses use a certain Ceph cluster, though.
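
A minimal sketch of such an enumeration, assuming the RBD StorageClass is named ocs-storagecluster-ceph-rbd and that all capacities are expressed in Gi (the CephFS class would need to be added the same way):

$ oc get pv -o json \
    | jq -r '.items[]
             | select(.spec.storageClassName == "ocs-storagecluster-ceph-rbd")
             | .spec.capacity.storage' \
    | sed 's/Gi$//' \
    | awk '{ total += $1 } END { print total, "Gi requested in total" }'

Comparing that total against the usable capacity shown on the OCS dashboard would at least show how overcommitted the cluster is.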

Comment 46 Kevin Alon Goldblatt 2021-07-25 12:17:31 UTC
We have encountered this issue again on OpenShift Virtualization with the following versions:
--------------------------------------------
oc version
Client Version: 4.9.0-202107201418.p0.git.dc6ae72.assembly.stream-dc6ae72
Server Version: 4.9.0-0.nightly-2021-07-20-125820
Kubernetes Version: v1.21.1+8268f88

oc get csv --all-namespaces
NAMESPACE                              NAME                                        DISPLAY                       VERSION              REPLACES                                  PHASE
openshift-cnv                          kubevirt-hyperconverged-operator.v4.9.0     OpenShift Virtualization      4.9.0                kubevirt-hyperconverged-operator.v2.6.5   Succeeded
openshift-local-storage                local-storage-operator.4.8.0-202106291913   Local Storage                 4.8.0-202106291913                                             Succeeded
openshift-operator-lifecycle-manager   packageserver                               Package Server                0.18.3                                                         Succeeded
openshift-storage                      ocs-operator.v4.9.0-441.ci                  OpenShift Container Storage   4.9.0-441.ci                                                   Succeeded

Scenario:
-----------
1. Imported a RHEV Rel8 image of 100G to our env
2. ImportInProgress reports 100% but hangs

oc get dv
NAME                                             PHASE              PROGRESS   RESTARTS   AGE
v2v-rhel8-8028ba4a-2d6d-4e05-85c7-782e7cf3d241   ImportInProgress   100.00%               10m

oc get pvc
NAME                                             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
v2v-rhel8-8028ba4a-2d6d-4e05-85c7-782e7cf3d241   Bound    pvc-64be9df7-d897-4eca-bcef-8ec0a9879ce8   106Gi      RWO            ocs-storagecluster-ceph-rbd   10m



3. Deleted the VM >>> The DV is deleted, but the PVC remains in Terminating and the PV status is still Bound.



oc get pvc -A
NAMESPACE                            NAME                                                        STATUS        VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
default                              v2v-rhel8-8028ba4a-2d6d-4e05-85c7-782e7cf3d241              Terminating   pvc-64be9df7-d897-4eca-bcef-8ec0a9879ce8   106Gi      RWO            ocs-storagecluster-ceph-rbd   2d20h


oc get pv -A
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                                                  STORAGECLASS                  REASON   AGE
pvc-64be9df7-d897-4eca-bcef-8ec0a9879ce8   106Gi      RWO            Delete           Bound       default/v2v-rhel8-8028ba4a-2d6d-4e05-85c7-782e7cf3d241                 ocs-storagecluster-ceph-rbd            2d20h

 


RookCephTools Reports:
----------------------------
oc rsh -n openshift-storage $TOOLS_POD
sh-4.4# ceph status
  cluster:
    id:     b122a986-c71a-4379-b917-d913a10c1868
    health: HEALTH_ERR
            3 full osd(s)
            10 pool(s) full
            Degraded data redundancy: 1/51684 objects degraded (0.002%), 1 pg degraded
            Full OSDs blocking recovery: 1 pg recovery_toofull
 
  services:
    mon: 3 daemons, quorum a,b,c (age 11h)
    mgr: a(active, since 11h)
    mds: ocs-storagecluster-cephfilesystem:1 {0=ocs-storagecluster-cephfilesystem-b=up:active} 1 up:standby-replay
    osd: 3 osds: 3 up (since 11h), 3 in (since 4d)
 
  data:
    pools:   10 pools, 272 pgs
    objects: 17.23k objects, 59 GiB
    usage:   179 GiB used, 31 GiB / 210 GiB avail
    pgs:     1/51684 objects degraded (0.002%)
             271 active+clean
             1   active+recovery_toofull+degraded
 
  io:
    client:   851 B/s rd, 1 op/s rd, 0 op/s wr
 



 oc logs importer-v2v-rhel8-8028ba4a-2d6d-4e05-85c7-782e7cf3d241
I0722 15:39:55.081184       1 importer.go:52] Starting importer
I0722 15:39:55.082561       1 importer.go:135] begin import process
I0722 15:39:56.029164       1 http-datasource.go:249] Attempting to get certs from /certs/ca.pem
I0722 15:39:56.069509       1 data-processor.go:323] Calculating available size
I0722 15:39:56.070999       1 data-processor.go:331] Checking out block volume size.
I0722 15:39:56.071018       1 data-processor.go:343] Request image size not empty.
I0722 15:39:56.071035       1 data-processor.go:348] Target size 113623473664.
I0722 15:39:56.071171       1 data-processor.go:231] New phase: TransferDataFile
I0722 15:39:56.072059       1 util.go:168] Writing data...
I0722 15:39:57.071510       1 prometheus.go:69] 15.70
I0722 15:39:58.072246       1 prometheus.go:69] 25.13
I0722 15:39:59.072529       1 prometheus.go:69] 32.21
I0722 15:40:00.078121       1 prometheus.go:69] 40.17
I0722 15:40:01.078237       1 prometheus.go:69] 46.22
I0722 15:40:02.078453       1 prometheus.go:69] 53.15
I0722 15:40:03.078530       1 prometheus.go:69] 59.91
I0722 15:40:04.078642       1 prometheus.go:69] 67.21
I0722 15:40:05.078845       1 prometheus.go:69] 74.44
I0722 15:40:06.078953       1 prometheus.go:69] 82.59
I0722 15:40:07.079416       1 prometheus.go:69] 88.97
I0722 15:40:08.079841       1 prometheus.go:69] 95.18
I0722 15:40:09.080380       1 prometheus.go:69] 100.00
I0722 15:40:10.081233       1 prometheus.go:69] 100.00
I0722 15:40:11.081327       1 prometheus.go:69] 100.00
I0722 15:40:12.081595       1 prometheus.go:69] 100.00
I0722 15:40:13.081787       1 prometheus.go:69] 100.00
I0722 15:40:14.081908       1 prometheus.go:69] 100.00
I0722 15:40:15.082040       1 prometheus.go:69] 100.00
I0722 15:40:16.082940       1 prometheus.go:69] 100.00
I0722 15:40:17.083196       1 prometheus.go:69] 100.00
I0722 15:40:18.083335       1 prometheus.go:69] 100.00
I0722 15:40:19.083520       1 prometheus.go:69] 100.00
I0722 15:40:20.083699       1 prometheus.go:69] 100.00
I0722 15:40:21.083904       1 prometheus.go:69] 100.00
I0722 15:40:22.084076       1 prometheus.go:69] 100.00
I0722 15:40:23.084224       1 prometheus.go:69] 100.00
I0722 15:40:24.084468       1 prometheus.go:69] 100.00
I0722 15:40:25.084634       1 prometheus.go:69] 100.00
I0722 15:40:26.085140       1 prometheus.go:69] 100.00
I0722 15:40:27.085891       1 prometheus.go:69] 100.00
I0722 15:40:28.086887       1 prometheus.go:69] 100.00
I0722 15:40:29.086966       1 prometheus.go:69] 100.00
I0722 15:40:30.087077       1 prometheus.go:69] 100.00
I0722 15:40:31.087875       1 prometheus.go:69] 100.00
I0722 15:40:32.088044       1 prometheus.go:69] 100.00
I0722 15:40:33.093891       1 prometheus.go:69] 100.00
I0722 15:40:34.288662       1 prometheus.go:69] 100.00
I0722 15:40:35.288880       1 prometheus.go:69] 100.00
I0722 15:40:36.289004       1 prometheus.go:69] 100.00
I0722 15:40:37.289443       1 prometheus.go:69] 100.00
I0722 15:40:38.289603       1 prometheus.go:69] 100.00
I0722 15:40:39.289763       1 prometheus.go:69] 100.00
I0722 15:40:40.290582       1 prometheus.go:69] 100.00 oc get dv
NAME                                             PHASE              PROGRESS   RESTARTS   AGE
leon-four-nine-disk-0-mnjnd                      Succeeded          100.0%                7h2m
leon-four-nine-disk-1-x02q7                      Succeeded          100.0%                7h2m
v2v-rhel8-8028ba4a-2d6d-4e05-85c7-782e7cf3d241   ImportInProgress   100.00%               10m
[cnv-qe-jenkins@stg07-ying-kp8wh-executor ~]$  oc get pvc

I0722 15:40:41.290951       1 prometheus.go:69] 100.00
I0722 15:40:42.291221       1 prometheus.go:69] 100.00
I0722 15:40:43.292013       1 prometheus.go:69] 100.00
I0722 15:40:44.292804       1 prometheus.go:69] 100.00
I0722 15:40:45.292957       1 prometheus.go:69] 100.00
I0722 15:40:46.293234       1 prometheus.go:69] 100.00
I0722 15:40:47.295185       1 prometheus.go:69] 100.00
I0722 15:40:48.295545       1 prometheus.go:69] 100.00
I0722 15:40:49.301116       1 prometheus.go:69] 100.00
I0722 15:40:50.301271       1 prometheus.go:69] 100.00
I0722 15:40:51.301542       1 prometheus.go:69] 100.00
I0722 15:40:52.301704       1 prometheus.go:69] 100.00
I0722 15:40:53.301799       1 prometheus.go:69] 100.00
I0722 15:40:54.301937       1 prometheus.go:69] 100.00
I0722 15:40:55.302155       1 prometheus.go:69] 100.00
I0722 15:40:56.302310       1 prometheus.go:69] 100.00
I0722 15:40:57.313001       1 prometheus.go:69] 100.00
I0722 15:40:58.313154       1 prometheus.go:69] 100.00
I0722 15:40:59.313311       1 prometheus.go:69] 100.00
I0722 15:41:00.317084       1 prometheus.go:69] 100.00
I0722 15:41:01.317403       1 prometheus.go:69] 100.00
I0722 15:41:02.317992       1 prometheus.go:69] 100.00
I0722 15:41:03.318133       1 prometheus.go:69] 100.00
I0722 15:41:04.319363       1 prometheus.go:69] 100.00
I0722 15:41:05.319581       1 prometheus.go:69] 100.00
I0722 15:41:06.319910       1 prometheus.go:69] 100.00
I0722 15:41:07.320053       1 prometheus.go:69] 100.00
I0722 15:41:08.320261       1 prometheus.go:69] 100.00
I0722 15:41:09.320420       1 prometheus.go:69] 100.00
I0722 15:41:10.323127       1 prometheus.go:69] 100.00
I0722 15:41:11.323603       1 prometheus.go:69] 100.00 oc get dv
NAME                                             PHASE              PROGRESS   RESTARTS   AGE
leon-four-nine-disk-0-mnjnd                      Succeeded          100.0%                7h2m
leon-four-nine-disk-1-x02q7                      Succeeded          100.0%                7h2m
v2v-rhel8-8028ba4a-2d6d-4e05-85c7-782e7cf3d241   ImportInProgress   100.00%               10m
[cnv-qe-jenkins@stg07-ying-kp8wh-executor ~]$  oc get pvc

I0722 15:41:12.920201       1 prometheus.go:69] 100.00
I0722 15:41:13.920337       1 prometheus.go:69] 100.00
I0722 15:41:14.923441       1 prometheus.go:69] 100.00
I0722 15:41:15.923531       1 prometheus.go:69] 100.00
I0722 15:41:16.923751       1 prometheus.go:69] 100.00
I0722 15:41:17.924628       1 prometheus.go:69] 100.00
I0722 15:41:18.925469       1 prometheus.go:69] 100.00
I0722 15:41:19.925678       1 prometheus.go:69] 100.00
I0722 15:41:20.925893       1 prometheus.go:69] 100.00
I0722 15:41:21.926808       1 prometheus.go:69] 100.00
I0722 15:41:22.926900       1 prometheus.go:69] 100.00
I0722 15:41:23.927022       1 prometheus.go:69] 100.00
I0722 15:41:25.302952       1 prometheus.go:69] 100.00
I0722 15:41:26.303286       1 prometheus.go:69] 100.00
I0722 15:41:27.303509       1 prometheus.go:69] 100.00
I0722 15:41:28.304332       1 prometheus.go:69] 100.00
I0722 15:41:29.312974       1 prometheus.go:69] 100.00
I0722 15:41:30.313660       1 prometheus.go:69] 100.00
I0722 15:41:31.313766       1 prometheus.go:69] 100.00
I0722 15:41:32.313894       1 prometheus.go:69] 100.00
I0722 15:41:33.314193       1 prometheus.go:69] 100.00
I0722 15:41:34.314437       1 prometheus.go:69] 100.00
I0722 15:41:35.314679       1 prometheus.go:69] 100.00
I0722 15:41:36.492763       1 prometheus.go:69] 100.00
I0722 15:41:37.492977       1 prometheus.go:69] 100.00
I0722 15:41:38.493160       1 prometheus.go:69] 100.00
I0722 15:41:39.493321       1 prometheus.go:69] 100.00
I0722 15:41:40.493501       1 prometheus.go:69] 100.00
I0722 15:41:41.493669       1 prometheus.go:69] 100.00
I0722 15:41:42.493881       1 prometheus.go:69] 100.00
I0722 15:41:43.494008       1 prometheus.go:69] 100.00
I0722 15:41:44.494217       1 prometheus.go:69] 100.00
I0722 15:41:45.494395       1 prometheus.go:69] 100.00
I0722 15:41:46.495594       1 prometheus.go:69] 100.00
I0722 15:41:47.496220       1 prometheus.go:69] 100.00
I0722 15:41:48.611423       1 prometheus.go:69] 100.00
I0722 15:41:49.612615       1 prometheus.go:69] 100.00
I0722 15:41:50.612896       1 prometheus.go:69] 100.00
I0722 15:41:51.612986       1 prometheus.go:69] 100.00
I0722 15:41:52.614283       1 prometheus.go:69] 100.00
I0722 15:41:53.618359       1 prometheus.go:69] 100.00
I0722 15:41:54.621447       1 prometheus.go:69] 100.00
I0722 15:41:55.621877       1 prometheus.go:69] 100.00
I0722 15:41:56.621951       1 prometheus.go:69] 100.00
I0722 15:41:57.622127       1 prometheus.go:69] 100.00
I0722 15:41:58.622314       1 prometheus.go:69] 100.00
I0722 15:41:59.622442       1 prometheus.go:69] 100.00
I0722 15:42:00.622769       1 prometheus.go:69] 100.00
I0722 15:42:01.623027       1 prometheus.go:69] 100.00
I0722 15:42:02.623285       1 prometheus.go:69] 100.00
I0722 15:42:03.623441       1 prometheus.go:69] 100.00
I0722 15:42:04.624124       1 prometheus.go:69] 100.00
I0722 15:42:05.624322       1 prometheus.go:69] 100.00
I0722 15:42:06.624405       1 prometheus.go:69] 100.00
I0722 15:42:07.625299       1 prometheus.go:69] 100.00
I0722 15:42:08.625500       1 prometheus.go:69] 100.00
I0722 15:42:09.625651       1 prometheus.go:69] 100.00
I0722 15:42:10.625899       1 prometheus.go:69] 100.00
I0722 15:42:11.628000       1 prometheus.go:69] 100.00
I0722 15:42:12.628131       1 prometheus.go:69] 100.00
I0722 15:42:13.628316       1 prometheus.go:69] 100.00
I0722 15:42:14.628512       1 prometheus.go:69] 100.00
I0722 15:42:15.628728       1 prometheus.go:69] 100.00
I0722 15:42:16.628967       1 prometheus.go:69] 100.00
I0722 15:42:17.629106       1 prometheus.go:69] 100.00
I0722 15:42:18.629328       1 prometheus.go:69] 100.00
I0722 15:42:19.629478       1 prometheus.go:69] 100.00
I0722 15:42:20.630179       1 prometheus.go:69] 100.00
I0722 15:42:21.630449       1 prometheus.go:69] 100.00
I0722 15:42:22.635722       1 prometheus.go:69] 100.00
I0722 15:42:23.635894       1 prometheus.go:69] 100.00
I0722 15:42:24.635961       1 prometheus.go:69] 100.00
I0722 15:42:25.636065       1 prometheus.go:69] 100.00
I0722 15:42:26.636210       1 prometheus.go:69] 100.00
I0722 15:42:27.636393       1 prometheus.go:69] 100.00 oc get dv
NAME                                             PHASE              PROGRESS   RESTARTS   AGE
leon-four-nine-disk-0-mnjnd                      Succeeded          100.0%                7h2m
leon-four-nine-disk-1-x02q7                      Succeeded          100.0%                7h2m
v2v-rhel8-8028ba4a-2d6d-4e05-85c7-782e7cf3d241   ImportInProgress   100.00%               10m
[cnv-qe-jenkins@stg07-ying-kp8wh-executor ~]$  oc get pvc

I0722 15:42:28.637678       1 prometheus.go:69] 100.00
I0722 15:42:29.637927       1 prometheus.go:69] 100.00
I0722 15:42:30.638436       1 prometheus.go:69] 100.00
I0722 15:42:31.638674       1 prometheus.go:69] 100.00
I0722 15:42:32.638864       1 prometheus.go:69] 100.00
I0722 15:42:33.638929       1 prometheus.go:69] 100.00
I0722 15:42:34.639579       1 prometheus.go:69] 100.00
I0722 15:42:35.639655       1 prometheus.go:69] 100.00
I0722 15:42:36.641554       1 prometheus.go:69] 100.00
I0722 15:42:37.641645       1 prometheus.go:69] 100.00
I0722 15:42:38.641785       1 prometheus.go:69] 100.00
I0722 15:42:39.641963       1 prometheus.go:69] 100.00
I0722 15:42:40.642180       1 prometheus.go:69] 100.00
I0722 15:42:41.642360       1 prometheus.go:69] 100.00
I0722 15:42:42.642534       1 prometheus.go:69] 100.00
I0722 15:42:43.643664       1 prometheus.go:69] 100.00
I0722 15:42:44.643977       1 prometheus.go:69] 100.00
I0722 15:42:45.644051       1 prometheus.go:69] 100.00
I0722 15:42:46.644990       1 prometheus.go:69] 100.00
I0722 15:42:47.645396       1 prometheus.go:69] 100.00
I0722 15:42:48.645640       1 prometheus.go:69] 100.00
I0722 15:42:49.646009       1 prometheus.go:69] 100.00
I0722 15:42:50.646336       1 prometheus.go:69] 100.00
I0722 15:42:51.648298       1 prometheus.go:69] 100.00
I0722 15:42:52.649152       1 prometheus.go:69] 100.00
I0722 15:42:53.649416       1 prometheus.go:69] 100.00
I0722 15:42:54.649723       1 prometheus.go:69] 100.00
I0722 15:42:55.649974       1 prometheus.go:69] 100.00
I0722 15:42:56.650254       1 prometheus.go:69] 100.00
I0722 15:42:57.650556       1 prometheus.go:69] 100.00
I0722 15:42:58.651930       1 prometheus.go:69] 100.00
I0722 15:42:59.652065       1 prometheus.go:69] 100.00
I0722 15:43:00.652165       1 prometheus.go:69] 100.00
I0722 15:43:01.652943       1 prometheus.go:69] 100.00
I0722 15:43:02.653116       1 prometheus.go:69] 100.00
I0722 15:43:03.653491       1 prometheus.go:69] 100.00
I0722 15:43:04.653875       1 prometheus.go:69] 100.00
I0722 15:43:05.654081       1 prometheus.go:69] 100.00
I0722 15:43:06.655495       1 prometheus.go:69] 100.00
I0722 15:43:07.656089       1 prometheus.go:69] 100.00
I0722 15:43:08.656455       1 prometheus.go:69] 100.00
I0722 15:43:09.657007       1 prometheus.go:69] 100.00
I0722 15:43:10.657104       1 prometheus.go:69] 100.00
I0722 15:43:11.657391       1 prometheus.go:69] 100.00
I0722 15:43:12.657683       1 prometheus.go:69] 100.00
I0722 15:43:13.657889       1 prometheus.go:69] 100.00
I0722 15:43:14.658074       1 prometheus.go:69] 100.00
I0722 15:43:15.658195       1 prometheus.go:69] 100.00
I0722 15:43:16.658348       1 prometheus.go:69] 100.00
I0722 15:43:17.658530       1 prometheus.go:69] 100.00
I0722 15:43:18.663848       1 prometheus.go:69] 100.00
I0722 15:43:19.664046       1 prometheus.go:69] 100.00
I0722 15:43:20.664199       1 prometheus.go:69] 100.00
I0722 15:43:21.664366       1 prometheus.go:69] 100.00
I0722 15:43:22.664544       1 prometheus.go:69] 100.00
I0722 15:43:23.665002       1 prometheus.go:69] 100.00
I0722 15:43:24.665504       1 prometheus.go:69] 100.00
I0722 15:43:25.665677       1 prometheus.go:69] 100.00
I0722 15:43:26.666260       1 prometheus.go:69] 100.00
I0722 15:43:27.666504       1 prometheus.go:69] 100.00
I0722 15:43:28.672281       1 prometheus.go:69] 100.00
I0722 15:43:29.672900       1 prometheus.go:69] 100.00
I0722 15:43:30.673105       1 prometheus.go:69] 100.00
I0722 15:43:31.673261       1 prometheus.go:69] 100.00
I0722 15:43:32.673546       1 prometheus.go:69] 100.00
I0722 15:43:33.673695       1 prometheus.go:69] 100.00
I0722 15:43:34.674258       1 prometheus.go:69] 100.00
I0722 15:43:35.674431       1 prometheus.go:69] 100.00
I0722 15:43:36.674756       1 prometheus.go:69] 100.00
I0722 15:43:37.677850       1 prometheus.go:69] 100.00
I0722 15:43:38.678099       1 prometheus.go:69] 100.00
I0722 15:43:39.687484       1 prometheus.go:69] 100.00
I0722 15:43:40.687661       1 prometheus.go:69] 100.00
I0722 15:43:41.687966       1 prometheus.go:69] 100.00
I0722 15:43:42.688568       1 prometheus.go:69] 100.00
I0722 15:43:43.713010       1 prometheus.go:69] 100.00
I0722 15:43:44.713642       1 prometheus.go:69] 100.00
I0722 15:43:45.713834       1 prometheus.go:69] 100.00
I0722 15:43:46.713893       1 prometheus.go:69] 100.00
I0722 15:43:47.714869       1 prometheus.go:69] 100.00
I0722 15:43:48.715726       1 prometheus.go:69] 100.00
I0722 15:43:49.715902       1 prometheus.go:69] 100.00
I0722 15:43:50.716620       1 prometheus.go:69] 100.00
I0722 15:43:51.716728       1 prometheus.go:69] 100.00
I0722 15:43:52.716953       1 prometheus.go:69] 100.00
I0722 15:43:53.717572       1 prometheus.go:69] 100.00
I0722 15:43:54.717963       1 prometheus.go:69] 100.00
I0722 15:43:55.718447       1 prometheus.go:69] 100.00
I0722 15:43:56.730009       1 prometheus.go:69] 100.00
I0722 15:43:57.730192       1 prometheus.go:69] 100.00
I0722 15:43:58.730453       1 prometheus.go:69] 100.00
I0722 15:43:59.731945       1 prometheus.go:69] 100.00
I0722 15:44:00.732591       1 prometheus.go:69] 100.00
I0722 15:44:01.733081       1 prometheus.go:69] 100.00
I0722 15:44:02.733690       1 prometheus.go:69] 100.00
I0722 15:44:03.733867       1 prometheus.go:69] 100.00
I0722 15:44:04.734965       1 prometheus.go:69] 100.00
I0722 15:44:05.736768       1 prometheus.go:69] 100.00
I0722 15:44:06.737586       1 prometheus.go:69] 100.00
I0722 15:44:07.737715       1 prometheus.go:69] 100.00
I0722 15:44:08.738058       1 prometheus.go:69] 100.00
I0722 15:44:09.738369       1 prometheus.go:69] 100.00
I0722 15:44:10.738697       1 prometheus.go:69] 100.00
I0722 15:44:11.740027       1 prometheus.go:69] 100.00
I0722 15:44:12.740200       1 prometheus.go:69] 100.00
I0722 15:44:13.740677       1 prometheus.go:69] 100.00
I0722 15:44:14.741068       1 prometheus.go:69] 100.00
I0722 15:44:15.741211       1 prometheus.go:69] 100.00
I0722 15:44:16.741576       1 prometheus.go:69] 100.00
I0722 15:44:17.741908       1 prometheus.go:69] 100.00
I0722 15:44:18.742809       1 prometheus.go:69] 100.00
I0722 15:44:19.743503       1 prometheus.go:69] 100.00
I0722 15:44:20.743810       1 prometheus.go:69] 100.00
I0722 15:44:21.744761       1 prometheus.go:69] 100.00
I0722 15:44:22.745189       1 prometheus.go:69] 100.00
I0722 15:44:23.746317       1 prometheus.go:69] 100.00
I0722 15:44:24.746468       1 prometheus.go:69] 100.00
I0722 15:44:25.746598       1 prometheus.go:69] 100.00
I0722 15:44:26.747068       1 prometheus.go:69] 100.00
I0722 15:44:27.747272       1 prometheus.go:69] 100.00
I0722 15:44:28.748409       1 prometheus.go:69] 100.00
I0722 15:44:29.750025       1 prometheus.go:69] 100.00
I0722 15:44:30.750531       1 prometheus.go:69] 100.00
I0722 15:44:31.750684       1 prometheus.go:69] 100.00
I0722 15:44:32.750954       1 prometheus.go:69] 100.00
I0722 15:44:33.752113       1 prometheus.go:69] 100.00
I0722 15:44:34.752298       1 prometheus.go:69] 100.00
I0722 15:44:35.752538       1 prometheus.go:69] 100.00
I0722 15:44:36.753543       1 prometheus.go:69] 100.00
I0722 15:44:37.753640       1 prometheus.go:69] 100.00
I0722 15:44:38.754752       1 prometheus.go:69] 100.00
I0722 15:44:39.754941       1 prometheus.go:69] 100.00
I0722 15:44:40.764488       1 prometheus.go:69] 100.00
I0722 15:44:41.764653       1 prometheus.go:69] 100.00
I0722 15:44:42.769108       1 prometheus.go:69] 100.00
I0722 15:44:43.769326       1 prometheus.go:69] 100.00
I0722 15:44:44.769603       1 prometheus.go:69] 100.00
I0722 15:44:45.769949       1 prometheus.go:69] 100.00
I0722 15:44:46.770061       1 prometheus.go:69] 100.00
I0722 15:44:47.772874       1 prometheus.go:69] 100.00
I0722 15:44:48.773014       1 prometheus.go:69] 100.00
I0722 15:44:49.773227       1 prometheus.go:69] 100.00
I0722 15:44:50.773346       1 prometheus.go:69] 100.00
I0722 15:44:51.774351       1 prometheus.go:69] 100.00
I0722 15:44:52.774662       1 prometheus.go:69] 100.00
I0722 15:44:53.775952       1 prometheus.go:69] 100.00
I0722 15:44:54.776371       1 prometheus.go:69] 100.00
I0722 15:44:55.776654       1 prometheus.go:69] 100.00
I0722 15:44:56.776838       1 prometheus.go:69] 100.00
I0722 15:44:57.777240       1 prometheus.go:69] 100.00
I0722 15:44:58.778052       1 prometheus.go:69] 100.00
I0722 15:44:59.778445       1 prometheus.go:69] 100.00
I0722 15:45:00.778658       1 prometheus.go:69] 100.00
I0722 15:45:01.778849       1 prometheus.go:69] 100.00
I0722 15:45:02.779034       1 prometheus.go:69] 100.00
I0722 15:45:03.779932       1 prometheus.go:69] 100.00
I0722 15:45:04.780331       1 prometheus.go:69] 100.00
I0722 15:45:05.780505       1 prometheus.go:69] 100.00
I0722 15:45:06.780891       1 prometheus.go:69] 100.00
I0722 15:45:07.780979       1 prometheus.go:69] 100.00
I0722 15:45:08.781843       1 prometheus.go:69] 100.00
I0722 15:45:09.782330       1 prometheus.go:69] 100.00
I0722 15:45:10.783395       1 prometheus.go:69] 100.00
I0722 15:45:11.784897       1 prometheus.go:69] 100.00
I0722 15:45:12.787224       1 prometheus.go:69] 100.00
I0722 15:45:13.787938       1 prometheus.go:69] 100.00
I0722 15:45:14.788268       1 prometheus.go:69] 100.00
I0722 15:45:15.788412       1 prometheus.go:69] 100.00
I0722 15:45:16.788725       1 prometheus.go:69] 100.00
I0722 15:45:17.812899       1 prometheus.go:69] 100.00
I0722 15:45:18.813110       1 prometheus.go:69] 100.00
I0722 15:45:19.813331       1 prometheus.go:69] 100.00
I0722 15:45:20.813453       1 prometheus.go:69] 100.00
I0722 15:45:21.813645       1 prometheus.go:69] 100.00
I0722 15:45:22.821532       1 prometheus.go:69] 100.00
I0722 15:45:23.822705       1 prometheus.go:69] 100.00
I0722 15:45:24.822969       1 prometheus.go:69] 100.00
I0722 15:45:25.824040       1 prometheus.go:69] 100.00
I0722 15:45:26.824202       1 prometheus.go:69] 100.00
I0722 15:45:27.824377       1 prometheus.go:69] 100.00
I0722 15:45:28.825952       1 prometheus.go:69] 100.00
I0722 15:45:29.826469       1 prometheus.go:69] 100.00
I0722 15:45:30.827545       1 prometheus.go:69] 100.00
I0722 15:45:31.827696       1 prometheus.go:69] 100.00
I0722 15:45:32.828459       1 prometheus.go:69] 100.00
I0722 15:45:33.829853       1 prometheus.go:69] 100.00
I0722 15:45:34.830147       1 prometheus.go:69] 100.00
I0722 15:45:35.830318       1 prometheus.go:69] 100.00
I0722 15:45:36.830925       1 prometheus.go:69] 100.00
I0722 15:45:37.831203       1 prometheus.go:69] 100.00
I0722 15:45:38.831471       1 prometheus.go:69] 100.00
I0722 15:45:39.831736       1 prometheus.go:69] 100.00
I0722 15:45:40.832580       1 prometheus.go:69] 100.00
I0722 15:45:41.832786       1 prometheus.go:69] 100.00
I0722 15:45:42.832988       1 prometheus.go:69] 100.00
I0722 15:45:43.833518       1 prometheus.go:69] 100.00
I0722 15:45:44.834367       1 prometheus.go:69] 100.00
I0722 15:45:45.834709       1 prometheus.go:69] 100.00
I0722 15:45:46.835138       1 prometheus.go:69] 100.00
I0722 15:45:47.835411       1 prometheus.go:69] 100.00
I0722 15:45:48.835594       1 prometheus.go:69] 100.00
I0722 15:45:49.835700       1 prometheus.go:69] 100.00
I0722 15:45:50.835877       1 prometheus.go:69] 100.00
I0722 15:45:51.836162       1 prometheus.go:69] 100.00
I0722 15:45:52.836412       1 prometheus.go:69] 100.00
I0722 15:45:53.836599       1 prometheus.go:69] 100.00
I0722 15:45:54.836716       1 prometheus.go:69] 100.00
I0722 15:45:55.836973       1 prometheus.go:69] 100.00
I0722 15:45:56.837754       1 prometheus.go:69] 100.00
I0722 15:45:57.837885       1 prometheus.go:69] 100.00
I0722 15:45:58.838116       1 prometheus.go:69] 100.00
I0722 15:45:59.838336       1 prometheus.go:69] 100.00
I0722 15:46:00.838910       1 prometheus.go:69] 100.00
I0722 15:46:01.839158       1 prometheus.go:69] 100.00
I0722 15:46:02.839354       1 prometheus.go:69] 100.00
I0722 15:46:03.839615       1 prometheus.go:69] 100.00
I0722 15:46:04.839832       1 prometheus.go:69] 100.00
I0722 15:46:05.840021       1 prometheus.go:69] 100.00
I0722 15:46:06.841114       1 prometheus.go:69] 100.00
I0722 15:46:07.841317       1 prometheus.go:69] 100.00
I0722 15:46:08.841543       1 prometheus.go:69] 100.00

