Bug 1392377 - when multiple PV claim requests are made, heketi ends up creating more volumes than requested
Summary: when multiple PV claim requests are made, heketi ends up creating more volumes...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: heketi
Version: cns-3.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: CNS 3.4
Assignee: Humble Chirammal
QA Contact: krishnaram Karthick
URL:
Whiteboard:
Depends On: 1346621
Blocks: 1385247
 
Reported: 2016-11-07 11:24 UTC by krishnaram Karthick
Modified: 2018-12-06 05:58 UTC
CC List: 17 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-01-18 21:56:47 UTC
Embargoed:


Attachments
heketi_logs_attached (10.95 MB, text/plain)
2016-11-11 05:48 UTC, krishnaram Karthick
heketi_logs_comment23 (18.27 MB, text/plain)
2016-11-15 04:30 UTC, krishnaram Karthick


Links
System ID: Red Hat Product Errata RHEA-2017:0148
Priority: normal
Status: SHIPPED_LIVE
Summary: heketi bug fix and enhancement update
Last Updated: 2017-01-19 02:53:24 UTC

Description krishnaram Karthick 2016-11-07 11:24:38 UTC
Description of problem:
With two trusted storage pools in the CNS cluster, when 100 PV claim requests are made, more than 100 volumes are created in the backend.

Every time a PV claim fails for some reason and is retried, a new volume is created; deletion of the previous volume doesn't seem to take place.

Volume creation has been retried so many times that 2 TB of disk space has been used up, while only 4 PV claims have succeeded.

Version-Release number of selected component (if applicable):
[root@dhcp47-112 ~]# rpm -qa | grep 'openshift'
openshift-ansible-3.4.16-1.git.0.c846018.el7.noarch
openshift-ansible-roles-3.4.16-1.git.0.c846018.el7.noarch
atomic-openshift-3.4.0.19-1.git.0.346a31d.el7.x86_64
atomic-openshift-utils-3.4.16-1.git.0.c846018.el7.noarch
openshift-ansible-docs-3.4.16-1.git.0.c846018.el7.noarch
openshift-ansible-lookup-plugins-3.4.16-1.git.0.c846018.el7.noarch
openshift-ansible-filter-plugins-3.4.16-1.git.0.c846018.el7.noarch
openshift-ansible-playbooks-3.4.16-1.git.0.c846018.el7.noarch
atomic-openshift-clients-3.4.0.19-1.git.0.346a31d.el7.x86_64
atomic-openshift-node-3.4.0.19-1.git.0.346a31d.el7.x86_64
atomic-openshift-master-3.4.0.19-1.git.0.346a31d.el7.x86_64
openshift-ansible-callback-plugins-3.4.16-1.git.0.c846018.el7.noarch
tuned-profiles-atomic-openshift-node-3.4.0.19-1.git.0.346a31d.el7.x86_64
atomic-openshift-sdn-ovs-3.4.0.19-1.git.0.346a31d.el7.x86_64

docker-1.10.3-46.el7.14.x86_64


How reproducible:
1/1

Steps to Reproduce:
1. Create 100 PV claim requests with a gap of 3 seconds between each claim (see the example loop below).
2. Wait for the 100 PVCs to be created.
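
For example, a loop of the form used in the later comments can be used to submit the claims (a sketch; claim1 .. claim100 are assumed to be PVC definition files in the current directory):

    # submit 100 PVC requests, pausing 3 seconds between each one
    for i in {1..100}; do oc create -f claim$i; sleep 3; done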


Actual results:
[root@dhcp47-112 ~]# oc get pvc
NAME       STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
claim1     Bound     pvc-92859582-a1aa-11e6-a39b-005056b3a033   12Gi       RWO           4d
claim10    Pending                                                                       2h
claim100   Pending                                                                       2h
claim11    Pending                                                                       2h
claim12    Pending                                                                       2h
claim13    Pending                                                                       2h
claim14    Pending                                                                       2h
claim15    Pending                                                                       2h
claim16    Pending                                                                       2h
claim17    Pending                                                                       2h
claim18    Pending                                                                       2h
claim19    Pending                                                                       2h
claim20    Pending                                                                       2h
claim21    Pending                                                                       2h
claim22    Pending                                                                       2h
claim23    Pending                                                                       2h
claim24    Pending                                                                       2h
claim25    Pending                                                                       2h
claim26    Pending                                                                       2h
claim27    Pending                                                                       2h
claim28    Pending                                                                       2h
claim29    Pending                                                                       2h
claim3     Bound     pvc-45e234c1-a4c6-11e6-a39b-005056b3a033   5Gi        RWO           2h
claim30    Pending                                                                       2h
claim31    Pending                                                                       2h
claim32    Pending                                                                       2h
claim33    Pending                                                                       2h
claim34    Pending                                                                       2h
claim35    Pending                                                                       2h
claim36    Pending                                                                       2h
claim37    Pending                                                                       2h
claim38    Pending                                                                       2h
claim39    Pending                                                                       2h
claim4     Pending                                                                       2h
claim40    Pending                                                                       2h
claim41    Pending                                                                       2h
claim42    Pending                                                                       2h
claim43    Pending                                                                       2h
claim44    Pending                                                                       2h
claim45    Pending                                                                       2h
claim46    Pending                                                                       2h
claim47    Pending                                                                       2h
claim48    Bound     pvc-aaeb757f-a4c6-11e6-a39b-005056b3a033   5Gi        RWO           2h
claim49    Pending                                                                       2h
claim5     Bound     pvc-56a670c4-a4c6-11e6-a39b-005056b3a033   5Gi        RWO           2h
claim50    Pending                                                                       2h
claim51    Pending                                                                       2h
claim6     Pending                                                                       2h
claim7     Bound     pvc-5a941e26-a4c6-11e6-a39b-005056b3a033   5Gi        RWO           2h
claim8     Pending                                                                       2h
claim9     Pending                                                                       2h



Expected results:
When a PV claim fails for some reason, the volume created by heketi should also be deleted.

Additional info:
Logs will be attached shortly.
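
One way to spot the leaked backend volumes is to compare the heketi volume count with the number of PVs known to OpenShift (a rough sketch, assuming heketi-cli is already configured to reach the heketi server; note that the heketi list also includes heketidbstorage, which does not back a PV):

    # volumes heketi knows about vs. PVs in OpenShift; a large gap
    # indicates volumes that were created but never bound or cleaned up
    heketi-cli volume list | wc -l
    oc get pv --no-headers | wc -l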

Comment 2 Humble Chirammal 2016-11-07 14:00:55 UTC
The fix for this issue is available in the latest OCP build.

https://github.com/openshift/origin/pull/11722

The OCP build mentioned in the bugzilla looks to be a bit old (built on 02-Nov). Please retest with the latest build and share the result.

Comment 6 krishnaram Karthick 2016-11-08 05:48:13 UTC
The behavior is still seen with the latest available downstream build for openshift.

rpm -qa | grep 'openshift'
openshift-ansible-callback-plugins-3.4.17-1.git.0.4698b0c.el7.noarch
atomic-openshift-master-3.4.0.23-1.git.0.24b1a58.el7.x86_64
openshift-ansible-docs-3.4.17-1.git.0.4698b0c.el7.noarch
openshift-ansible-filter-plugins-3.4.17-1.git.0.4698b0c.el7.noarch
atomic-openshift-clients-3.4.0.23-1.git.0.24b1a58.el7.x86_64
openshift-ansible-roles-3.4.17-1.git.0.4698b0c.el7.noarch
tuned-profiles-atomic-openshift-node-3.4.0.23-1.git.0.24b1a58.el7.x86_64
atomic-openshift-sdn-ovs-3.4.0.23-1.git.0.24b1a58.el7.x86_64
openshift-ansible-3.4.17-1.git.0.4698b0c.el7.noarch
openshift-ansible-lookup-plugins-3.4.17-1.git.0.4698b0c.el7.noarch
atomic-openshift-3.4.0.23-1.git.0.24b1a58.el7.x86_64
openshift-ansible-playbooks-3.4.17-1.git.0.4698b0c.el7.noarch
atomic-openshift-node-3.4.0.23-1.git.0.24b1a58.el7.x86_64
atomic-openshift-utils-3.4.17-1.git.0.4698b0c.el7.noarch

Can you please confirm the downstream openshift build which has the fix? Otherwise, we'll end up spending time repeating tests without making meaningful progress. Cleaning up the test system after hitting such issues also consumes a lot of time.

Comment 7 Humble Chirammal 2016-11-08 06:00:48 UTC
(In reply to krishnaram Karthick from comment #6)
> The behavior is still seen with the latest available downstream build for
> openshift.
> 
> rpm -qa | grep 'openshift'
> openshift-ansible-callback-plugins-3.4.17-1.git.0.4698b0c.el7.noarch
> atomic-openshift-master-3.4.0.23-1.git.0.24b1a58.el7.x86_64
> openshift-ansible-docs-3.4.17-1.git.0.4698b0c.el7.noarch
> openshift-ansible-filter-plugins-3.4.17-1.git.0.4698b0c.el7.noarch
> atomic-openshift-clients-3.4.0.23-1.git.0.24b1a58.el7.x86_64
> openshift-ansible-roles-3.4.17-1.git.0.4698b0c.el7.noarch
> tuned-profiles-atomic-openshift-node-3.4.0.23-1.git.0.24b1a58.el7.x86_64
> atomic-openshift-sdn-ovs-3.4.0.23-1.git.0.24b1a58.el7.x86_64
> openshift-ansible-3.4.17-1.git.0.4698b0c.el7.noarch
> openshift-ansible-lookup-plugins-3.4.17-1.git.0.4698b0c.el7.noarch
> atomic-openshift-3.4.0.23-1.git.0.24b1a58.el7.x86_64
> openshift-ansible-playbooks-3.4.17-1.git.0.4698b0c.el7.noarch
> atomic-openshift-node-3.4.0.23-1.git.0.24b1a58.el7.x86_64
> atomic-openshift-utils-3.4.17-1.git.0.4698b0c.el7.noarch
> 
> Can you please confirm the downstream openshift build which has the fix?
> Else, we'll end up spending time on repeating tests without making
> meaningful progress. cleaning up the test system after hitting such issues
> also consumes a lot of time.

Why does the package list have a mix of versions (3.4.0.23 and 3.4.17-1)? How did you update? If possible, can you please do a fresh installation with the latest packages and check?

Comment 8 Humble Chirammal 2016-11-08 06:05:35 UTC
Karthick, please refer to bug https://bugzilla.redhat.com/show_bug.cgi?id=1388868. The mentioned bug was verified with the versions below.

openshift v3.4.0.22+5c56720
kubernetes v1.4.0+776c994
etcd 3.1.0-rc.0

Comment 9 krishnaram Karthick 2016-11-08 14:27:28 UTC
(In reply to Humble Chirammal from comment #7)
> (In reply to krishnaram Karthick from comment #6)
> > The behavior is still seen with the latest available downstream build for
> > openshift.
> > 
> > rpm -qa | grep 'openshift'
> > openshift-ansible-callback-plugins-3.4.17-1.git.0.4698b0c.el7.noarch
> > atomic-openshift-master-3.4.0.23-1.git.0.24b1a58.el7.x86_64
> > openshift-ansible-docs-3.4.17-1.git.0.4698b0c.el7.noarch
> > openshift-ansible-filter-plugins-3.4.17-1.git.0.4698b0c.el7.noarch
> > atomic-openshift-clients-3.4.0.23-1.git.0.24b1a58.el7.x86_64
> > openshift-ansible-roles-3.4.17-1.git.0.4698b0c.el7.noarch
> > tuned-profiles-atomic-openshift-node-3.4.0.23-1.git.0.24b1a58.el7.x86_64
> > atomic-openshift-sdn-ovs-3.4.0.23-1.git.0.24b1a58.el7.x86_64
> > openshift-ansible-3.4.17-1.git.0.4698b0c.el7.noarch
> > openshift-ansible-lookup-plugins-3.4.17-1.git.0.4698b0c.el7.noarch
> > atomic-openshift-3.4.0.23-1.git.0.24b1a58.el7.x86_64
> > openshift-ansible-playbooks-3.4.17-1.git.0.4698b0c.el7.noarch
> > atomic-openshift-node-3.4.0.23-1.git.0.24b1a58.el7.x86_64
> > atomic-openshift-utils-3.4.17-1.git.0.4698b0c.el7.noarch
> > 
> > Can you please confirm the downstream openshift build which has the fix?
> > Else, we'll end up spending time on repeating tests without making
> > meaningful progress. cleaning up the test system after hitting such issues
> > also consumes a lot of time.
> 
> Why the package list has different versions ( mix of 3.4.0.23 and
> 3.4.0.17-1) of packages? How did you update? If possible can you please do a
> fresh installation with latest package and check ?

The packages have different versions because the 3.4.17 ones are the ansible packages. This is not a mix of packages.

Here is the output of openshift version,

[root@dhcp47-112 ~]# openshift version
openshift v3.4.0.23+24b1a58
kubernetes v1.4.0+776c994
etcd 3.1.0-rc.0

I'd be happy to do a fresh installation if you can point out the exact reason or issue you find (if any) with the setup, based on evidence such as the logs.

sosreports for the latest run are available here --> http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1392377/

Moving the bug to assigned state.

Comment 10 Humble Chirammal 2016-11-08 15:27:42 UTC
(In reply to krishnaram Karthick from comment #9)
> (In reply to Humble Chirammal from comment #7)
> > (In reply to krishnaram Karthick from comment #6)

> 
> The packages are different coz 3.4.17 are ansible packages. This is not a
> mix of packages.
> 
> Here is the output of openshift version,
> 
> [root@dhcp47-112 ~]# openshift version
> openshift v3.4.0.23+24b1a58
> kubernetes v1.4.0+776c994
> etcd 3.1.0-rc.0
> 
> I'd be happy to do a fresh installation if you can point out the exact
> reason or issue you find(if any) with the setup based on any evidence, say
> for example from the logs.
> 
> sosreports for the latest run are available here -->
> http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1392377/
> 
> Moving the bug to assigned state.

Can you address the question I raised in c#7: how did you upgrade? Did you reboot the nodes or perform any extra steps? The reason I am asking is that, as mentioned in c#8, the suspected bug has been verified with OpenShift v3.4.0.22+. I would like to rule out the possibility of some leftover from the previous version.

As an additional note, apart from the fix I mentioned, the only other thing that can cause this behaviour is heketi not responding to the delete or further requests.

Comment 11 krishnaram Karthick 2016-11-10 05:57:46 UTC
(In reply to Humble Chirammal from comment #10)
> (In reply to krishnaram Karthick from comment #9)
> > (In reply to Humble Chirammal from comment #7)
> > > (In reply to krishnaram Karthick from comment #6)
> 
> > 
> > The packages are different coz 3.4.17 are ansible packages. This is not a
> > mix of packages.
> > 
> > Here is the output of openshift version,
> > 
> > [root@dhcp47-112 ~]# openshift version
> > openshift v3.4.0.23+24b1a58
> > kubernetes v1.4.0+776c994
> > etcd 3.1.0-rc.0
> > 
> > I'd be happy to do a fresh installation if you can point out the exact
> > reason or issue you find(if any) with the setup based on any evidence, say
> > for example from the logs.
> > 
> > sosreports for the latest run are available here -->
> > http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1392377/
> > 
> > Moving the bug to assigned state.
> 
> Can you address the question which I raised in c#7 , how did you upgrade ?
> Did you reboot the nodes or performed any extra steps ?  The reason why I am
> asking this is, as mentioned in c#8, the suspected bug has been verified
> with Openshift v3.4.0.22+ version. I would like to roll out the
> possibilities of some left over from the previous version.
> 
> As an additional note, apart from the fix I mentioned, the only reason which
> can cause this behaviour is "Heketi" not responding to the delete or further
> requests.

To put aside any doubt about setup/upgrade issues, I've reproduced this issue on a fresh openshift cluster.

[root@dhcp46-146 ~]# openshift version
openshift v3.4.0.23+24b1a58
kubernetes v1.4.0+776c994
etcd 3.1.0-rc.0
[root@dhcp46-146 ~]# rpm -qa | grep 'openshift'
atomic-openshift-sdn-ovs-3.4.0.23-1.git.0.24b1a58.el7.x86_64
openshift-ansible-filter-plugins-3.4.17-1.git.0.4698b0c.el7.noarch
atomic-openshift-utils-3.4.17-1.git.0.4698b0c.el7.noarch
atomic-openshift-clients-3.4.0.23-1.git.0.24b1a58.el7.x86_64
atomic-openshift-master-3.4.0.23-1.git.0.24b1a58.el7.x86_64
tuned-profiles-atomic-openshift-node-3.4.0.23-1.git.0.24b1a58.el7.x86_64
openshift-ansible-docs-3.4.17-1.git.0.4698b0c.el7.noarch
openshift-ansible-callback-plugins-3.4.17-1.git.0.4698b0c.el7.noarch
openshift-ansible-lookup-plugins-3.4.17-1.git.0.4698b0c.el7.noarch
openshift-ansible-playbooks-3.4.17-1.git.0.4698b0c.el7.noarch
atomic-openshift-3.4.0.23-1.git.0.24b1a58.el7.x86_64
atomic-openshift-node-3.4.0.23-1.git.0.24b1a58.el7.x86_64
openshift-ansible-3.4.17-1.git.0.4698b0c.el7.noarch
openshift-ansible-roles-3.4.17-1.git.0.4698b0c.el7.noarch


Before running the test:

[root@dhcp46-146 ~]# oc get pvc
NAME      STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
claim1    Bound     pvc-38ddf4b7-a6a3-11e6-a52d-005056b380ec   5Gi        RWO           1h
claim2    Bound     pvc-6b4321c8-a6a3-11e6-a52d-005056b380ec   5Gi        ROX           1h
claim3    Bound     pvc-617f3f4d-a6a5-11e6-a52d-005056b380ec   5Gi        RWX           1h

[root@dhcp46-146 ~]# heketi-cli volume list
Id:5fcf106341f13b5933e30e0fd7a80a1d    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_5fcf106341f13b5933e30e0fd7a80a1d
Id:6524779fda62e0cba091136bb5e5fc99    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:heketidbstorage
Id:eab4e69a1472a808f0858c45ec5643d5    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_eab4e69a1472a808f0858c45ec5643d5
Id:f10ac1c8e702be052fa13484ca7e9648    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_f10ac1c8e702be052fa13484ca7e9648

Test:
1) Dynamic provisioning requests were made with the following command:
for i in {4..49}; do oc create -f claim$i; sleep 3; done

After running the test:

[root@dhcp46-146 ~]# oc get pvc
NAME      STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
claim1    Bound     pvc-38ddf4b7-a6a3-11e6-a52d-005056b380ec   5Gi        RWO           16h
claim2    Bound     pvc-6b4321c8-a6a3-11e6-a52d-005056b380ec   5Gi        ROX           16h
claim3    Bound     pvc-617f3f4d-a6a5-11e6-a52d-005056b380ec   5Gi        RWX           15h
claim36   Pending                                                                       46m
claim37   Pending                                                                       46m
claim38   Pending                                                                       46m
claim39   Pending                                                                       45m
claim4    Pending                                                                       47m
claim40   Pending                                                                       45m
claim41   Pending                                                                       45m
claim42   Pending                                                                       45m
claim43   Pending                                                                       45m
claim44   Pending                                                                       45m
claim45   Bound     pvc-04517871-a725-11e6-a52d-005056b380ec   5Gi        RWO           45m
claim46   Pending                                                                       45m
claim47   Pending                                                                       45m
claim48   Pending                                                                       45m
claim49   Pending                                                                       45m
claim5    Pending                                                                       47m
claim50   Bound     pvc-a21b3f91-a724-11e6-a52d-005056b380ec   5Gi        RWO           48m
claim6    Pending                                                                       47m
claim7    Pending                                                                       47m
claim8    Pending                                                                       47m

# heketi-cli volume list | wc -l
113

Logs are available here --> http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1392377/build-v3.4.0.23/

Comment 12 krishnaram Karthick 2016-11-10 06:17:01 UTC
This issue exhausted the available free storage in the trusted storage pool.

oc describe pvc/claim36
Name:           claim36
Namespace:      storage-project
StorageClass:   slow
Status:         Pending
Volume:
Labels:         <none>
Capacity:
Access Modes:
Events:
  FirstSeen     LastSeen        Count   From                            SubobjectPath   Type            Reason      Message
  ---------     --------        -----   ----                            -------------   --------        ------      -------
  1h            54m             2       {persistentvolume-controller }                  Warning         ProvisioningFailed   Failed to provision volume with StorageClass "slow": glusterfs: create volume err: failed to get hostip Id not found
. 
  44m   37m     19      {persistentvolume-controller }          Warning ProvisioningFailed      Failed to provision volume with StorageClass "slow": glusterfs: create volume err: error creating volume .
  54m   9s      168     {persistentvolume-controller }          Warning ProvisioningFailed      Failed to provision volume with StorageClass "slow": glusterfs: create volume err: error creating volume No space

Comment 18 krishnaram Karthick 2016-11-11 05:48:41 UTC
Created attachment 1219619 [details]
heketi_logs_attached

Comment 19 Humble Chirammal 2016-11-11 06:34:24 UTC
(In reply to krishnaram Karthick from comment #17)

> > Also I have few suggestions here:
> > 
> > *) In the below loop, can you please :
> > 
> > - reduce the count of volumes and go sequentially (10, 20, 40...etc)?
> > - also increase the sleep to "180" seconds ?
> > 
> > for i in {1..10}; do oc create -f claim$i; sleep 180; done
> > 
> 
> The issue is not seen when the sleep time between two claim requests is
> increased. However, two concurrent claim request do end up in the issue
> reported.
> 

That's good news!
To summarize, if you create volumes in a loop with a longer delay (e.g. 180 seconds), this issue is not observed.

But you mention, 

"
However, two concurrent claim request do end up in the issue
reported.
"

I failed to interpret the new issue. Can you please explain how you are performing this, i.e., the steps to reproduce?

Comment 21 Humble Chirammal 2016-11-11 10:14:54 UTC
(In reply to krishnaram Karthick from comment #20)
> (In reply to Humble Chirammal from comment #19)

> > I failed to interpret the new issue. Can you please explain how are you
> > performing this, ie the Steps to reproduce ?
> 
> This is not a new issue, this is same as I mentioned in c#11. By concurrent
> request I mean, trying to create a pvc when an existing pvc request has not
> completed yet.

In short, with a sleep of >= 50 seconds in the loop below [1], this scale test (creating many volumes) works without any issue.

[1] for i in {1..10}; do oc create -f claim$i; sleep 60; done

But in the absence of a sleep, or with a very small sleep value, not all of the requests are satisfied.

I will look into the logs and try to find the RCA, but it would be really appreciated if you could provide the timestamp of the issue occurrence.

Comment 22 Michael Adam 2016-11-11 11:16:34 UTC
It is not entirely clear to me what the actual bug is:

The "Pending" pvc create requests are completely valid.
If many concurrent requests are placed, some will just take
much longer to be completed.

So does the output of "oc get pvc" change when you run it repeatedly after having placed the many pvc create requests?
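
For example, the status counts could be tracked over time with something like this (a rough sketch):

    # print a breakdown of PVC statuses every 60 seconds
    while true; do date; oc get pvc --no-headers | awk '{print $2}' | sort | uniq -c; sleep 60; done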

Please also attach the full output of "heketi-cli volume list", not just its line count.


Thanks - Michael

Comment 23 krishnaram Karthick 2016-11-15 04:19:11 UTC
(In reply to Humble Chirammal from comment #21)
> (In reply to krishnaram Karthick from comment #20)
> > (In reply to Humble Chirammal from comment #19)
> 
> > > I failed to interpret the new issue. Can you please explain how are you
> > > performing this, ie the Steps to reproduce ?
> > 
> > This is not a new issue, this is same as I mentioned in c#11. By concurrent
> > request I mean, trying to create a pvc when an existing pvc request has not
> > completed yet.
> 
> In short, with a sleep >=50 seconds in below loop [1], this scale test (
> creating many volumes) work without any issue.
> 
> [1] for i in {1..10}; do oc create -f claim$i; sleep 60; done
> 
> But, in absense of sleep or very negligible sleep value, all the requests
> are not satisfied.
> 
> I will look into the logs and try to find out the RCA. But really
> appreciated if you can provide the timestamp of issue occurrence.

Humble, I've recreated the issue once again, noting the timestamp. Attaching the heketi logs. I've also pasted the entire output of 'oc get pvc' and 'heketi-cli volume list' below.

But I believe debugging would be easier if you can reproduce the issue yourself or have a look at the live system. I'll leave that to you anyway.

15/11/16 09:02:59 for i in {1..97}; do oc create -f claim$i; sleep 3; done

[root@dhcp46-146 ~]# oc get pvc
NAME      STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
claim1    Pending                                      37m
claim10   Pending                                      37m
claim11   Pending                                      37m
claim12   Pending                                      36m
claim13   Pending                                      36m
claim14   Pending                                      36m
claim15   Pending                                      36m
claim16   Pending                                      36m
claim17   Pending                                      36m
claim18   Pending                                      36m
claim19   Pending                                      36m
claim2    Pending                                      37m
claim20   Pending                                      36m
claim21   Pending                                      36m
claim22   Pending                                      36m
claim23   Pending                                      36m
claim24   Pending                                      36m
claim25   Pending                                      36m
claim26   Pending                                      36m
claim27   Pending                                      36m
claim28   Pending                                      36m
claim29   Pending                                      36m
claim3    Pending                                      37m
claim30   Pending                                      35m
claim31   Pending                                      35m
claim32   Pending                                      35m
claim33   Pending                                      35m
claim34   Pending                                      35m
claim35   Pending                                      35m
claim36   Pending                                      35m
claim37   Pending                                      35m
claim38   Pending                                      35m
claim39   Pending                                      35m
claim4    Pending                                      37m
claim40   Pending                                      35m
claim41   Pending                                      35m
claim42   Pending                                      35m
claim43   Pending                                      35m
claim44   Pending                                      35m
claim45   Pending                                      35m
claim46   Pending                                      35m
claim47   Pending                                      35m
claim48   Pending                                      34m
claim49   Pending                                      34m
claim5    Pending                                      37m
claim50   Pending                                      34m
claim51   Pending                                      34m
claim53   Pending                                      34m
claim54   Pending                                      34m
claim55   Pending                                      34m
claim56   Pending                                      34m
claim57   Pending                                      34m
claim58   Pending                                      34m
claim59   Pending                                      34m
claim6    Pending                                      37m
claim60   Pending                                      34m
claim61   Pending                                      34m
claim62   Pending                                      34m
claim63   Pending                                      34m
claim64   Pending                                      34m
claim65   Pending                                      33m
claim66   Pending                                      33m
claim67   Pending                                      33m
claim68   Pending                                      33m
claim69   Pending                                      33m
claim7    Pending                                      37m
claim70   Pending                                      33m
claim71   Pending                                      33m
claim72   Pending                                      33m
claim73   Pending                                      33m
claim74   Pending                                      33m
claim75   Pending                                      33m
claim76   Pending                                      33m
claim77   Pending                                      33m
claim78   Pending                                      33m
claim79   Pending                                      33m
claim8    Pending                                      37m
claim80   Pending                                      33m
claim81   Pending                                      33m
claim82   Pending                                      33m
claim83   Pending                                      32m
claim84   Pending                                      32m
claim85   Pending                                      32m
claim86   Pending                                      32m
claim87   Pending                                      32m
claim88   Pending                                      32m
claim89   Pending                                      32m
claim9    Pending                                      37m
claim90   Pending                                      32m
claim91   Pending                                      32m
claim92   Pending                                      32m
claim93   Pending                                      32m
claim94   Pending                                      32m
claim95   Pending                                      32m
claim96   Pending                                      32m
claim97   Pending                                      32m

[root@dhcp46-146 ~]# heketi-cli volume list
Id:004650a0d835ce5625d5b8036e1ec467    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_004650a0d835ce5625d5b8036e1ec467
Id:038a01ca50b0377035a87e1c066e8308    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_038a01ca50b0377035a87e1c066e8308
Id:03edeef13e4a00a4c531ad8778b700c9    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_03edeef13e4a00a4c531ad8778b700c9
Id:043e213ed3e6141d245184d1a6d465a4    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_043e213ed3e6141d245184d1a6d465a4
Id:04dea1f8b8e50b25525d02c0cb21c636    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_04dea1f8b8e50b25525d02c0cb21c636
Id:0861f6802d409abd4a9d720490c4b589    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_0861f6802d409abd4a9d720490c4b589
Id:0a54acefbe0596804f2e09fc1b0e5522    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_0a54acefbe0596804f2e09fc1b0e5522
Id:0c4589fb1e3d456b8941c46d59ea7297    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_0c4589fb1e3d456b8941c46d59ea7297
Id:0d79ae15005dd182e9c2f8cf094c3036    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_0d79ae15005dd182e9c2f8cf094c3036
Id:0e12a0bdde4fcda7fa495af4d0b29d8a    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_0e12a0bdde4fcda7fa495af4d0b29d8a
Id:0ef5e5a927436e718511185ee27f9828    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_0ef5e5a927436e718511185ee27f9828
Id:121bd70147556511604697729ca8d5ed    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_121bd70147556511604697729ca8d5ed
Id:15320764712f808a3bd912ff82bd9447    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_15320764712f808a3bd912ff82bd9447
Id:1ac7a3ef21b8a86b8b23ef00c52a99ba    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_1ac7a3ef21b8a86b8b23ef00c52a99ba
Id:1b225791493e0fc2161d2e5ac26df6d4    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_1b225791493e0fc2161d2e5ac26df6d4
Id:1e5967f1a93b630e2691d2b2331430f4    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_1e5967f1a93b630e2691d2b2331430f4
Id:2689086bc361b2a1b7f1eb52400f10e8    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_2689086bc361b2a1b7f1eb52400f10e8
Id:2977c9163d8a59ca1b6d4e87c00bcc3c    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_2977c9163d8a59ca1b6d4e87c00bcc3c
Id:29952e36a3b5553832f6cbac6dbfcdfb    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_29952e36a3b5553832f6cbac6dbfcdfb
Id:2a9cfd930c32db830aab94ba2ae5e22d    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_2a9cfd930c32db830aab94ba2ae5e22d
Id:2ae470f3d8b5839ca1ef7382dd82cd2a    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_2ae470f3d8b5839ca1ef7382dd82cd2a
Id:2d7df563fe4f5fb16b506dbcf2f195a5    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_2d7df563fe4f5fb16b506dbcf2f195a5
Id:2fa18166b9d089bafcd094e3332584ac    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_2fa18166b9d089bafcd094e3332584ac
Id:309326c0e93dc6e7f511254744c12468    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_309326c0e93dc6e7f511254744c12468
Id:353f0c02b5e5609cac586dabd1077673    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_353f0c02b5e5609cac586dabd1077673
Id:36e8df01667b3fbc7156cf72788422b0    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_36e8df01667b3fbc7156cf72788422b0
Id:3a3c10a5bfbdcf8604a4d1ee81488e05    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_3a3c10a5bfbdcf8604a4d1ee81488e05
Id:3c726832fe30e0bcf2ef1194dcc1e1a3    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_3c726832fe30e0bcf2ef1194dcc1e1a3
Id:3ce516a026134f8af95a762b07baaff2    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_3ce516a026134f8af95a762b07baaff2
Id:3e6786416caf259c952d240f3a74a80c    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_3e6786416caf259c952d240f3a74a80c
Id:3e736e40ddb1ab930089e34d9b4b706f    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_3e736e40ddb1ab930089e34d9b4b706f
Id:3ed843896ec6b6145201631814dab506    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_3ed843896ec6b6145201631814dab506
Id:401f3c63a1ca5786a1951fbaf5611dcd    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_401f3c63a1ca5786a1951fbaf5611dcd
Id:41f6fed9cbadc68e783e71ea9203b370    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_41f6fed9cbadc68e783e71ea9203b370
Id:435c589e584d1d42bb1bcd1ebb07af57    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_435c589e584d1d42bb1bcd1ebb07af57
Id:43ef5f74a91e7da76cb6348593a10aca    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_43ef5f74a91e7da76cb6348593a10aca
Id:446e93915ecbf6a061f302c8e146a586    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_446e93915ecbf6a061f302c8e146a586
Id:45c9729216198a0450b1a5c171676c5d    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_45c9729216198a0450b1a5c171676c5d
Id:4673554db2e8b7ee977a9159a3089bb2    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_4673554db2e8b7ee977a9159a3089bb2
Id:480697cee9070c98adbe8838b178ef64    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_480697cee9070c98adbe8838b178ef64
Id:49c681458bae505f2ebedb849bf3c3e4    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_49c681458bae505f2ebedb849bf3c3e4
Id:4bd0263dda279dafe5582801d7339a46    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_4bd0263dda279dafe5582801d7339a46
Id:4c4b6021094dd6b5b287785120f4bfdf    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_4c4b6021094dd6b5b287785120f4bfdf
Id:4d2cc57f51e8ef686187112a74c7a6e3    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_4d2cc57f51e8ef686187112a74c7a6e3
Id:4de9d982d26081ae19826d63937e3ea0    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_4de9d982d26081ae19826d63937e3ea0
Id:4e53ca0ada73f7a88bf581e4bc6b11f7    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_4e53ca0ada73f7a88bf581e4bc6b11f7
Id:4e66d2e73b5ac1df0af6f3fa276d2c6f    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_4e66d2e73b5ac1df0af6f3fa276d2c6f
Id:4f28fdd83e47233b814ee1b51f5dcba2    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_4f28fdd83e47233b814ee1b51f5dcba2
Id:4fb97fb70a02970f234f20adf6b4f39d    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_4fb97fb70a02970f234f20adf6b4f39d
Id:50245c6edf358242b9441b8f1d6aaaf7    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_50245c6edf358242b9441b8f1d6aaaf7
Id:51fd8bc1847c1f22808de5636c728373    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_51fd8bc1847c1f22808de5636c728373
Id:533614ae196b76df4c6aedb9526ea218    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_533614ae196b76df4c6aedb9526ea218
Id:535b6261ed35966f188a5ab4fec1f9d7    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_535b6261ed35966f188a5ab4fec1f9d7
Id:544c98540d6aef7b06087f40afdaa8ed    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_544c98540d6aef7b06087f40afdaa8ed
Id:549f649fa1a432b37c5fd250417a5fd7    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_549f649fa1a432b37c5fd250417a5fd7
Id:56bc7606cdc381ae00ae6db798372820    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_56bc7606cdc381ae00ae6db798372820
Id:58d47dccae64ed17deb058ec35b810af    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_58d47dccae64ed17deb058ec35b810af
Id:59684b42d8a154ef5375808f4fa18d60    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_59684b42d8a154ef5375808f4fa18d60
Id:5b714596f3e90753bb39d781adfde1e6    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_5b714596f3e90753bb39d781adfde1e6
Id:5c4522de13445c9e4427a2a4c68bfbc9    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_5c4522de13445c9e4427a2a4c68bfbc9
Id:5fab21c055cbfd0ee1ffa6dd96e1f77d    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_5fab21c055cbfd0ee1ffa6dd96e1f77d
Id:606e186973463b0d7af368ace3365a39    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_606e186973463b0d7af368ace3365a39
Id:60ffa4580d10cd13baff666e3adbc288    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_60ffa4580d10cd13baff666e3adbc288
Id:6111fbc76d4c4809d8769fb7e72a302e    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_6111fbc76d4c4809d8769fb7e72a302e
Id:616186dd8b1bf8c57ef33af490c2deb1    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_616186dd8b1bf8c57ef33af490c2deb1
Id:63fd7f27bb926c2ca67f335814692cd2    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_63fd7f27bb926c2ca67f335814692cd2
Id:64ad33f560796ff905b87076178f4bf8    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_64ad33f560796ff905b87076178f4bf8
Id:6524779fda62e0cba091136bb5e5fc99    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:heketidbstorage
Id:6676f414b1209be154652712fd00813f    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_6676f414b1209be154652712fd00813f
Id:67fbfbc80133f356dcfb71e937db46f4    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_67fbfbc80133f356dcfb71e937db46f4
Id:6828cc3c5f573a905cb87609b39460ad    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_6828cc3c5f573a905cb87609b39460ad
Id:691320db45424f53c9b2f1a8c3fe4f14    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_691320db45424f53c9b2f1a8c3fe4f14
Id:6942d8873ca7e13f6e02bbd251177e9a    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_6942d8873ca7e13f6e02bbd251177e9a
Id:69bab9a9c59ae392998a51c4dcc1eeb9    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_69bab9a9c59ae392998a51c4dcc1eeb9
Id:6ae39691834fc802a9683d899378f817    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_6ae39691834fc802a9683d899378f817
Id:6ef3355c5f20b23f42d0aa632cbf74c3    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_6ef3355c5f20b23f42d0aa632cbf74c3
Id:707f7ba8e765472dd2644fe2a7b6ce3e    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_707f7ba8e765472dd2644fe2a7b6ce3e
Id:7815d8b0491ca6cad3ea1436d42a3064    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_7815d8b0491ca6cad3ea1436d42a3064
Id:7d0ad546381c2c825cdc4f5f7635c67a    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_7d0ad546381c2c825cdc4f5f7635c67a
Id:7dbee9bbfa0288296acb07946ca0b3f7    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_7dbee9bbfa0288296acb07946ca0b3f7
Id:7dd5d3baa366d72c0b6dae145ff7e9eb    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_7dd5d3baa366d72c0b6dae145ff7e9eb
Id:7e67c85cdf8312779ae9165b52756d6d    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_7e67c85cdf8312779ae9165b52756d6d
Id:7ec9afb72be4449da652c22c07f87559    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_7ec9afb72be4449da652c22c07f87559
Id:7ef94226b70169c4586942cc78bb7d37    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_7ef94226b70169c4586942cc78bb7d37
Id:7f8b8d93b2cac82b49d36153cce3092b    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_7f8b8d93b2cac82b49d36153cce3092b
Id:83091098e18ea5be6d8574365a2e8d01    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_83091098e18ea5be6d8574365a2e8d01
Id:8703441237e70220e8135c4b68f3d63e    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_8703441237e70220e8135c4b68f3d63e
Id:883c36b4fccc51dc9a9e1968c4c2f23f    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_883c36b4fccc51dc9a9e1968c4c2f23f
Id:896e7a025e181895c3fb5581043f253b    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_896e7a025e181895c3fb5581043f253b
Id:8aff03e1bb98febbf25652661690db1d    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_8aff03e1bb98febbf25652661690db1d
Id:8b8549cf08ec5a1b6cdd40a9d6fcdc5d    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_8b8549cf08ec5a1b6cdd40a9d6fcdc5d
Id:8d08a2d470e66f347c007a4dfdb03a76    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_8d08a2d470e66f347c007a4dfdb03a76
Id:8e8d4d9bbb0c623876471e6ef84c3d0d    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_8e8d4d9bbb0c623876471e6ef84c3d0d
Id:8f3238e9f6010819633c4f1cb63b656a    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_8f3238e9f6010819633c4f1cb63b656a
Id:90b864ab74d994413c10244c6aa01a8d    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_90b864ab74d994413c10244c6aa01a8d
Id:90cffb116e8d107f9143a54a4c3797e9    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_90cffb116e8d107f9143a54a4c3797e9
Id:9208511225057b9290eeb2b797ecba3b    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_9208511225057b9290eeb2b797ecba3b
Id:94ed8ae41075a11e7506990d8680b1c9    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_94ed8ae41075a11e7506990d8680b1c9
Id:98a5e605531dbcdf6a7b7ed0b66d61dd    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_98a5e605531dbcdf6a7b7ed0b66d61dd
Id:99e2defaaa17aee91674bc48e2c1d431    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_99e2defaaa17aee91674bc48e2c1d431
Id:9a1ffca29e8572c73a7677b2b9ccaf0e    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_9a1ffca29e8572c73a7677b2b9ccaf0e
Id:9b0e7dd684bb4a812eca85b11c67ddef    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_9b0e7dd684bb4a812eca85b11c67ddef
Id:9b80905349b4da636387bab337a72418    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_9b80905349b4da636387bab337a72418
Id:9bad00e13a1ed507df9b8e6474af9d6b    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_9bad00e13a1ed507df9b8e6474af9d6b
Id:9d9fa7ce45996f006fea24bcf9a9d940    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_9d9fa7ce45996f006fea24bcf9a9d940
Id:a130cc61af76a00dad821e3873b69f6f    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_a130cc61af76a00dad821e3873b69f6f
Id:a43c56755e9ea8de26d015731b2017bf    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_a43c56755e9ea8de26d015731b2017bf
Id:a4a8d289046a242549a6d908afebeab3    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_a4a8d289046a242549a6d908afebeab3
Id:a553cb7f5bf64d9844939e8a67b1f4d3    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_a553cb7f5bf64d9844939e8a67b1f4d3
Id:a61b0c1a534c4afca67e904c1330b09f    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_a61b0c1a534c4afca67e904c1330b09f
Id:a904d5d8d499df02737b0a9e0537322e    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_a904d5d8d499df02737b0a9e0537322e
Id:a93c8a6f1e628a0d0ece5b9a028bf95c    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_a93c8a6f1e628a0d0ece5b9a028bf95c
Id:a979653c3abd833c2a34d129be8343ab    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_a979653c3abd833c2a34d129be8343ab
Id:a9f22b420e9adec09433990fdb43e1e4    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_a9f22b420e9adec09433990fdb43e1e4
Id:ab1edca022262b10df2c951dc4080f0f    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_ab1edca022262b10df2c951dc4080f0f
Id:ab7f40cab8126c63364569a19630b80a    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_ab7f40cab8126c63364569a19630b80a
Id:ac454955ee2e2dc0d7e32cca851abb93    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_ac454955ee2e2dc0d7e32cca851abb93
Id:aca474cc019a4a9bf606392cb52ea70e    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_aca474cc019a4a9bf606392cb52ea70e
Id:acb928ddc5c032235d7ca025c1a2d27f    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_acb928ddc5c032235d7ca025c1a2d27f
Id:aec695b18c5ba27fda5042e14be7973d    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_aec695b18c5ba27fda5042e14be7973d
Id:b18a11aecaf669823210b6bb2a13ac3f    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_b18a11aecaf669823210b6bb2a13ac3f
Id:b2b5fee204876ad451e3639107d58672    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_b2b5fee204876ad451e3639107d58672
Id:b31dd51dca55fa9592039e3f898e3ecf    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_b31dd51dca55fa9592039e3f898e3ecf
Id:b3efdb37f50067c771a2299731e609bd    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_b3efdb37f50067c771a2299731e609bd
Id:b4a0b1239d00dc58a6aa18a696ea08f7    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_b4a0b1239d00dc58a6aa18a696ea08f7
Id:b4d9a2ed0f926a2ea320ca97b39de320    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_b4d9a2ed0f926a2ea320ca97b39de320
Id:b6e2f5aeb6beaceef0f24e213082434b    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_b6e2f5aeb6beaceef0f24e213082434b
Id:b89db8157053d0a19ee735c49677dc62    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_b89db8157053d0a19ee735c49677dc62
Id:b8a9e4908a6f436bc1842fa3d8faefba    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_b8a9e4908a6f436bc1842fa3d8faefba
Id:b98024f1445b1af8a7fbdbeccca02872    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_b98024f1445b1af8a7fbdbeccca02872
Id:bfcc7504891976a93f69469a6ff1faf7    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_bfcc7504891976a93f69469a6ff1faf7
Id:c3f1747c76506f396d88eb23308ec117    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_c3f1747c76506f396d88eb23308ec117
Id:c44b63a0ed53dd0d82ede25df7b595c4    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_c44b63a0ed53dd0d82ede25df7b595c4
Id:c5afca19bcee79c0a8483c4e87e19812    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_c5afca19bcee79c0a8483c4e87e19812
Id:c6a3a36b18000b553acfbb5612c31ba7    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_c6a3a36b18000b553acfbb5612c31ba7
Id:c6cf89e500cee6e9e6f153d95169a060    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_c6cf89e500cee6e9e6f153d95169a060
Id:c6ddc24ed3a0a125b137a7757a36939d    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_c6ddc24ed3a0a125b137a7757a36939d
Id:c86a23f2a0f3d98df8b3d46b1c1b3065    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_c86a23f2a0f3d98df8b3d46b1c1b3065
Id:cab7c74b160c515da7f46372d5016bdb    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_cab7c74b160c515da7f46372d5016bdb
Id:cabab5aa0f6f67501430ea36b8519f29    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_cabab5aa0f6f67501430ea36b8519f29
Id:cc320bf02a01fcd5d1090e019cdfe3ee    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_cc320bf02a01fcd5d1090e019cdfe3ee
Id:d1336469a5657ef50c9f7d38920136e0    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_d1336469a5657ef50c9f7d38920136e0
Id:d1e81d80b1bb7fd9b3a9cc4b58e8c6b0    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_d1e81d80b1bb7fd9b3a9cc4b58e8c6b0
Id:d50ab95069e60f66594196738b7802f9    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_d50ab95069e60f66594196738b7802f9
Id:d552d049a65a21bac7948d22a499623f    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_d552d049a65a21bac7948d22a499623f
Id:d71c861083a3dccf3590e58d8f57c367    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_d71c861083a3dccf3590e58d8f57c367
Id:d78e58b129abe081cef9083047ac5a87    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_d78e58b129abe081cef9083047ac5a87
Id:d8c61e41fc07cd9aaa202ae29600e297    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_d8c61e41fc07cd9aaa202ae29600e297
Id:d91f345189405b79e1c5c73de6b0ba37    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_d91f345189405b79e1c5c73de6b0ba37
Id:d9c17f9213d29dbc793b45ba0b23dc05    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_d9c17f9213d29dbc793b45ba0b23dc05
Id:dcc2b931a92a955f1a10c7754d219429    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_dcc2b931a92a955f1a10c7754d219429
Id:dd87f688d044d18abf856738bb9ef919    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_dd87f688d044d18abf856738bb9ef919
Id:dd884740a59d79a721357654779460d5    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_dd884740a59d79a721357654779460d5
Id:e3a2164e7b79e2ea88bcd8fe6f739b96    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_e3a2164e7b79e2ea88bcd8fe6f739b96
Id:e66133081bc497cf6efb4ccc4e2f1404    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_e66133081bc497cf6efb4ccc4e2f1404
Id:e7270e8f46e6f561707bad7ea6ceee67    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_e7270e8f46e6f561707bad7ea6ceee67
Id:e8cbf35cd534406a97712e4186cbec6a    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_e8cbf35cd534406a97712e4186cbec6a
Id:e9668a55644cb52a41c402765c2fd30e    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_e9668a55644cb52a41c402765c2fd30e
Id:e97f62e7ee46ba50a489f3ce9f1f4795    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_e97f62e7ee46ba50a489f3ce9f1f4795
Id:eaaf5aaf857abf75f848d5a7f8b4dad2    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_eaaf5aaf857abf75f848d5a7f8b4dad2
Id:ebf4bfab95757f984b1aed27e65fd9a8    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_ebf4bfab95757f984b1aed27e65fd9a8
Id:ed0653dfd55b418ea679bcf7e96e1317    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_ed0653dfd55b418ea679bcf7e96e1317
Id:ef5336cd2ad72949e18c1c8644147d18    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_ef5336cd2ad72949e18c1c8644147d18
Id:f0f9cf5a0b0b2219fa80424ff9be7f5f    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_f0f9cf5a0b0b2219fa80424ff9be7f5f
Id:f1fd2894d23074b0e2ff3f0089bce46e    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_f1fd2894d23074b0e2ff3f0089bce46e
Id:f8a58d1d9a54f1c09aa8733a28d2c9cc    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_f8a58d1d9a54f1c09aa8733a28d2c9cc
Id:f8d88cc179de1daef0da62ee871776f9    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_f8d88cc179de1daef0da62ee871776f9
Id:f94c8c518ca9f3de5f99e2fc06ba81b2    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_f94c8c518ca9f3de5f99e2fc06ba81b2
Id:fa8729390e4e66ea39a1dc4113de6bbb    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_fa8729390e4e66ea39a1dc4113de6bbb
Id:fbc72e87d7d1686136314c091fdd32d6    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_fbc72e87d7d1686136314c091fdd32d6
Id:fd09ea50254b4d1a37e907faf47d193b    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_fd09ea50254b4d1a37e907faf47d193b
Id:fd3c2af7b72e2ed2df0de628091ff5f8    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_fd3c2af7b72e2ed2df0de628091ff5f8
Id:fe23163f66d23d8f49585e42b16b2b3a    Cluster:a987b668651cd2221fa9a5ee0fb6ce28    Name:vol_fe23163f66d23d8f49585e42b16b2b3a

Comment 24 krishnaram Karthick 2016-11-15 04:30:19 UTC
Created attachment 1220674 [details]
heketi_logs_comment23

Comment 25 Humble Chirammal 2016-11-15 04:39:31 UTC
(In reply to krishnaram Karthick from comment #23)
> (In reply to Humble Chirammal from comment #21)
> > (In reply to krishnaram Karthick from comment #20)
> > > (In reply to Humble Chirammal from comment #19)
> > 
> > > > I failed to interpret the new issue. Can you please explain how are you
> > > > performing this, ie the Steps to reproduce ?
> > > 
> > > This is not a new issue, this is same as I mentioned in c#11. By concurrent
> > > request I mean, trying to create a pvc when an existing pvc request has not
> > > completed yet.
> > 
> > In short, with a sleep >=50 seconds in below loop [1], this scale test (
> > creating many volumes) work without any issue.
> > 
> > [1] for i in {1..10}; do oc create -f claim$i; sleep 60; done
> > 
> > But, in absense of sleep or very negligible sleep value, all the requests
> > are not satisfied.
> > 
> > I will look into the logs and try to find out the RCA. But really
> > appreciated if you can provide the timestamp of issue occurrence.
> 
> Humble, I've recreated the issue once again by noting the timestamp.
> Attaching the heketi logs. Also I've pasted the entire output of 'oc get
> pvc' and 'heketi-cli volume list'
> 
> But I believe, debugging would be easier if you can reproduce the issue
> yourself or have a look at the live system. I'll leave that to you anyways.
> 
Thanks Karthick. On Friday I logged into the system; however, heketi was failing, hence the update in the bugzilla.

Anyway, yesterday I went through the possibilities for this scenario and have almost concluded the RCA. It looks like a heketi bug is causing this issue, as I reported here: https://bugzilla.redhat.com/show_bug.cgi?id=1395042

Yes, I am trying to reproduce this issue and will also look at the logs to confirm it.

Comment 26 krishnaram Karthick 2016-11-15 04:52:37 UTC
(In reply to Michael Adam from comment #22)
> It is not entirely clear to me what the actual bug is:
> 

When multiple pvc requests are made in quick succession, most of them do not succeed. In the last try (comment #23), none of the pvc requests succeeded.

In addition, when 100 volume requests are made, 173 volumes are created, using up all of the storage in the storage pool. This issue should be related and is of course undesirable.

> The "Pending" pvc create requests are completely valid.
> If many concurrent requests are placed, some will just take
> much longer to be completed.
> So does the output of  "oc get pvc" change, when you repeat running it
> (repeatedly) after having places the many pvc create requests?
> 

A wait time of 1 hour isn't acceptable.

In comment 23, not even one pvc request had succeeded after 23 minutes, and it is still the same after 1 hour.

NAME      STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
claim1    Pending                                      1h
claim10   Pending                                      1h
claim11   Pending                                      1h
claim12   Pending                                      1h
claim13   Pending                                      1h
claim14   Pending                                      1h
claim15   Pending                                      1h
claim16   Pending                                      1h
claim17   Pending                                      1h
claim18   Pending                                      1h
claim19   Pending                                      1h
claim2    Pending                                      1h
claim20   Pending                                      1h
claim21   Pending                                      1h
claim22   Pending                                      1h
claim23   Pending                                      1h
claim24   Pending                                      1h
claim25   Pending                                      1h
claim26   Pending                                      1h
claim27   Pending                                      1h
claim28   Pending                                      1h
claim29   Pending                                      1h
claim3    Pending                                      1h
claim30   Pending                                      1h
claim31   Pending                                      1h
claim32   Pending                                      1h
claim33   Pending                                      1h
claim34   Pending                                      59m
claim35   Pending                                      59m
claim36   Pending                                      59m
claim37   Pending                                      59m
claim38   Pending                                      59m
claim39   Pending                                      59m
claim4    Pending                                      1h
claim40   Pending                                      59m
claim41   Pending                                      59m
claim42   Pending                                      59m
claim43   Pending                                      59m
claim44   Pending                                      59m
claim45   Pending                                      59m
claim46   Pending                                      59m
claim47   Pending                                      59m
claim48   Pending                                      59m
claim49   Pending                                      59m
claim5    Pending                                      1h
claim50   Pending                                      59m
claim51   Pending                                      59m
claim53   Pending                                      58m
claim54   Pending                                      58m
claim55   Pending                                      58m
claim56   Pending                                      58m
claim57   Pending                                      58m
claim58   Pending                                      58m
claim59   Pending                                      58m
claim6    Pending                                      1h
claim60   Pending                                      58m
claim61   Pending                                      58m
claim62   Pending                                      58m
claim63   Pending                                      58m
claim64   Pending                                      58m
claim65   Pending                                      58m
claim66   Pending                                      58m
claim67   Pending                                      58m
claim68   Pending                                      58m
claim69   Pending                                      58m
claim7    Pending                                      1h
claim70   Pending                                      57m
claim71   Pending                                      57m
claim72   Pending                                      57m
claim73   Pending                                      57m
claim74   Pending                                      57m
claim75   Pending                                      57m
claim76   Pending                                      57m
claim77   Pending                                      57m
claim78   Pending                                      57m
claim79   Pending                                      57m
claim8    Pending                                      1h
claim80   Pending                                      57m
claim81   Pending                                      57m
claim82   Pending                                      57m
claim83   Pending                                      57m
claim84   Pending                                      57m
claim85   Pending                                      57m
claim86   Pending                                      57m
claim87   Pending                                      57m
claim88   Pending                                      56m
claim89   Pending                                      56m
claim9    Pending                                      1h
claim90   Pending                                      56m
claim91   Pending                                      56m
claim92   Pending                                      56m
claim93   Pending                                      56m
claim94   Pending                                      56m
claim95   Pending                                      56m
claim96   Pending                                      56m
claim97   Pending                                      56m

> Please also attach full output of "heketi-cli volume list", not just line
> count of its output.
> 
Done. Please refer to comment 23.
> 
> Thanks - Michael

Comment 27 Humble Chirammal 2016-11-15 07:56:53 UTC
Karthick, as mentioned in https://bugzilla.redhat.com/show_bug.cgi?id=1392377#c25, I can confirm that this issue is caused by https://bugzilla.redhat.com/show_bug.cgi?id=1395042.

Here is the analysis:

*) The provisioner makes a request to heketi to create a volume, and heketi creates it.

*) Then the provisioner tries to find the cluster and node info for the created volume.

*) Heketi returns an <ID not found> error, as in https://bugzilla.redhat.com/show_bug.cgi?id=1395042.

*) After receiving this error, the provisioner creates another volume.

This issue **won't** pop up if you put a delay between volume creations, because then the cluster/node info lookup works without issues. That's why increasing the sleep in the loop does not show this error.

Additional info:
The provisioner does not delete the problematic volume because, in this situation/code path, it is not talking to a reliable server: for a volume it has just created, the server is unable to return the volume's cluster and nodes.
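
The sequence above can be approximated from the command line (a rough sketch for illustration only; the provisioner actually talks to the heketi REST API rather than the CLI, and the <...> placeholders would have to be filled in from the previous command's output):

# Step 1: ask heketi for a new volume; heketi creates it and returns its ID
heketi-cli volume create --size=12

# Step 2: look up cluster and node info for the volume just created
heketi-cli cluster info <cluster-id-returned-in-step-1>
heketi-cli node info <node-id-from-the-cluster-info>

# In the failing case, step 2 returns <ID not found> (bug 1395042), and the
# provisioner reacts by creating yet another volume instead of deleting the
# one it already has.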

Comment 28 Anoop 2016-11-15 12:57:47 UTC
We are seeing this issue on multiple setups now. We need to handle concurrent volume provisioning requests, or else clearly document that a certain delay is needed between volume provisioning requests.

Comment 29 Humble Chirammal 2016-11-15 13:03:31 UTC
(In reply to Anoop from comment #28)
> We are seeing this issue on multiple setups now. We need to handle
> concurrent volume provisioning requests, or else clearly document that a
> certain delay is needed between volume provisioning requests.

Anoop, the root cause is this bug (https://bugzilla.redhat.com/show_bug.cgi?id=1346621), and it is getting full attention so that it can be fixed asap.

Comment 32 Humble Chirammal 2016-11-18 10:43:14 UTC
Karthick, JFYI, I tried to reproduce this issue with the fix mentioned here (https://github.com/heketi/heketi/pull/579), and it was not reproducible. Anyway, I will wait for your verification and proceed accordingly.

Comment 34 krishnaram Karthick 2016-12-06 04:48:53 UTC
Verified the fix in the following build.

# heketi-cli -v
heketi-cli 3.1.0

# openshift version
openshift v3.4.0.32+d349492
kubernetes v1.4.0+776c994
etcd 3.1.0-rc.0

Steps followed to verify:
1. Create 50 PV claim requests with a gap of 3 seconds between each claim.
2. Wait for 50 PVCs to be created.
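
For reference, the claims can be submitted with a loop along these lines (a sketch, assuming PVC definition files named claim1 through claim50 exist in the current directory, matching the naming used earlier in this bug):

for i in {1..50}; do
    oc create -f claim$i    # each claim$i is a PersistentVolumeClaim definition file
    sleep 3                 # 3-second gap between requests, as in step 1
done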

All 50 PVs were created successfully.

Comment 35 errata-xmlrpc 2017-01-18 21:56:47 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2017-0148.html

