Description of problem:
block-volume creation fails.

[root@dhcp46-207 ~]# oc describe pvc/mongodb-1
Name:          mongodb-1
Namespace:     storage-project
StorageClass:  glusterblock
Status:        Pending
Volume:
Labels:        <none>
Annotations:   control-plane.alpha.kubernetes.io/leader={"holderIdentity":"b68d4059-9c33-11e7-b661-0a580a81020a","leaseDurationSeconds":15,"acquireTime":"2017-09-18T05:41:46Z","renewTime":"2017-09-18T05:41:49Z","lea...
               volume.beta.kubernetes.io/storage-class=glusterblock
               volume.beta.kubernetes.io/storage-provisioner=gluster.org/glusterblock
Capacity:
Access Modes:
Events:
  FirstSeen  LastSeen  Count  From                                                   SubObjectPath  Type     Reason                Message
  ---------  --------  -----  ----                                                   -------------  ----     ------                -------
  6s         6s        1      gluster.org/glusterblock b68d4059-9c33-11e7-b661-0a580a81020a          Normal   Provisioning          External provisioner is provisioning volume for claim "storage-project/mongodb-1"
  5s         5s        1      gluster.org/glusterblock b68d4059-9c33-11e7-b661-0a580a81020a          Warning  ProvisioningFailed    Failed to provision volume with StorageClass "glusterblock": failed to create volume: [heketi] error creating volume insufficient block hosts online
  7s         4s        3      persistentvolume-controller                                            Normal   ExternalProvisioning  cannot find provisioner "gluster.org/glusterblock", expecting that a volume for the claim is provisioned either manually or via external software

heketi log:
[sshexec] INFO 2017/09/18 05:42:47 Check Glusterd service status in node 10.70.46.193
[kubeexec] ERROR 2017/09/18 05:42:47 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:315: Unable to find a GlusterFS pod on host 10.70.46.193 with a label key glusterfs-node
[sshexec] ERROR 2017/09/18 05:42:47 /src/github.com/heketi/heketi/executors/sshexec/peer.go:76: Unable to find a GlusterFS pod on host 10.70.46.193 with a label key glusterfs-node
[sshexec] INFO 2017/09/18 05:42:47 Check Glusterd service status in node 10.70.46.197
[sshexec] INFO 2017/09/18 05:42:47 Check Glusterd service status in node 10.70.46.203
[kubeexec] ERROR 2017/09/18 05:42:47 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:315: Unable to find a GlusterFS pod on host 10.70.46.197 with a label key glusterfs-node
[sshexec] ERROR 2017/09/18 05:42:47 /src/github.com/heketi/heketi/executors/sshexec/peer.go:76: Unable to find a GlusterFS pod on host 10.70.46.197 with a label key glusterfs-node
[kubeexec] ERROR 2017/09/18 05:42:48 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:315: Unable to find a GlusterFS pod on host 10.70.46.203 with a label key glusterfs-node
[sshexec] ERROR 2017/09/18 05:42:48 /src/github.com/heketi/heketi/executors/sshexec/peer.go:76: Unable to find a GlusterFS pod on host 10.70.46.203 with a label key glusterfs-node
[heketi] ERROR 2017/09/18 05:42:48 /src/github.com/heketi/heketi/apps/glusterfs/block_volume_entry_create.go:85: insufficient block hosts online
[heketi] ERROR 2017/09/18 05:42:48 /src/github.com/heketi/heketi/apps/glusterfs/app_block_volume.go:83: Failed to create block volume: insufficient block hosts online
[asynchttp] INFO 2017/09/18 05:42:48 asynchttp.go:129: Completed job a9755f3b5af394e1d962b4f2ece2428c in 169.62382ms

Please note that block-volume creation works via heketi-cli.

Version-Release number of selected component (if applicable):
cns-deploy-5.0.0-43.el7rhgs.x86_64

How reproducible:
always

Steps to Reproduce:
1. Set up CNS with gluster-block.
2. Try to provision a gluster-block volume via dynamic provisioning.

Actual results:
block-volume creation fails

Expected results:
block-volume creation should succeed
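The dynamic-provisioning step can be exercised with a claim like the following. This is a sketch: the claim name, namespace, and requested size are illustrative, and the pre-1.6-style `volume.beta.kubernetes.io/storage-class` annotation mirrors the one visible in the `oc describe pvc` output above.

```yaml
# Illustrative PVC for the reproduction step; name, namespace, and
# size are assumptions. The annotation selects the glusterblock
# StorageClass, matching the annotations shown in the report.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-1
  namespace: storage-project
  annotations:
    volume.beta.kubernetes.io/storage-class: glusterblock
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```

Creating this claim (e.g. with `oc create -f`) triggers the external gluster.org/glusterblock provisioner and reproduces the failure above.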
This bug is hit only if an ha count is provided in the PV claim's StorageClass. To unblock testing, please delete the ha count line and try again. We will have a build with the fix in a few hours; already working on it.
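For reference, the ha count line mentioned above lives in the StorageClass parameters. The following is a sketch, not the reporter's actual manifest: the heketi REST URL, user, and auth settings are illustrative values, and on older OpenShift 3.x clusters the apiVersion may need to be `storage.k8s.io/v1beta1`.

```yaml
# Sketch of a glusterblock StorageClass; resturl/restuser and the
# auth flags are illustrative. Per the workaround above, deleting
# the hacount line avoids the bug until the fixed build is available.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterblock
provisioner: gluster.org/glusterblock
parameters:
  resturl: "http://172.30.192.168:8080"
  restuser: "admin"
  restauthenabled: "false"
  chapauthenabled: "true"
  hacount: "3"   # <- the ha count line; remove it to work around this bug
```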
(In reply to Raghavendra Talur from comment #3)
> This bug would be hit only if ha count is provided in pv claim. To unblock
> the testing please delete the ha count line and try again. We will have a
> build with fix for it in a few hours. Already working on it.

Thanks Talur. That helped.
As the bug needs to be fixed in heketi, I am moving the component to 'heketi'.
This is fixed in cns-deploy v45.
Verified in build cns-deploy-5.0.0-46.el7rhgs.x86_64. Provisioning works with the ha count updated.

[root@dhcp47-10 ~]# oc describe sc/glusterblock
Name:            glusterblock
IsDefaultClass:  No
Annotations:     <none>
Provisioner:     gluster.org/glusterblock
Parameters:      chapauthenabled=true,hacount=2,opmode=heketi,restauthenabled=false,resturl=http://172.30.192.168:8080,restuser=admin
Events:          <none>

[root@dhcp47-10 ~]# oc get pvc
NAME   STATUS   VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
test   Bound    pvc-1d6c1bf7-9d26-11e7-aeb4-00505684d1d7   5Gi        RWO           glusterblock   17m

Moving the bug to verified.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2017:2879