Bug 1656002 - File pvc is in pending state with warning showing failed to allocate new volume: no space
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: heketi
Version: ocs-3.11
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: ---
Assignee: John Mulligan
QA Contact: Prasanth
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-12-04 12:40 UTC by Sri Vignesh Selvan
Modified: 2019-04-17 06:52 UTC
CC List: 10 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-04-17 06:52:33 UTC
Embargoed:



Description Sri Vignesh Selvan 2018-12-04 12:40:38 UTC
Description of problem:
=======================
Created a PVC using a glusterfs storage class. The file PVC stays in the Pending state with the error: failed to create volume: Failed to allocate new volume: No space left


Version-Release number of selected component (if applicable):
=============================================================
oc v3.11.43
kubernetes v1.11.0+d4cacc0
glusterfs-fuse-3.12.2-27.el7rhgs.x86_64
python2-gluster-3.12.2-27.el7rhgs.x86_64
glusterfs-server-3.12.2-27.el7rhgs.x86_64
gluster-block-0.2.1-29.el7rhgs.x86_64
glusterfs-api-3.12.2-27.el7rhgs.x86_64
glusterfs-cli-3.12.2-27.el7rhgs.x86_64
glusterfs-geo-replication-3.12.2-27.el7rhgs.x86_64
glusterfs-libs-3.12.2-27.el7rhgs.x86_64
glusterfs-3.12.2-27.el7rhgs.x86_64
glusterfs-client-xlators-3.12.2-27.el7rhgs.x86_64

How reproducible:
=================
2/2


Steps to Reproduce:
===================
1. Create a glusterfs storage class with the required secrets.
2. Create a file PVC against the newly created glusterfs storage class (a manifest sketch follows these steps).
3. Wait for the PVC to reach the Bound state.
4. Run oc describe pvc <pvc-name> to verify that the PVC was created successfully.
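
For reference, a minimal sketch of the manifests involved, reconstructed from the oc describe sc output shown under Additional info; the access mode and requested size are assumptions, since the report does not record them:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-sc
provisioner: kubernetes.io/glusterfs
allowVolumeExpansion: true
parameters:
  resturl: "http://172.31.151.173:8080"
  restuser: "admin"
  secretName: "heketi-storage-admin-secret"
  secretNamespace: "glusterfs"
  volumenameprefix: "vol"
  volumeoptions: "user.heketi.arbiter true,user.heketi.average-file-size 64"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim1
  namespace: glusterfs
spec:
  storageClassName: glusterfs-sc
  accessModes:
    - ReadWriteMany        # assumed; not recorded in this report
  resources:
    requests:
      storage: 1Gi         # assumed; not recorded in this report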

Actual results:
===============
The PVC stays in the Pending state with a ProvisioningFailed warning.

Expected results:
=================
The PVC should be created and reach the Bound state.

Additional info:
================

# oc get sc
NAME                       PROVISIONER                AGE
block-sc                   gluster.org/glusterblock   1h
glusterfs-registry         kubernetes.io/glusterfs    5d
glusterfs-registry-block   gluster.org/glusterblock   5d
glusterfs-sc               kubernetes.io/glusterfs    1h
glusterfs-storage          kubernetes.io/glusterfs    5d
glusterfs-storage-block    gluster.org/glusterblock   5d


# oc describe sc glusterfs-sc
Name:                  glusterfs-sc
IsDefaultClass:        No
Annotations:           <none>
Provisioner:           kubernetes.io/glusterfs
Parameters:            resturl=http://172.31.151.173:8080,restuser=admin,secretName=heketi-storage-admin-secret,secretNamespace=glusterfs,volumenameprefix=vol,volumeoptions=user.heketi.arbiter true,user.heketi.average-file-size 64
AllowVolumeExpansion:  True
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>

# oc get pvc
NAME      STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
claim1    Pending                                       glusterfs-sc   1h
test      Pending                                       glusterfs-sc   20m


# oc describe pvc claim1
Name:          claim1
Namespace:     glusterfs
StorageClass:  glusterfs-sc
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-class=glusterfs-sc
               volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/glusterfs
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
Events:
  Type     Reason              Age                From                         Message
  ----     ------              ----               ----                         -------
  Warning  ProvisioningFailed  10s (x3 over 30s)  persistentvolume-controller  Failed to provision volume with StorageClass "glusterfs-sc": failed to create volume: failed to create volume: Failed to allocate new volume: No space

Comment 3 Sri Vignesh Selvan 2018-12-05 06:39:49 UTC
Update with more info:
======================
There is far more free space than is needed to create these volumes: each of the three devices reports 1945 GiB free (2047 GiB total, 102 GiB used), as shown in the topology output below. Nevertheless, the heketi logs report the same "No space" error, along with "Minimum brick size limit reached. Out of space." The heketi logs are included in comment #2.
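
To isolate heketi from the Kubernetes provisioner, the same allocation can be attempted directly with heketi-cli, passing the same volume options that the storage class uses; a hypothetical sketch (the 1 GiB size is arbitrary, chosen only to exercise the allocator):

# heketi-cli volume create --size=1 --gluster-volume-options="user.heketi.arbiter true,user.heketi.average-file-size 64"

If this returns the same "Failed to allocate new volume: No space" error, the failure lies in heketi's brick allocation itself rather than in the kubernetes.io/glusterfs provisioner.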

[root@dhcp46-228 old]# heketi-cli topology info

Cluster Id: effc8ab5e79e03fb1543098d93ea7bea

    File:  true
    Block: true

    Volumes:

	Name: heketidbstorage
	Size: 2
	Id: 453880ba24c2879e5e817d5c345ac660
	Cluster Id: effc8ab5e79e03fb1543098d93ea7bea
	Mount: 10.70.47.12:heketidbstorage
	Mount Options: backup-volfile-servers=10.70.46.27,10.70.46.158
	Durability Type: replicate
	Replica: 3
	Snapshot: Disabled

		Bricks:
			Id: 894cbecaf66db81da681da7caf6bd061
			Path: /var/lib/heketi/mounts/vg_28228b7da510131e1b2ec3576863582a/brick_894cbecaf66db81da681da7caf6bd061/brick
			Size (GiB): 2
			Node: 886a1089a8319a4ffaaceef3dddb8dd7
			Device: 28228b7da510131e1b2ec3576863582a

			Id: d572794607144cee8a319129854a4b67
			Path: /var/lib/heketi/mounts/vg_a92afdf0b15e04465a2dc867d9bfec55/brick_d572794607144cee8a319129854a4b67/brick
			Size (GiB): 2
			Node: 09cc5466f83d2c7ce2dc6b07ba15cbee
			Device: a92afdf0b15e04465a2dc867d9bfec55

			Id: db51deea7488b1c4235f3ea60f4a88d4
			Path: /var/lib/heketi/mounts/vg_67f8da200e4905ce51b14976fa24ce00/brick_db51deea7488b1c4235f3ea60f4a88d4/brick
			Size (GiB): 2
			Node: 70e1f03a2dfdd06bc0cf3f84ce0887b5
			Device: 67f8da200e4905ce51b14976fa24ce00


	Name: vol_5d80ba3d7a3da6a7730c6f65e91152de
	Size: 100
	Id: 5d80ba3d7a3da6a7730c6f65e91152de
	Cluster Id: effc8ab5e79e03fb1543098d93ea7bea
	Mount: 10.70.47.12:vol_5d80ba3d7a3da6a7730c6f65e91152de
	Mount Options: backup-volfile-servers=10.70.46.27,10.70.46.158
	Durability Type: replicate
	Replica: 3
	Snapshot: Disabled

		Bricks:
			Id: 085fe7c88da7419c628b8f8451eaf1cc
			Path: /var/lib/heketi/mounts/vg_a92afdf0b15e04465a2dc867d9bfec55/brick_085fe7c88da7419c628b8f8451eaf1cc/brick
			Size (GiB): 100
			Node: 09cc5466f83d2c7ce2dc6b07ba15cbee
			Device: a92afdf0b15e04465a2dc867d9bfec55

			Id: 88511dca74c75f88907a1cd392522429
			Path: /var/lib/heketi/mounts/vg_28228b7da510131e1b2ec3576863582a/brick_88511dca74c75f88907a1cd392522429/brick
			Size (GiB): 100
			Node: 886a1089a8319a4ffaaceef3dddb8dd7
			Device: 28228b7da510131e1b2ec3576863582a

			Id: f233d3f05104e21f4b2d909c3844f532
			Path: /var/lib/heketi/mounts/vg_67f8da200e4905ce51b14976fa24ce00/brick_f233d3f05104e21f4b2d909c3844f532/brick
			Size (GiB): 100
			Node: 70e1f03a2dfdd06bc0cf3f84ce0887b5
			Device: 67f8da200e4905ce51b14976fa24ce00


    Nodes:

	Node Id: 09cc5466f83d2c7ce2dc6b07ba15cbee
	State: online
	Cluster Id: effc8ab5e79e03fb1543098d93ea7bea
	Zone: 2
	Management Hostnames: dhcp47-12.lab.eng.blr.redhat.com
	Storage Hostnames: 10.70.47.12
	Devices:
		Id:a92afdf0b15e04465a2dc867d9bfec55   Name:/dev/sdd            State:online    Size (GiB):2047    Used (GiB):102     Free (GiB):1945    
			Bricks:
				Id:085fe7c88da7419c628b8f8451eaf1cc   Size (GiB):100     Path: /var/lib/heketi/mounts/vg_a92afdf0b15e04465a2dc867d9bfec55/brick_085fe7c88da7419c628b8f8451eaf1cc/brick
				Id:d572794607144cee8a319129854a4b67   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_a92afdf0b15e04465a2dc867d9bfec55/brick_d572794607144cee8a319129854a4b67/brick

	Node Id: 70e1f03a2dfdd06bc0cf3f84ce0887b5
	State: online
	Cluster Id: effc8ab5e79e03fb1543098d93ea7bea
	Zone: 1
	Management Hostnames: dhcp46-27.lab.eng.blr.redhat.com
	Storage Hostnames: 10.70.46.27
	Devices:
		Id:67f8da200e4905ce51b14976fa24ce00   Name:/dev/sdd            State:online    Size (GiB):2047    Used (GiB):102     Free (GiB):1945    
			Bricks:
				Id:db51deea7488b1c4235f3ea60f4a88d4   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_67f8da200e4905ce51b14976fa24ce00/brick_db51deea7488b1c4235f3ea60f4a88d4/brick
				Id:f233d3f05104e21f4b2d909c3844f532   Size (GiB):100     Path: /var/lib/heketi/mounts/vg_67f8da200e4905ce51b14976fa24ce00/brick_f233d3f05104e21f4b2d909c3844f532/brick

	Node Id: 886a1089a8319a4ffaaceef3dddb8dd7
	State: online
	Cluster Id: effc8ab5e79e03fb1543098d93ea7bea
	Zone: 3
	Management Hostnames: dhcp46-158.lab.eng.blr.redhat.com
	Storage Hostnames: 10.70.46.158
	Devices:
		Id:28228b7da510131e1b2ec3576863582a   Name:/dev/sdd            State:online    Size (GiB):2047    Used (GiB):102     Free (GiB):1945    
			Bricks:
				Id:88511dca74c75f88907a1cd392522429   Size (GiB):100     Path: /var/lib/heketi/mounts/vg_28228b7da510131e1b2ec3576863582a/brick_88511dca74c75f88907a1cd392522429/brick
				Id:894cbecaf66db81da681da7caf6bd061   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_28228b7da510131e1b2ec3576863582a/brick_894cbecaf66db81da681da7caf6bd061/brick


[root@dhcp46-228 ~]# oc rsh glusterfs-storage-7bp7j
sh-4.2# df -h
Filesystem                                                                              Size  Used Avail Use% Mounted on
overlay                                                                                  60G   23G   38G  38% /
/dev/sdb1                                                                                60G   23G   38G  38% /run
devtmpfs                                                                                 16G     0   16G   0% /dev
shm                                                                                      64M     0   64M   0% /dev/shm
/dev/mapper/rhel_dhcp47--42-root                                                         50G  2.4G   48G   5% /etc/ssl
tmpfs                                                                                    16G  2.8M   16G   1% /run/lvm
tmpfs                                                                                    16G     0   16G   0% /sys/fs/cgroup
tmpfs                                                                                    16G   16K   16G   1% /run/secrets/kubernetes.io/serviceaccount
/dev/mapper/vg_67f8da200e4905ce51b14976fa24ce00-brick_db51deea7488b1c4235f3ea60f4a88d4  2.0G   33M  2.0G   2% /var/lib/heketi/mounts/vg_67f8da200e4905ce51b14976fa24ce00/brick_db51deea7488b1c4235f3ea60f4a88d4
/dev/mapper/vg_67f8da200e4905ce51b14976fa24ce00-brick_f233d3f05104e21f4b2d909c3844f532  100G   42G   59G  42% /var/lib/heketi/mounts/vg_67f8da200e4905ce51b14976fa24ce00/brick_f233d3f05104e21f4b2d909c3844f532
sh-4.2# exit
exit
[root@dhcp46-228 ~]# oc rsh glusterfs-storage-hmrm7
sh-4.2# df -h
Filesystem                                                                              Size  Used Avail Use% Mounted on
overlay                                                                                  60G   28G   33G  47% /
devtmpfs                                                                                 16G     0   16G   0% /dev
shm                                                                                      64M     0   64M   0% /dev/shm
/dev/sdb1                                                                                60G   28G   33G  47% /run
tmpfs                                                                                    16G  2.8M   16G   1% /run/lvm
/dev/mapper/rhel_dhcp47--42-root                                                         50G  2.0G   48G   4% /etc/ssl
tmpfs                                                                                    16G     0   16G   0% /sys/fs/cgroup
tmpfs                                                                                    16G   16K   16G   1% /run/secrets/kubernetes.io/serviceaccount
/dev/mapper/vg_a92afdf0b15e04465a2dc867d9bfec55-brick_d572794607144cee8a319129854a4b67  2.0G   33M  2.0G   2% /var/lib/heketi/mounts/vg_a92afdf0b15e04465a2dc867d9bfec55/brick_d572794607144cee8a319129854a4b67
/dev/mapper/vg_a92afdf0b15e04465a2dc867d9bfec55-brick_085fe7c88da7419c628b8f8451eaf1cc  100G   42G   59G  42% /var/lib/heketi/mounts/vg_a92afdf0b15e04465a2dc867d9bfec55/brick_085fe7c88da7419c628b8f8451eaf1cc
sh-4.2# exit
exit
[root@dhcp46-228 ~]# oc rsh glusterfs-storage-ppdv5
sh-4.2# df -h
Filesystem                                                                              Size  Used Avail Use% Mounted on
overlay                                                                                  60G   25G   36G  41% /
devtmpfs                                                                                 16G     0   16G   0% /dev
shm                                                                                      64M     0   64M   0% /dev/shm
/dev/sdb1                                                                                60G   25G   36G  41% /run
/dev/mapper/rhel_dhcp47--42-root                                                         50G  2.0G   48G   4% /etc/ssl
tmpfs                                                                                    16G  2.9M   16G   1% /run/lvm
tmpfs                                                                                    16G     0   16G   0% /sys/fs/cgroup
tmpfs                                                                                    16G   16K   16G   1% /run/secrets/kubernetes.io/serviceaccount
/dev/mapper/vg_28228b7da510131e1b2ec3576863582a-brick_894cbecaf66db81da681da7caf6bd061  2.0G   33M  2.0G   2% /var/lib/heketi/mounts/vg_28228b7da510131e1b2ec3576863582a/brick_894cbecaf66db81da681da7caf6bd061
/dev/mapper/vg_28228b7da510131e1b2ec3576863582a-brick_88511dca74c75f88907a1cd392522429  100G   42G   59G  42% /var/lib/heketi/mounts/vg_28228b7da510131e1b2ec3576863582a/brick_88511dca74c75f88907a1cd392522429

