Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1432351

Summary: nfs reporting "no space left on device"
Product: [Community] GlusterFS
Reporter: daniel de baerdemaeker <debaerd>
Component: nfs
Assignee: Kaleb KEITHLEY <kkeithle>
Status: CLOSED EOL
QA Contact:
Severity: low
Docs Contact:
Priority: unspecified
Version: 3.10
CC: bugs
Target Milestone: ---
Keywords: Triaged
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed:
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
nfs.log from /var/log/glusterfs (flags: none)

Description daniel de baerdemaeker 2017-03-15 08:21:55 UTC
Created attachment 1263217 [details]
nfs.log from /var/log/glusterfs

Description of problem:
At night we back up our VMware environment to Gluster storage using ghetto-vcb.
Sometimes the backup fails, and in the nfs.log I get the following:
[2017-03-15 00:18:19.744077] W [MSGID: 112199] [nfs3-helpers.c:3494:nfs3_log_write_res] 0-nfs-nfsv3: /backup-vmware/VM-XDMGR1/VM-XDMGR1-2017-03-14_18-00-01/VM-XDMGR1-flat.vmdk => (XID: 93730b92, WRITE: NFS: 28(No space left on device), POSIX: 28(No space left on device)), count: 0, STABLE,wverf: 1489485939 [Invalid argument]

Here is the output of df:
[root@backup1 ~]# df -h
Filesystem                         Size  Used Avail Use% Mounted on
/dev/mapper/rhel_backup1-root       50G   36G   15G  71% /
devtmpfs                            16G     0   16G   0% /dev
tmpfs                               16G   96K   16G   1% /dev/shm
tmpfs                               16G   74M   16G   1% /run
tmpfs                               16G     0   16G   0% /sys/fs/cgroup
/dev/sdb4                          497M  262M  235M  53% /boot
/dev/sdb1                          200M  9.5M  191M   5% /boot/efi
/dev/mapper/vg_cluster-lv_cluster   39T   32T  7.0T  82% /mnt/data1
/dev/mapper/rhel_backup1-home      211G   88G  123G  42% /localhome
bu1:diskpools                       39T   34T  4.9T  88% /mnt/diskpools
bu1:gluvol0                         39T   34T  4.9T  88% /gluvol0
bu1:worm                            39T   34T  4.9T  88% /mnt/worm
bu1:gluvol0                         39T   34T  4.9T  88% /mnt/tsm
bu1:tsminst                         39T   34T  4.9T  88% /mnt/tsminst
tmpfs                              3.2G   16K  3.2G   1% /run/user/42
tmpfs                              3.2G     0  3.2G   0% /run/user/1001
tmpfs                              3.2G     0  3.2G   0% /run/user/0
bu1:vmware1                         39T   32T  7.0T  82% /backup1
/dev/sda1                          5.5T  5.0T  502G  92% /tsmdevices/1000114157

The error is produced on bu1:vmware1, where you can see there is plenty of space.
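A quick way to confirm that on the server itself is to ask the kernel directly via statvfs(). The sketch below is illustrative only and not part of the original report; the path /mnt/data1 is an assumption taken from the df output above, so substitute the backing filesystem that actually serves bu1:vmware1.

/* Hedged diagnostic sketch: print what the kernel reports for a mount
 * that NFS claims is full. Not GlusterFS code. */
#include <stdio.h>
#include <sys/statvfs.h>

int main(void)
{
    struct statvfs vfs;
    const char *path = "/mnt/data1";   /* assumed backing mount, from df above */

    if (statvfs(path, &vfs) != 0) {
        perror("statvfs");
        return 1;
    }

    /* Free space in bytes is f_bavail * f_frsize. If this prints terabytes
     * free while NFS writes fail with ENOSPC, the problem is in the
     * reporting/translation layer, not the disk. */
    unsigned long long avail =
        (unsigned long long)vfs.f_bavail * vfs.f_frsize;
    printf("%s: %llu bytes available (%llu of %llu blocks free)\n",
           path, avail,
           (unsigned long long)vfs.f_bavail,
           (unsigned long long)vfs.f_blocks);
    return 0;
}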


Version-Release number of selected component (if applicable):
glusterfs-server-3.10.0-1.el7.x86_64
glusterfs-ganesha-3.10.0-1.el7.x86_64
pcp-pmda-gluster-3.11.3-4.el7.x86_64
nfs-ganesha-gluster-2.4.1-1.el7.x86_64
glusterfs-client-xlators-3.10.0-1.el7.x86_64
samba-vfs-glusterfs-4.2.11-2.el7.x86_64
glusterfs-cli-3.10.0-1.el7.x86_64
glusterfs-libs-3.10.0-1.el7.x86_64
glusterfs-geo-replication-3.10.0-1.el7.x86_64
glusterfs-fuse-3.10.0-1.el7.x86_64
glusterfs-3.10.0-1.el7.x86_64
glusterfs-api-3.10.0-1.el7.x86_64
python2-gluster-3.10.0-1.el7.x86_64

I also had it on 3.9.

How reproducible:
Do not know; I think it has something to do with an integer overflow.
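To make that hypothesis concrete: the volume sizes in the df output above do not fit in 32-bit counters at common block sizes, so any code path that narrows 64-bit statfs-style values to 32 bits could report a bogus free-space figure. The following is a purely illustrative sketch, not GlusterFS source; the 4K block size is an assumption.

/* Hypothetical illustration of the overflow guess, using the sizes from df. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const uint64_t TiB = 1024ULL * 1024 * 1024 * 1024;

    uint64_t total_bytes = 39 * TiB;   /* volume size reported by df */
    uint64_t free_bytes  = 7 * TiB;    /* free space reported by df  */
    uint64_t block_size  = 4096;       /* assumed 4K filesystem blocks */

    uint64_t total_blocks = total_bytes / block_size;

    /* 39T / 4K is ~1.05e10 blocks, which does not fit in 32 bits. */
    printf("total blocks, 64-bit: %llu\n", (unsigned long long)total_blocks);
    printf("total blocks as u32 : %u (wrapped)\n", (unsigned)(uint32_t)total_blocks);

    /* A byte count narrowed to 32 bits wraps modulo 4 GiB; 7 TiB is an
     * exact multiple of 4 GiB, so it truncates all the way to 0, which
     * would look exactly like "no space left on device". */
    printf("free bytes as u32   : %u\n", (unsigned)(uint32_t)free_bytes);
    return 0;
}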

Steps to Reproduce:
1.
2.
3.

Actual results:
Error in the backup; the NFS write fails with "No space left on device".

Expected results:


Additional info:

Comment 1 Shyamsundar 2018-06-20 18:28:16 UTC
This bug is reported against a version of Gluster that is no longer maintained (or has been EOL'd). See https://www.gluster.org/release-schedule/ for the versions currently maintained.

As a result this bug is being closed.

If the bug persists on a maintained version of gluster or against the mainline gluster repository, request that it be reopened and the Version field be marked appropriately.
