Description of problem:
=======================
I have created a tiered volume whose hot tier is full, so new writes were failing. I then deleted all the files from the mount using rm -rf. Now, when I try to create zero-byte files, creation still fails with "No space left on device":

[root@mia diskfull]# touch f{1..10}
touch: cannot touch ‘f1’: No space left on device
touch: cannot touch ‘f2’: No space left on device

fuse client logs:
=================
[2015-11-02 06:34:28.438578] W [fuse-bridge.c:1978:fuse_create_cbk] 0-glusterfs-fuse: 9922003: /f8 => -1 (No space left on device)
[2015-11-02 06:34:28.452211] E [MSGID: 114031] [client-rpc-fops.c:251:client3_3_mknod_cbk] 0-diskfull-client-7: remote operation failed. Path: /f9 [No space left on device]
[2015-11-02 06:34:28.452347] E [MSGID: 114031] [client-rpc-fops.c:251:client3_3_mknod_cbk] 0-diskfull-client-6: remote operation failed. Path: /f9 [No space left on device]
[2015-11-02 06:34:28.452745] W [fuse-bridge.c:1978:fuse_create_cbk] 0-glusterfs-fuse: 9922006: /f9 => -1 (No space left on device)
[2015-11-02 06:34:28.468526] E [MSGID: 114031] [client-rpc-fops.c:251:client3_3_mknod_cbk] 0-diskfull-client-5: remote operation failed. Path: /f10 [No space left on device]
[2015-11-02 06:34:28.468536] E [MSGID: 114031] [client-rpc-fops.c:251:client3_3_mknod_cbk] 0-diskfull-client-4: remote operation failed. Path: /f10 [No space left on device]
[2015-11-02 06:34:28.469070] W [fuse-bridge.c:1978:fuse_create_cbk] 0-glusterfs-fuse: 9922009: /f10 => -1 (No space left on device)
[2015-11-02 06:34:28.331038] E [MSGID: 114031] [client-rpc-fops.c:251:client3_3_mknod_cbk] 0-diskfull-client-4: remote operation failed. Path: /f1 [No space left on device]
[2015-11-02 06:34:28.331057] E [MSGID: 114031] [client-rpc-fops.c:251:client3_3_mknod_cbk] 0-diskfull-client-5: remote operation failed. Path: /f1 [No space left on device]
[2015-11-02 06:34:28.349900] E [MSGID: 114031] [client-rpc-fops.c:251:client3_3_mknod_cbk] 0-diskfull-client-5: remote operation failed. Path: /f2 [No space left on device]

Tier logs:
==========
[2015-11-02 13:46:00.962757] E [MSGID: 109037] [tier.c:463:tier_migrate_using_query_file] 0-diskfull-tier-dht: ERROR in current lookup
[2015-11-02 13:46:00.962932] E [MSGID: 109037] [tier.c:1488:tier_start] 0-diskfull-tier-dht: Demotion failed
[2015-11-02 13:48:00.981438] E [MSGID: 109037] [tier.c:463:tier_migrate_using_query_file] 0-diskfull-tier-dht: ERROR in current lookup
[2015-11-02 13:48:00.981573] E [MSGID: 109037] [tier.c:1488:tier_start] 0-diskfull-tier-dht: Demotion failed
[2015-11-02 13:50:01.000564] E [MSGID: 109037] [tier.c:463:tier_migrate_using_query_file] 0-diskfull-tier-dht: ERROR in current lookup
[2015-11-02 13:50:01.000704] E [MSGID: 109037] [tier.c:1488:tier_start] 0-diskfull-tier-dht: Demotion failed

Version-Release number of selected component (if applicable):
=============================================================
glusterfs-server-3.7.5-5.el7rhgs.x86_64

Steps to Reproduce:
===================
1. Create a tiered volume with small hot tier bricks.
2. Create files until they exceed the hot tier capacity; after some time, creates start failing.
3. Create some other random files.
4. Create more files until creation fails with "No space left on device".
5. Delete all files from the mount using rm -rf, then retry creating zero-byte files.
6. Creation still fails with "No space left on device". (A reproduction sketch follows this list.)
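For reference, a minimal reproduction sketch, assuming an existing volume named "diskfull" mounted at /mnt/diskfull (the mount point and fill size are assumptions; the hot brick paths are taken from the backend listing under Additional info; attach-tier is the 3.7-era CLI):

# attach a small hot tier to the existing volume
gluster volume attach-tier diskfull zod:/dummy/brick106/diskfull_hot zod:/dummy/brick107/diskfull_hot

# fill the hot tier until creates start failing with ENOSPC
# (count is illustrative; size it to exceed the hot bricks)
dd if=/dev/zero of=/mnt/diskfull/filler bs=1M count=2048

# delete everything from the mount, then retry zero-byte creates;
# they still fail with "No space left on device"
rm -rf /mnt/diskfull/*
touch /mnt/diskfull/f{1..10}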
Additional info:
================
backend bricks:

[root@zod ~]# ll /*/bri*/dis*/
/dummy/brick106/diskfull_hot/:
total 4
---------T. 2 root root 0 Nov  2 18:08 iceman

/dummy/brick107/diskfull_hot/:
total 0

/rhs/brick1/diskfull/:
total 0

/rhs/brick2/diskfull/:
total 0
---------T. 2 root root 0 Nov  2 17:19 FnF7.mkv
[root@zod ~]#
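The ---------T. entries above look like DHT linkto files (zero-byte, sticky-only mode 1000) left behind after the rm -rf. A way to confirm this on a brick, assuming root access on the brick host (paths are from the listing above; the xattr name is the standard DHT one):

# zero-byte, mode-1000 files remaining on the hot tier brick
find /dummy/brick106/diskfull_hot -type f -perm 1000 -size 0

# a DHT linkto file carries the trusted.glusterfs.dht.linkto xattr
getfattr -n trusted.glusterfs.dht.linkto -e text /dummy/brick106/diskfull_hot/iceman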
sosreports @ the below location; refer to volume "diskfull":

[nchilaka@rhsqe-repo bug.1277088]$ pwd
/home/repo/sosreports/nchilaka/bug.1277088
Assigning this to Joseph as the patch he posted for BZ#1291566 should fix this as well. http://review.gluster.org/12969
https://code.engineering.redhat.com/gerrit/#/c/64284/
As tier is not being actively developed, I'm closing this bug. Feel free to reopen it if necessary.