Bug 1558964

Summary: 0kb files showing in glusterfs client
Product: [Community] GlusterFS
Component: core
Version: mainline
Hardware: x86_64
OS: Other
Status: CLOSED WONTFIX
Severity: high
Priority: unspecified
Reporter: Abhishek <abhi68868>
Assignee: bugs <bugs>
CC: abhi68868, bugs, nbalacha
Type: Bug
Last Closed: 2019-07-01 06:09:57 UTC

Description Abhishek 2018-03-21 12:20:05 UTC
Description of problem:
I am using Distributed-Replicate

Number of Bricks: 2 x (2 + 1) = 6


Version-Release number of selected component (if applicable):
glusterfs 3.11.3

How reproducible:


Steps to Reproduce:
1. Delete the 0 KB files from the backend bricks; after 2 or 3 days the issue occurs again.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 Abhishek 2018-03-22 03:39:44 UTC
Please advise how to resolve the 0 KB file issue in GlusterFS 3.11.

Comment 2 Nithya Balachandran 2018-10-05 13:53:14 UTC
(In reply to Abhishek from comment #1)
> Please advise how to resolve the 0 KB file issue in GlusterFS 3.11.

Please provide details as to what it is you are doing and what issue you are seeing.

Comment 3 Abhishek 2018-10-05 17:38:22 UTC
I am using Gluster 3.12 Distributed-Replicate with six physical servers.

Number of Bricks: 2 x (2 + 1) = 6

0 KB files are showing on the client side.

Comment 4 Nithya Balachandran 2018-10-08 02:44:05 UTC
(In reply to Abhishek from comment #3)
> I am using Gluster 3.12 Distributed-Replicate with six physical servers.
> 
> Number of Bricks: 2 x (2 + 1) = 6
> 
> 0 KB files are showing on the client side.

This is not sufficient information to determine the problem or debug it.

What do you mean by 0 KB files, and why do you think they should not be seen? Please provide the output you see and the expected behaviour.

Why are you deleting files from the backend?
Please also provide the gluster volume information and details of the clients being used.
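
For example, something along these lines, run on any one of the servers, would give us the basics (a sketch; substitute your actual volume name for <volname>):

gluster volume info <volname>
gluster volume status <volname> detail
glusterfs --version    (on each client machine as well)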

Comment 5 Abhishek 2018-10-08 08:10:10 UTC
In the GlusterFS volume, a file sometimes shows a size of 0 KB and sometimes shows its proper size. When a file appears as 0 KB I am unable to open it; after refreshing two or three times it shows the proper size and then all of the content.

Example:

-rw-r--r-- 1 apache apache  129620 May  2 15:01 lic premium 10.pdf
-rw-r--r-- 1 apache apache 2202530 Mar 23  2017 LICs-Anmol-Jeevan-2-09062016.pdf
-rw-r--r-- 1 apache apache       0 Jan  1  2018 mail.pdf
-rw-r--r-- 1 apache apache       0 Jan  1  2018 mail.pdf
drwxr-xr-x 2 apache apache    4096 Jul  8 05:07 mediclam-icard
-rw-r--r-- 1 apache apache       0 Jan  1  2018 mobile_account_statement.pdf
-rw-r--r-- 1 apache apache       0 Jan  1  2018 mobile_account_statement.pdf


GLUSTER Configuration

Type: Distributed-Replicate
Volume ID: b12bb7dd-1c9e-40cc-9586-793cd179bad0
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: R3-Gluster01:/export/sda5/brick
Brick2: R4-Gluster01:/export/sda5/brick
Brick3: R4-Gluster02:/export/sda5/brick (arbiter)
Brick4: R3-Gluster02:/export/sda5/brick
Brick5: R4-Gluster03:/export/sda5/brick
Brick6: R3-Gluster03:/export/sda5/brick (arbiter)

Comment 6 Nithya Balachandran 2018-10-08 08:48:09 UTC
(In reply to Abhishek from comment #5)
> Example:
>
> -rw-r--r-- 1 apache apache  129620 May  2 15:01 lic premium 10.pdf
> -rw-r--r-- 1 apache apache 2202530 Mar 23  2017 LICs-Anmol-Jeevan-2-09062016.pdf
> -rw-r--r-- 1 apache apache       0 Jan  1  2018 mail.pdf
> -rw-r--r-- 1 apache apache       0 Jan  1  2018 mail.pdf
> drwxr-xr-x 2 apache apache    4096 Jul  8 05:07 mediclam-icard
> -rw-r--r-- 1 apache apache       0 Jan  1  2018 mobile_account_statement.pdf
> -rw-r--r-- 1 apache apache       0 Jan  1  2018 mobile_account_statement.pdf


It looks like your T (DHT linkto) files are being counted as data files.

Have you done anything directly on the bricks instead of going through the client?


Please check for these files directly on every brick and let me know the ls -l output for them.
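
For example, on each of the six servers (a sketch; the brick path below is taken from your volume info, adjust if it differs and substitute the actual file path):

ls -l /export/sda5/brick/<path to file>
getfattr -e hex -m . -d /export/sda5/brick/<path to file>

Zero-byte entries with mode ---------T and a trusted.glusterfs.dht.linkto xattr are DHT linkto files; they are expected on the bricks but should not show up as data files on the client.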

Comment 7 Nithya Balachandran 2018-10-08 08:52:15 UTC
Also, how full are your bricks?
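
For example, the df -h output for each brick mount point would be enough (one command per server; substitute the actual mount point):

df -h <brick mount point>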

Comment 8 Nithya Balachandran 2018-10-08 08:53:39 UTC
(In reply to Abhishek from comment #1)
> Please advise how to resolve the 0 KB file issue in GlusterFS 3.11.

JFYI, Gluster 3.11 is EOL now.

Comment 9 Abhishek 2018-10-08 09:30:17 UTC
R3-Gluster01-
Disk space 73T   22T   52T  29% /gluster-vol

-rw-r--r--  2 48 48    4827 Oct  9  2016 jio dongle bill.pdf
-rw-r--r--  2 48 48 2202530 Mar 23  2017 LICs-Anmol-Jeevan-2-09062016.pdf
-rw-r--r--  2 48 48  165035 Jun 30  2016 mail.pdf
drwxr-xr-x  2 48 48      58 Jun 28 10:52 mediclam-icard/
-rw-r--r--  2 48 48   99564 Jun 21  2016 mobile_account_statement.pdf


R3-Gluster02
Disk space 37T  6.2T   31T  17% /export/sda5

---------T  2 48 48       0 Sep 18  2017 jio dongle bill.pdf
-rw-r--r--  2 48 48  129620 May  2 15:01 lic premium 10.pdf
---------T  2 48 48       0 Sep 18  2017 LICs-Anmol-Jeevan-2-09062016.pdf
-rw-r--r--  2 48 48       0 Jan  1  2018 mail.pdf
drwxr-xr-x  2 48 48     106 Jun 28 10:53 mediclam-icard/
-rw-r--r--  2 48 48       0 Jan  1  2018 mobile_account_statement.pdf



R3-Gluster03
Disk space 37T  115G   37T   1% /export/sda5

---------T 2 48 48    0 Sep 18  2017 jio dongle bill.pdf
-rw-r--r-- 2 48 48    0 May  2 15:01 lic premium 10.pdf
---------T 2 48 48    0 Sep 18  2017 LICs-Anmol-Jeevan-2-09062016.pdf
-rw-r--r-- 2 48 48    0 Jan  1  2018 mail.pdf
drwxr-xr-x 2 48 48  154 Aug  3 12:46 mediclam-icard
-rw-r--r-- 2 48 48    0 Jan  1  2018 mobile_account_statement.pdf



R4-Gluster01
Disk space 37T   15T   22T  41% /export/sda5

-rw-r--r--  2 48 48    4827 Oct  9  2016 jio dongle bill.pdf
-rw-r--r--  2 48 48 2202530 Mar 23  2017 LICs-Anmol-Jeevan-2-09062016.pdf
-rw-r--r--  2 48 48  165035 Jun 30  2016 mail.pdf
drwxr-xr-x  2 48 48      58 Jun 28 10:52 mediclam-icard/
-rw-r--r--  2 48 48   99564 Jun 21  2016 mobile_account_statement.pdf


R4-Gluster02
Disk space 37T  122G   37T   1% /export/sda5

-rw-r--r--  2 48 48    0 Oct  9  2016 jio dongle bill.pdf
-rw-r--r--  2 48 48    0 Mar 23  2017 LICs-Anmol-Jeevan-2-09062016.pdf
-rw-r--r--  2 48 48    0 Jun 30  2016 mail.pdf
drwxr-xr-x  2 48 48  154 Jul  8 05:07 mediclam-icard/
-rw-r--r--  2 48 48    0 Jun 21  2016 mobile_account_statement.pdf


R4-Gluster03
Disk space 37T  6.2T   31T  17% /export/sda5

---------T  2 48 48       0 Oct  9  2016 jio dongle bill.pdf
-rw-r--r--  2 48 48  129620 May  2 15:01 lic premium 10.pdf
---------T  2 48 48       0 Mar 23  2017 LICs-Anmol-Jeevan-2-09062016.pdf
-rw-r--r--  2 48 48       0 Jun 30  2016 mail.pdf
drwxr-xr-x  2 48 48     154 Jul  8 05:07 mediclam-icard/
-rw-r--r--  2 48 48       0 Jun 21  2016 mobile_account_statement.pdf

Comment 10 Nithya Balachandran 2018-10-08 11:11:09 UTC
Please also provide the output of

getfattr -e hex -m . -d <path to T file on the brick>

for all the files that show up as 0 byte files.
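
If it helps to locate them, something like the following on each brick should list the candidate zero-byte linkto files (a rough sketch; it skips the internal .glusterfs directory and matches zero-byte files with the sticky bit set):

find /export/sda5/brick -path '*/.glusterfs' -prune -o -type f -size 0 -perm -1000 -print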

Are you using a Fuse client to access the volume?

Comment 11 Abhishek 2018-10-09 06:36:12 UTC
Yes, I am using a FUSE client to access the volume.


root@R3-Gluster01:/export/sda5/brick# getfattr -e hex -m . -d LICs-Anmol-Jeevan-2-09062016.pdf
# file: LICs-Anmol-Jeevan-2-09062016.pdf
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.lockerdata-client-2=0x000000000000000000000000
trusted.gfid=0xf819a17b48cc46b796e8912346477688

root@R3-Gluster01:/export/sda5/brick# getfattr -e hex -m . -d mail.pdf
# file: mail.pdf
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.lockerdata-client-2=0x000000000000000000000000
trusted.gfid=0x28a1dd430f934c718e38ca80a89eb087
-----------------
root@R3-Gluster02:/export/sda5/brick# getfattr -e hex -m . -d LICs-Anmol-Jeevan-2-09062016.pdf
# file: LICs-Anmol-Jeevan-2-09062016.pdf
trusted.gfid=0xf819a17b48cc46b796e8912346477688
trusted.glusterfs.dht.linkto=0x6c6f636b6572646174612d7265706c69636174652d3000

root@R3-Gluster02:/export/sda5/brick# getfattr -e hex -m . -d mail.pdf
# file: mail.pdf
trusted.gfid=0x28a1dd430f934c718e38ca80a89eb087


------------
root@R3-Gluster03:/export/sda5/brick# getfattr -e hex -m . -d LICs-Anmol-Jeevan-2-09062016.pdf
# file: LICs-Anmol-Jeevan-2-09062016.pdf
trusted.gfid=0xf819a17b48cc46b796e8912346477688
trusted.glusterfs.dht.linkto=0x6c6f636b6572646174612d7265706c69636174652d3000

root@DL-R3-Gluster03:/export/sda5/brick# getfattr -e hex -m . -d mail.pdf
# file: mail.pdf
trusted.gfid=0x28a1dd430f934c718e38ca80a89eb087

---------------
root@R4-Gluster01:/export/sda5/brick# getfattr -e hex -m . -d LICs-Anmol-Jeevan-2-09062016.pdf
# file: LICs-Anmol-Jeevan-2-09062016.pdf
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.lockerdata-client-2=0x000000000000000000000000
trusted.gfid=0xf819a17b48cc46b796e8912346477688

root@R4-Gluster01:/export/sda5/brick# getfattr -e hex -m . -d mail.pdf
# file: mail.pdf
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.lockerdata-client-2=0x000000000000000000000000
trusted.gfid=0x28a1dd430f934c718e38ca80a89eb087
-----------------------

root@R4-Gluster02:/export/sda5/brick# getfattr -e hex -m . -d LICs-Anmol-Jeevan-2-09062016.pdf
# file: LICs-Anmol-Jeevan-2-09062016.pdf
trusted.afr.dirty=0x000000000000000000000000
trusted.gfid=0xf819a17b48cc46b796e8912346477688

root@DL-R4-Gluster02:/export/sda5/brick# getfattr -e hex -m . -d mail.pdf
# file: mail.pdf
trusted.afr.dirty=0x000000000000000000000000
trusted.gfid=0x28a1dd430f934c718e38ca80a89eb087
----------------

root@R4-Gluster03:/export/sda5/brick# getfattr -e hex -m . -d LICs-Anmol-Jeevan-2-09062016.pdf
# file: LICs-Anmol-Jeevan-2-09062016.pdf
trusted.gfid=0xf819a17b48cc46b796e8912346477688
trusted.glusterfs.dht.linkto=0x6c6f636b6572646174612d7265706c69636174652d3000

root@R4-Gluster03:/export/sda5/brick# getfattr -e hex -m . -d mail.pdf
# file: mail.pdf
trusted.afr.lockerdata-client-3=0x000000000000000000000000
trusted.afr.lockerdata-client-5=0x000000000000000000000000
trusted.gfid=0x28a1dd430f934c718e38ca80a89eb087
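
(For reference: the trusted.glusterfs.dht.linkto value above is the target subvolume name, hex-encoded. Decoding it, for example with xxd if available, shows these linkto entries point to the lockerdata-replicate-0 subvolume:

echo 6c6f636b6572646174612d7265706c69636174652d3000 | xxd -r -p
lockerdata-replicate-0)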

Comment 12 Shyamsundar 2018-10-23 14:54:04 UTC
Release 3.12 has been EOLed and this bug was still in the NEW state, hence moving the version to mainline so that it can be triaged and appropriate action taken.

Comment 13 Abhishek 2018-10-25 05:14:08 UTC
I am unable to find a .gfid folder on the bricks or at the mount point.

Comment 14 Yaniv Kaul 2019-07-01 06:09:57 UTC
Closing old bugs - please re-open if relevant.