Bug 1330555 - file sizes showing as zero on issuing ls on an nfs mount after adding brick to convert a single brick to replica volume(1x2)
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: replicate
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Pranith Kumar K
QA Contact: storage-qa-internal@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-04-26 12:49 UTC by Nag Pavan Chilakam
Modified: 2019-04-03 09:28 UTC

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-04-16 18:18:15 UTC
Embargoed:



Description Nag Pavan Chilakam 2016-04-26 12:49:38 UTC
Description of problem:
=========================
While verifying bug#1248998, I ran the following case to check that the fix works on an NFS mount. This is what I observed:
I created files of about 10 MB each on an NFS mount of a single-brick volume, then added a brick with replica 2 to convert the volume to 1x2.
The issue in the bug mentioned above was that, after the add-brick, the user would find all the previously created files missing (due to reverse heal).
That part works now; however, the file sizes on the NFS mount show as zero when we issue ls -l or ll.
If I repeat the command, it eventually shows the right sizes.

Assuming this could be due to caching of stale attributes, I created another mount point before adding the brick (but did not cd into it) and cd'ed into it only after the add-brick; even in that case I see the issue.


====>before add brick
[root@dhcp35-103 nov]# ll
total 30000
-rw-r--r--. 1 root root 10240000 Apr 26 17:43 dd1
-rw-r--r--. 1 root root 10240000 Apr 26 17:43 dd2
-rw-r--r--. 1 root root 10240000 Apr 26 17:43 dd3


===>after add brick
[root@dhcp35-103 mnt]# ll /mnt/nov-2
total 0
-rw-r--r--. 1 root root 0 Apr 26 17:45 dd1
-rw-r--r--. 1 root root 0 Apr 26 17:45 dd2
-rw-r--r--. 1 root root 0 Apr 26 17:45 dd3
[root@dhcp35-103 mnt]# ll /mnt/nov-2
total 0
-rw-r--r--. 1 root root 0 Apr 26 17:45 dd1
-rw-r--r--. 1 root root 0 Apr 26 17:45 dd2
-rw-r--r--. 1 root root 0 Apr 26 17:45 dd3
[root@dhcp35-103 mnt]# ll /mnt/nov-2
total 0
-rw-r--r--. 1 root root 0 Apr 26 17:45 dd1
-rw-r--r--. 1 root root 0 Apr 26 17:45 dd2
-rw-r--r--. 1 root root 0 Apr 26 17:45 dd3
[root@dhcp35-103 mnt]# ll /mnt/nov-2
total 30000
-rw-r--r--. 1 root root 10240000 Apr 26 17:43 dd1
-rw-r--r--. 1 root root 10240000 Apr 26 17:43 dd2
-rw-r--r--. 1 root root 10240000 Apr 26 17:43 dd3
[root@dhcp35-103 mnt]# 


Version-Release number of selected component (if applicable):
==================
3.7.9-2
[root@dhcp35-191 feb]# rpm -qa|grep gluster
glusterfs-client-xlators-3.7.9-2.el7rhgs.x86_64
glusterfs-server-3.7.9-2.el7rhgs.x86_64
python-gluster-3.7.5-19.el7rhgs.noarch
gluster-nagios-addons-0.2.5-1.el7rhgs.x86_64
vdsm-gluster-4.16.30-1.3.el7rhgs.noarch
glusterfs-3.7.9-2.el7rhgs.x86_64
glusterfs-api-3.7.9-2.el7rhgs.x86_64
glusterfs-cli-3.7.9-2.el7rhgs.x86_64
glusterfs-geo-replication-3.7.9-2.el7rhgs.x86_64
gluster-nagios-common-0.2.3-1.el7rhgs.noarch
glusterfs-libs-3.7.9-2.el7rhgs.x86_64
glusterfs-fuse-3.7.9-2.el7rhgs.x86_64
glusterfs-rdma-3.7.9-2.el7rhgs.x86_64



Steps to Reproduce:
==================
1. Create a single-brick volume.
2. Mount it over NFS and create, say, 3 files of 10 MB each.
3. Add a brick with replica 2 to convert the above volume to 1x2.
4. On either a new or an existing mount point, issue ll or ls -l.

It can be seen that the file sizes are shown as zero.
On repeated ll or ls -l, the files get healed and the correct sizes are shown.
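The steps above can be sketched as a shell session. This is a reproduction sketch, not a verified transcript; the volume name ("nov"), server hostname ("server1"), brick paths, and mount point are all hypothetical placeholders, and it assumes a working RHGS 3.1 node with gluster-NFS enabled:

```shell
# Hypothetical names throughout: volume "nov", server "server1".
# 1. Create and start a single-brick volume.
gluster volume create nov server1:/bricks/nov-b1 force
gluster volume start nov

# 2. Mount it over NFS (gluster-NFS serves v3) and create three ~10 MB files.
mount -t nfs -o vers=3 server1:/nov /mnt/nov
for i in 1 2 3; do
    dd if=/dev/zero of=/mnt/nov/dd$i bs=1024 count=10000
done

# 3. Add a second brick with replica 2, converting the volume to 1x2.
gluster volume add-brick nov replica 2 server1:/bricks/nov-b2

# 4. List the files from the same (or a fresh) NFS mount point.
#    Per this report, sizes show as 0 until self-heal completes,
#    after which repeated listings show the correct ~10 MB sizes.
ls -l /mnt/nov
```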

NFS log:
[2016-04-26 12:15:49.206540] I [MSGID: 108026] [afr-self-heal-common.c:770:afr_log_selfheal] 0-nov-replicate-0: Completed data selfheal on ff8af5a1-30ed-47dd-875a-aa3199e9a02d. sources=[0]  sinks=1
[2016-04-26 12:15:49.228916] I [MSGID: 108026] [afr-self-heal-common.c:770:afr_log_selfheal] 0-nov-replicate-0: Completed data selfheal on a9eb4ff8-211b-4295-9d2e-23753e8d8a11. sources=[0]  sinks=1
[2016-04-26 12:15:49.230252] I [MSGID: 108026] [afr-self-heal-metadata.c:56:__afr_selfheal_metadata_do] 0-nov-replicate-0: performing metadata selfheal on ff8af5a1-30ed-47dd-875a-aa3199e9a02d


The complete NFS server log is attached; check the last few lines.

