Bug 1359408

Summary: [RGW:NFS]:- stat output for the "size" and "blocks" always shown as 0 for rgw buckets (directories) from nfs mount point
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: shylesh <shmohan>
Component: RGW
Assignee: Matt Benjamin (redhat) <mbenjamin>
Status: CLOSED ERRATA
QA Contact: Tejas <tchandra>
Severity: low
Docs Contact: Bara Ancincova <bancinco>
Priority: low
Version: 2.0
CC: anharris, cbodley, ceph-eng-bugs, flucifre, hnallurv, kbader, kdreyer, mbenjamin, sweil, uboppana
Target Milestone: z1
Target Release: 3.3
Hardware: x86_64
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Known Issue
Doc Text:
.NFS Ganesha does not show bucket size or number of blocks
NFS Ganesha, the NFS interface of the Ceph Object Gateway, lists buckets as directories. However, the interface always shows that the directory size and the number of blocks are `0`, even if some data is written to the buckets.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2020-02-03 04:26:48 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1322504, 1383917, 1412948, 1494421

Description shylesh 2016-07-23 20:04:59 UTC
The RGW buckets listed below are shown as directories on the NFS mount point.

stat always shows the size and block count as 0, even though the buckets contain data:

  File: ‘missed-hellonew-another-newad-bucket-creation’
  Size: 0               Blocks: 0          IO Block: 1048576 directory
Device: 27h/39d Inode: 7556023705472190679  Links: 3
Access: (0777/drwxrwxrwx)  Uid: (    0/    root)   Gid: (    0/    root)
Context: system_u:object_r:nfs_t:s0
Access: 1970-01-01 00:00:00.000000000 +0000
Modify: 1970-01-01 00:00:00.000000000 +0000
Change: 1970-01-01 00:00:00.000000000 +0000
 Birth: -
  File: ‘nfsbucket’
  Size: 0               Blocks: 0          IO Block: 1048576 directory
Device: 27h/39d Inode: 8564301539577355726  Links: 3
Access: (0777/drwxrwxrwx)  Uid: (4294967294/ UNKNOWN)   Gid: (4294967294/ UNKNOWN)
Context: system_u:object_r:nfs_t:s0
Access: 1970-01-22 02:03:28.000000000 +0000
Modify: 2016-07-23 13:11:23.000000000 +0000
Change: 2016-07-23 13:11:23.000000000 +0000
 Birth: -
  File: ‘test’
  Size: 0               Blocks: 0          IO Block: 1048576 directory
Device: 27h/39d Inode: 16759487093327469849  Links: 3
Access: (0755/drwxr-xr-x)  Uid: (4294967294/ UNKNOWN)   Gid: (4294967294/ UNKNOWN)
Context: system_u:object_r:nfs_t:s0
Access: 2016-07-23 19:55:45.000000000 +0000
Modify: 2016-07-23 19:55:45.000000000 +0000
Change: 2016-07-23 19:55:45.000000000 +0000
 Birth: -
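
A minimal reproduction sketch, for reference (the Ganesha host name, local mount point, and copied file below are placeholders, and the NFSv4.1 mount options are an assumption, not values taken from this cluster):

  mount -t nfs -o nfsvers=4.1,proto=tcp ganesha-host:/ /mnt/rgw
  cp /etc/hosts /mnt/rgw/nfsbucket/hosts-copy   # write some data into the bucket
  stat /mnt/rgw/nfsbucket                       # Size and Blocks still report 0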

Comment 2 Matt Benjamin (redhat) 2016-07-25 15:29:16 UTC
The RGW NFS implementation currently makes no attempt to map a bucket's stored data onto the reported directory size.

The implementation does not attempt to calculate or estimate link counts for ordinary directories, either.

Either of these might gain updated behavior in 2.1, but both may always diverge from ordinary NFS. RGW NFS is not expected to provide fully consistent NFS semantics; we are still exploring which semantics best fit customers' needs for NFS-S3 integration.
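
As a practical aside (an illustration, not part of any fix): authoritative bucket statistics remain available out of band from the gateway itself. A sketch, assuming admin access on a gateway node and using the "nfsbucket" bucket from the description:

  radosgw-admin bucket stats --bucket=nfsbucket

The "usage" section of the JSON output (e.g. size_kb_actual and num_objects) reflects data written through the NFS mount, even while stat on the directory reports Size and Blocks as 0.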

Comment 20 Giridhar Ramaraju 2019-08-05 13:06:51 UTC
Updating the QA Contact to Hemant. Hemant will reroute it to the appropriate QE Associate.

Regards,
Giri

Comment 21 Giridhar Ramaraju 2019-08-05 13:09:27 UTC
Updating the QA Contact to Hemant. Hemant will reroute it to the appropriate QE Associate.

Regards,
Giri