Bug 1290036 - [RFE] Slow `ls` performance and directory listing within Samba
Summary: [RFE] Slow `ls` performance and directory listing within Samba
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: samba
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Michael Adam
QA Contact: Vivek Das
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-12-09 14:25 UTC by Marcel Hergaarden
Modified: 2018-04-10 10:49 UTC
CC List: 6 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-04-10 10:49:58 UTC



Description Marcel Hergaarden 2015-12-09 14:25:51 UTC
Description of problem:
The ls command takes a long time to return results on a large Gluster volume. The same applies to directory listings on an SMB share.
This is inherent to the distributed nature of GlusterFS: directory listings are served over the network from the bricks rather than from a local filesystem.

Version-Release number of selected component (if applicable):
3.1

How reproducible:
Set up a large Gluster volume, using at least 4 nodes.
Mount the Gluster volume, put data into it and run the `ls` command on the mountpoint.

Steps to Reproduce:
1. Set up Red Hat Gluster Storage using at least 4 nodes.
2. Create a distributed-replicated volume and mount the volume.
3. Put data into the Gluster volume and then run the `ls` command (see the example commands below).
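
For illustration, a minimal 2x2 distributed-replicated setup could look roughly like this (hostnames, brick paths and the volume name are placeholders, not taken from an actual deployment):

# gluster volume create testvol replica 2 \
    node1:/bricks/brick1 node2:/bricks/brick1 \
    node3:/bricks/brick1 node4:/bricks/brick1
# gluster volume start testvol
# mount -t glusterfs node1:/testvol /mnt/testvol
# time ls /mnt/testvol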

Actual results:
Slow response: results take a long time to appear, much longer than on a local filesystem.

Expected results:
Normal response to the `ls` command

Additional info:
Would there perhaps be an option to make use of the Glusterfind mechanism, a local db file, or some other mechanism to query the Glusterfind db/info to speed up the `ls` results significantly? The same applies to SMB.

Comment 2 Raghavendra G 2015-12-11 05:16:57 UTC
Hi Marcel,

Is it possible to share your volume info? I want to know how many bricks the volume has. Your description says it is a 4-node setup, but a single node can host multiple bricks, so the brick count cannot be deduced from the node count alone.

Volume information can be found using:

# gluster volume info <your-volume-name>
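
The brick count can be read directly from that output; for example (a hedged one-liner, assuming the usual "Number of Bricks:" line is present):

# gluster volume info <your-volume-name> | grep 'Number of Bricks'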

Comment 3 Raghavendra G 2015-12-11 05:56:00 UTC
I need some more information too.

1. Can you confirm whether ls is unaliased? The reason I ask is that ls is commonly aliased to include options (like ls --color), which results in a stat being done on each directory entry, whereas plain ls results in just a readdir. That way we can figure out whether the performance hit is in readdir, stat, or some other syscall. It would be helpful if you could give us some numbers on plain ls (readdir-only) performance; see the example commands after point 2.

2. What data is present on the mount point? How many files/directories are present? Are directories nested? If yes, how are they nested: deeply in a vertical fashion (/a/b/c/d/e/f etc.) or broadly in a horizontal fashion (/a/b, /a/c, /a/d, /a/e, /a/f etc.)?
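
A rough way to separate the two cases from the mount point (a sketch; the mountpoint path is a placeholder, and the backslash bypasses any alias):

# type ls                                   <- shows whether ls is aliased
# cd /path/to/glusterfs-mountpoint
# time \ls > /dev/null                      <- mostly readdir
# time ls -l > /dev/null                    <- adds a stat per entry
# strace -c -o /tmp/ls-syscalls.txt ls -l > /dev/null
                                            <- per-syscall time summary written to /tmp/ls-syscalls.txt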

Comment 4 Marcel Hergaarden 2015-12-15 12:20:56 UTC
Hi Raghavendra,

I don't have a production setup available myself; this is feedback I receive from customers and administrators. So I cannot give additional technical info at this moment, other than the fact that `ls` and directory listings over SMB commonly show poor performance.

It is known that this is fairly common behaviour for Gluster (BZ1117833).
Since this cannot be changed easily, my suggestion here is to check from a technical perspective whether it would be possible to make smart use of the Glusterfind information and query that instead of the Gluster filesystem itself.

Normally, the Glusterfind info is kept up to date as changes occur on the GlusterFS volume. Therefore I was wondering whether it would be possible to query the Glusterfind DB instead of performing a filesystem crawl when a directory listing is requested.
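
For context, the usual Glusterfind workflow looks roughly like this (session and volume names are placeholders); the question behind this RFE is whether the change data these commands maintain could also be used to answer directory listings:

# glusterfind create listing-session myvol                  <- create a session; records a baseline
# glusterfind pre listing-session myvol /tmp/changes.txt    <- write files changed since the baseline to the output file
# glusterfind post listing-session myvol                    <- advance the session's checkpoint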

So I'm not asking you to dive into the current situation; I'm suggesting a possible enhancement via this BZ.

I hope this explains the context somewhat more :-)

