Bug 1670382 - parallel-readdir prevents directories and files listing
Summary: parallel-readdir prevents directories and files listing
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: gluster-smb
Version: 5
Hardware: x86_64
OS: Linux
Priority: low
Severity: low
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact: Poornima G
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-01-29 13:13 UTC by Marcin
Modified: 2019-10-10 02:41 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-10-10 02:41:44 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Marcin 2019-01-29 13:13:29 UTC
Description of problem:

It looks like the problem described at the following link still exists:
https://bugzilla.redhat.com/show_bug.cgi?id=1512371

In our case, however, the main clients are Windows machines. Even after creating files and directories directly on the gluster volume, the second cluster host cannot see the new files and directories.

The problem does not occur in, for example, version 4.1.7 of Gluster.

After disabling `performance.parallel-readdir`, the problem disappears and everything works correctly.
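For reference, this is roughly how the option is toggled (a sketch only; the volume name gv0 matches the volume info further below):

gluster volume set gv0 performance.parallel-readdir off   # problem disappears
gluster volume set gv0 performance.parallel-readdir on    # problem comes back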


Version-Release number of selected component (if applicable):

- Ubuntu 16.04.5 LTS x64
- Gluster server version 5.3
- Gluster client version 5.3


Steps to Reproduce:
1. Enable performance.parallel-readdir on the volume.
2. Mount the volume on a client using the samba protocol.
3. Create a directory or file within the volume (see the command sketch below).
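A rough command-level sketch of the steps above (the volume name gv0 is from the volume info below; the samba share name and drive letter are placeholders):

# 1. enable the option on the volume
gluster volume set gv0 performance.parallel-readdir on
# 2. on a Windows client, map the samba share that exports gv0 (share name is site-specific), e.g.:
#      net use Z: \\test-sn1\gv0
# 3. create a directory and a file on the mapped drive, e.g.:
#      mkdir Z:\testdir
#      echo test > Z:\testfile.txt
# then refresh the listing on this client and list the share from a second client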

Expected results:

- The new directories and files should be visible from all clients.


Additional info:

Volume Name: gv0
Type: Distributed-Replicate
Volume ID: 8153ffd6-6da3-462d-a1c3-9a23da127a3a
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: test-sn1:/storage/sda2/brick0
Brick2: test-sn2:/storage/sda2/brick0
Brick3: test-sn1:/storage/sda3-2/brick1
Brick4: test-sn2:/storage/sda3/brick1
Brick5: test-sn1:/storage/sda4/brick2
Brick6: test-sn2:/storage/sda4/brick2
Options Reconfigured:
server.statedump-path: /var/log/glusterfs/
nfs.disable: on
transport.address-family: inet
cluster.self-heal-daemon: enable
storage.build-pgfid: off
server.event-threads: 4
client.event-threads: 4
cluster.lookup-optimize: on
features.cache-invalidation: on
features.cache-invalidation-timeout: 600
performance.nl-cache-timeout: 600
network.inode-lru-limit: 200000
performance.cache-samba-metadata: on
performance.cache-size: 256MB
performance.nl-cache: on
performance.md-cache-timeout: 600
performance.stat-prefetch: on
performance.cache-invalidation: on
performance.parallel-readdir: on
cluster.readdir-optimize: on
performance.client-io-threads: on
user.smb: enable
storage.batch-fsync-delay-usec: 0
performance.readdir-ahead: on

Comment 1 Marcin 2019-01-31 11:48:23 UTC
Does anyone know a way to work around this problem (other than disabling the option in the gluster configuration)?

The parallel-readdir feature significantly improves read speed from our backup system.

We planned to upgrade GlusterFS this month from version 3.10.3 to 5.x due to a significant number of bugs, so I would be grateful for any information.

Comment 2 Nithya Balachandran 2019-02-04 06:27:53 UTC
Can you clarify that you are doing the following:

1. The files/directories are being created from one gluster client (not directly on the bricks)
2. The files/directories cannot be listed from another client which has mounted the same volume
3. Are the files/directories visible on the client from which they were created?

Comment 3 Marcin 2019-02-04 08:48:15 UTC
Hello Nithya,

1. Yes, the files/directories are being created from Windows 2012 R2 (samba client).
2. No, the files/directories cannot be listed by another client which has mounted the same volume.
3. No, the files/directories aren't visible on the client from which they were created. In addition, I can confirm that they aren't visible even directly on the brick of the host the data is written to... (a workaround is, for example, restarting the host).

Comment 4 Marcin 2019-03-05 09:58:47 UTC
Hello Everyone,

Please let me know whether there has been any progress on this problem.

Thanks in advance

Regards
Marcin

Comment 5 Nithya Balachandran 2019-03-05 14:45:42 UTC
Hi Marcin,

Sorry, I did not get a chance to look into this. I will try to get someone else to take a look.



Regards,
Nithya

Comment 6 Poornima G 2019-03-05 16:00:02 UTC
So the files don't appear even on the bricks? That's very strange. Did you check on both servers and all 6 bricks? We will try to check whether the issue is seen on a fuse mount as well or is specific to samba access. Can you try the following steps and let us know the output:
Create two temporary fuse mounts and create a file and a directory from one mount point. List them on the same mount: do you see the file and directory? List them on the other mount too: are they visible?
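A minimal sketch of such a test, assuming the volume gv0 and server test-sn1 from this report (mount points are placeholders):

mkdir -p /mnt/fuse1 /mnt/fuse2
mount -t glusterfs test-sn1:/gv0 /mnt/fuse1
mount -t glusterfs test-sn1:/gv0 /mnt/fuse2
mkdir /mnt/fuse1/testdir    # create from the first mount
touch /mnt/fuse1/testfile
ls -la /mnt/fuse1           # visible on the mount that created them?
ls -la /mnt/fuse2           # visible on the second mount?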

Comment 7 Marcin 2019-03-06 08:00:12 UTC
Regarding the visibility of files directly on the bricks of the server, I described this a bit imprecisely. The files aren't visible at the mount point of the whole gluster volume at the server OS level: /glusterfs is mounted with the native client, and this is also how the fuse mount was checked (there is an entry for /glusterfs in the fstab file and the mount type is fuse.glusterfs). They are, of course, visible at the level of the individual bricks. I apologize for the inaccuracy.
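For illustration, the fstab entry is along these lines (a sketch; the server name and volume are taken from the volume info above):

# /etc/fstab entry for the native (fuse) mount of the whole volume
test-sn1:/gv0   /glusterfs   glusterfs   defaults,_netdev   0 0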

In my spare time, I'll try to do a bit more thorough tests to show you the result.

I'll be grateful for your commitment.

Regards
Marcin

Comment 8 Marcin 2019-03-07 08:13:49 UTC
Some of the directories created on the host's fuse mount are not visible from the same host, while the files themselves are usually visible. On the second host that mounts the same volume (fuse), directories and files created on the first host are visible. Sometimes directories appear and disappear for a moment, but only on the host where they were created.

Files and directories created on the fuse mount of the host to which the samba client is not directly connected are usually visible on the samba share. However, directories created on the fuse mount of the host to which Samba connects directly (via ctdb) are partially invisible. Files created on both hosts (fuse) are generally visible on the samba share.

Most of the new files and directories created directly on the samba share seem to be hidden from the same client and server (samba), and may be partially invisible on the fuse mount of the host the samba client is connected to. On the second host (fuse), to which the samba client is not connected, files and directories created on the samba share are generally visible.

The tests were performed on the latest version of the gluster server and client (5.4).

Disabling the parallel-readdir functionality immediately resolves the problems described above, even without restarting the hosts or the gluster service.

As I mentioned at the very beginning, the problem does not occur in v4.1.7 or in our current production version (v3.10.3).

I hope that I haven't mixed up anything :)

Regards
Marcin

Comment 9 joao.bauto 2019-04-04 08:48:42 UTC
So I think I'm hitting this bug also.

I have an 8-brick distributed volume where Windows and Linux clients mount the volume via samba, and headless compute servers use the gluster native fuse mount. With parallel-readdir on, if a Windows client creates a new folder, the folder is indeed created but is invisible to the Windows client. Accessing the same samba share from a Linux client, the folder is visible and behaves normally. The same folder is also visible when mounting via gluster native fuse.
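For context, the share definition is roughly like this (a sketch only; whether the share goes through the samba vfs_glusterfs module or re-exports a local fuse mount is site-specific, and the names here are placeholders):

[tank]
    path = /
    vfs objects = glusterfs
    glusterfs:volume = tank
    glusterfs:logfile = /var/log/samba/glusterfs-tank.log
    kernel share modes = no
    read only = no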

The Windows client can list and rename existing directories, and, for files, everything seems to work fine.

Gluster servers: CentOS 7.5 with Gluster 5.3 and Samba 4.8.3-4.el7.0.1 from @fasttrack

Clients tested: Windows 10, Ubuntu 18.10, CentOS 7.5

Volume Name: tank
Type: Distribute
Volume ID: 9582685f-07fa-41fd-b9fc-ebab3a6989cf
Status: Started
Snapshot Count: 0
Number of Bricks: 8
Transport-type: tcp
Bricks:
Brick1: swp-gluster-01:/tank/volume1/brick
Brick2: swp-gluster-02:/tank/volume1/brick
Brick3: swp-gluster-03:/tank/volume1/brick
Brick4: swp-gluster-04:/tank/volume1/brick
Brick5: swp-gluster-01:/tank/volume2/brick
Brick6: swp-gluster-02:/tank/volume2/brick
Brick7: swp-gluster-03:/tank/volume2/brick
Brick8: swp-gluster-04:/tank/volume2/brick
Options Reconfigured:
performance.parallel-readdir: on
performance.readdir-ahead: on
performance.cache-invalidation: on
performance.md-cache-timeout: 600
storage.batch-fsync-delay-usec: 0
performance.write-behind-window-size: 32MB
performance.stat-prefetch: on
performance.read-ahead: on
performance.read-ahead-page-count: 16
performance.rda-request-size: 131072
performance.quick-read: on
performance.open-behind: on
performance.nl-cache-timeout: 600
performance.nl-cache: on
performance.io-thread-count: 64
performance.io-cache: off
performance.flush-behind: on
performance.client-io-threads: off
performance.write-behind: off
performance.cache-samba-metadata: on
network.inode-lru-limit: 0
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
cluster.readdir-optimize: on
cluster.lookup-optimize: on
client.event-threads: 4
server.event-threads: 16
features.quota-deem-statfs: on
nfs.disable: on
features.quota: on
features.inode-quota: on
cluster.enable-shared-storage: disable

Cheers

Comment 10 Marcin 2019-05-20 11:04:17 UTC
It seems that the problem no longer occurs in the new version of glusterfs (6.1), but I haven't seen it in the list of bugs fixed in that version. Can anyone confirm this, please?
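For what it's worth, a quick way to check the state on a node (a sketch; the volume name gv0 is assumed):

glusterfs --version | head -1
gluster volume get gv0 performance.parallel-readdir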

Cheers

Comment 11 joao.bauto 2019-05-20 13:47:51 UTC
I have been running with parallel-readdir on for the past 3 weeks with no issues. There is no mention of it in the change notes, though...

Comment 12 Nithya Balachandran 2019-10-10 02:41:44 UTC
I don't think we have worked on this bug specifically, but code changes for other bugs may have fixed it without our realising it. Unfortunately, it is going to be very difficult to figure out which one.
As the issue is no longer seen in the current release, I am closing this BZ. Please let me know if you have any concerns.
