Created attachment 1546951 [details]
Wireshark report of the network overhead

Description of problem:
Installed Ubuntu Server 18.10 from the official release. Installed Debian packages from ppa:gluster/glusterfs-5 (download.gluster.org). The problem was originally reproduced on CentOS with a custom compilation of glusterfs-5.1.

Created the simplest replicated filesystem on 2 nodes and mounted it on one of the nodes, but it is also reproducible when mounted from a dedicated client. Created a 0-length file with "touch /tmp/0length" and copied the file to the gluster mount. The network capture shows that a 128 KByte packet is sent to both replicas, which is a huge network overhead for really small files.

The commands I used:

mkdir /mnt/brick
gluster peer probe 192.168.56.100
gluster volume create gv0 replica 2 192.168.56.100:/mnt/brick 192.168.56.101:/mnt/brick force
gluster volume start gv0
mkdir /mnt/gv0
mount -t glusterfs 192.168.56.100:/gv0 /mnt/gv0
touch /tmp/0length
cp /tmp/0length /mnt/gv0

Version-Release number of selected component (if applicable):
5.5-ubuntu1~cosmic1: glusterfs-client glusterfs-common glusterfs-server

How reproducible:

Steps to Reproduce:
1) on both nodes: mkdir /mnt/brick
2) on 192.168.56.101: gluster peer probe 192.168.56.100
3) gluster volume create gv0 replica 2 192.168.56.100:/mnt/brick 192.168.56.101:/mnt/brick force
4) gluster volume start gv0
5) mkdir /mnt/gv0
6) mount -t glusterfs 192.168.56.100:/gv0 /mnt/gv0
7) touch /tmp/0length
8) tcpdump -s 0 -w /tmp/glusterfs.pkt 'host 192.168.56.100'
9) cp /tmp/0length /mnt/gv0
10) stop the network capture and load it into Wireshark

Actual results:
The network capture is 256 KBytes long, and I see one 'proc-27' call to each glusterfs node, 128 KB in size.

Expected results:
The network capture is a few kilobytes; low network overhead for small files.

Additional info:
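For what it's worth, the capture from step 10 can also be summarized on the command line instead of the Wireshark GUI. This is only a sketch: it assumes tshark is installed and that /tmp/glusterfs.pkt is the file written by tcpdump in step 8.

```shell
# Summarize the tcpdump capture without opening the Wireshark GUI.
# Assumes tshark (Wireshark CLI) is available; the capture path matches
# the tcpdump command in step 8 of the reproduction steps.
pkt=/tmp/glusterfs.pkt
if [ -r "$pkt" ]; then
    # Bytes exchanged per TCP conversation (one per brick connection).
    tshark -r "$pkt" -q -z conv,tcp
    # Frame size distribution: large counts of big frames show the overhead.
    tshark -r "$pkt" -T fields -e frame.len | sort -n | uniq -c | tail
else
    echo "capture $pkt not found"
fi
```

The conversation summary should make the per-replica byte counts visible at a glance.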
Very strange behavior for larger sizes: if I use a 7 KByte file I get 2*128 KB packets. If I use a 15 or 31 KByte file I get 2*128 KB + 1*32 KB packets. If I use a 63, 127 or 129 KByte file I get 6*128 KB packets.
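The test files of the sizes mentioned above can be regenerated with dd to make the runs easy to repeat (a sketch; the /tmp/test_* names are my own choice, and /dev/zero is enough since the content doesn't matter for this test):

```shell
# Regenerate the test files of each size tested above (sizes in KiB).
# Copying each one to the gluster mount while capturing with tcpdump
# should reproduce the per-size packet pattern described in the comment.
for kb in 7 15 31 63 127 129; do
    dd if=/dev/zero of="/tmp/test_${kb}k" bs=1024 count="$kb" status=none
done
ls -l /tmp/test_*k
```

Running the copy for each file in a separate capture keeps the per-size packet counts unambiguous.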
I've tested this with 5.11 and 7.0 and I've not observed this issue. All packets are smaller than 1 KiB except the write request, which matches the expected size. Can you provide some more information and share the packet capture?
BTW, proc 27 is a LOOKUP request. It seems weird that it uses 128 KiB for the request packet.
@Otto, can you provide some more information about this problem?
I'm not observing this behavior and I don't have more data to investigate, so I'm closing the bug. If you see that the problem persists and can provide requested data, feel free to reopen it to continue investigating.
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days