Description of problem:
We're mounting an NFS export from a client using RDMA as the protocol over a direct HCA to HCA cable between two Mellanox MT26428 (QDR/40Gbps) InfiniBand cards.

Doing something simple like:
---
dd if=/dev/zero of=test bs=1024k count=10240
---
from the client results in this message on the server:
---
svcrdma: Error fast registering memory for xprt ffff8801a2a51c00
---
and these messages on the client (over several different runs of dd):
---
rpcrdma: connection to 192.168.1.1:2050 on mlx4_0, memreg 5 slots 32 ird 16
rpcrdma: connection to 192.168.1.1:2050 on mlx4_0, memreg 5 slots 32 ird 16
rpcrdma: connection to 192.168.1.1:2050 closed (-103)
rpcrdma: connection to 192.168.1.1:2050 on mlx4_0, memreg 5 slots 32 ird 16
rpcrdma: connection to 192.168.1.1:2050 closed (-103)
---

I don't see a kernel oops as mentioned at:
https://access.redhat.com/knowledge/solutions/55816#comment-360433
but the operation takes much longer to complete than it should and hangs intermittently throughout the writing of the file, seemingly doing nothing. So something is unhappy and disconnecting here.

Version-Release number of selected component (if applicable):
2.6.32-220.13.1.el6.x86_64 on both client and server. Let me know if you need package versions for all the related Mellanox, InfiniBand, and RDMA packages, but they're all the latest of those packages in RHEL 6.2.

How reproducible:
Every time the dd command is run with a large enough file size (10GB seems sufficient to make it happen every time).

Steps to Reproduce:
1. Mount the export over RDMA and run the dd command listed above (see the sketch below).

Actual results:
File is written with huge timeouts and aggregate bandwidth is horribly slow.

Expected results:
File is written with no timeouts and bandwidth is at least approaching local disk speed.
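For anyone trying to reproduce this, a minimal client-side sketch; the export path /export and mount point /mnt/nfsordma are assumptions, while the server address 192.168.1.1 and port 2050 come from the client messages above:
---
# load the client-side NFS/RDMA transport module
modprobe xprtrdma
# mount the export over RDMA on the non-default port seen in the logs
# (export path and mount point are placeholders)
mount -o rdma,port=2050 192.168.1.1:/export /mnt/nfsordma
# write a 10GB file, which triggers the disconnects every time
dd if=/dev/zero of=/mnt/nfsordma/test bs=1024k count=10240
---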
Since RHEL 6.3 External Beta has begun and this bug remains unresolved, it has been rejected because it was not proposed as an exception or a blocker. Red Hat invites you to ask your support representative to propose this request, if appropriate and relevant, for the next release of Red Hat Enterprise Linux.
(In reply to Mark Nipper from comment #0)
> 
> Actual results:
> File is written with huge timeouts and aggregate bandwidth is horribly slow.
> 
> Expected results:
> File is written with no timeouts and bandwidth is at least approaching local
> disk speed.

NFSoRDMA has been updated to the upstream v4.1 code for RHEL-6.8. The performance issue is gone.

[root@rdma-dev-00 nfsordma]$ grep -i distro /etc/motd
   DISTRO=RHEL-6.8
[root@rdma-dev-00 nfsordma]$ rpm -q kernel
kernel-2.6.32-642.el6.x86_64
[root@rdma-dev-00 nfsordma]$ mount| grep rdma
ib0-dev-01:/home on /nfsordma type nfs (rw,rdma,port=2050,addr=172.31.0.31)
[root@rdma-dev-00 nfsordma]$ ibstat
CA 'mlx4_0'
        CA type: MT4099
        Number of ports: 2
        Firmware version: 2.36.5150
        Hardware version: 0
        Node GUID: 0x0002c90300317b10
        System image GUID: 0x0002c90300317b13
        Port 1:
                State: Active
                Physical state: LinkUp
                Rate: 56
                Base lid: 8
                LMC: 1
                SM lid: 2
                Capability mask: 0x02514868
                Port GUID: 0x0002c90300317b11
                Link layer: InfiniBand
        Port 2:
                State: Active
                Physical state: LinkUp
                Rate: 40
                Base lid: 8
                LMC: 1
                SM lid: 2
                Capability mask: 0x02514868
                Port GUID: 0x0002c90300317b12
                Link layer: InfiniBand
[root@rdma-dev-00 nfsordma]$ 
[root@rdma-dev-00 nfsordma]$ time dd if=/dev/zero of=test bs=1024k count=10240
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 64.7979 s, 166 MB/s

real    1m4.813s
user    0m0.004s
sys     0m5.061s
[root@rdma-dev-00 nfsordma]$ time dd if=/dev/zero of=test bs=1024k count=40240
40240+0 records in
40240+0 records out
42194698240 bytes (42 GB) copied, 290.3 s, 145 MB/s

real    4m52.467s
user    0m0.010s
sys     0m20.068s
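For completeness, a sketch of the matching server-side setup on RHEL 6, following the standard NFS over RDMA procedure; port 2050 matches the mounts above, and the exported path is assumed to already be listed in /etc/exports:
---
# load the server-side NFS/RDMA module and start NFS as usual
modprobe svcrdma
service nfs start
# tell nfsd to also listen for RDMA connections on port 2050
echo rdma 2050 > /proc/fs/nfsd/portlist
---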