Bug 1944132

Summary: rsync --timeout doesn't work when target directory is on NFS
Product: Red Hat Enterprise Linux 8
Version: 8.3
Component: rsync
Reporter: Renaud Métrich <rmetrich>
Assignee: Michal Ruprich <mruprich>
QA Contact: rhel-cs-infra-services-qe <rhel-cs-infra-services-qe>
CC: anrussel, devzero, tkorbar
Status: CLOSED WONTFIX
Severity: medium
Priority: high
Keywords: Reproducer, Triaged
Target Milestone: rc
Target Release: ---
Hardware: All
OS: Linux
Last Closed: 2021-12-16 10:52:52 UTC
Type: Bug
Attachments: systemtap script showing last_io_out updates and reads

Description Renaud Métrich 2021-03-29 11:23:00 UTC
Description of problem:

A customer reported that "rsync --timeout <timeout> ..." doesn't work when the target directory is on NFS and the customer cuts the traffic to simulate an NFS issue on the receiver side.

Reproducing the issue myself, I found that the customer is right: the code never enters the timeout condition because there is always traffic between the rsync processes (a few bytes, which look like some sort of keep-alive).
These bytes cause the "last_io_out" variable to always be updated, which prevents check_timeout() from ever reaching the conditional block on line 191+:

-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
 547 static char *perform_io(size_t needed, int flags)
 548 {
 810                         if ((n = write(iobuf.out_fd, out->buf + out->pos, len)) <= 0) {
 811                                 if (errno == EINTR || errno == EWOULDBLOCK || errno == EAGAIN)
 812                                         n = 0;    <<<<< HERE in case of non-blocking mode (or EINTR)
 813                                 else {
 821                                 }
 822                         }
 828                         if (io_timeout)
 829                                 last_io_out = time(NULL);        <<<<< HERE
 830                         stats.total_written += n;

 158 static void check_timeout(BOOL allow_keepalive, int keepalive_flags)
 159 {
 189         chk = MAX(last_io_out, last_io_in);
 190         if (t - chk >= io_timeout) {
 191                 if (am_server)        <<<<< HERE BLOCK
 192                         msgs2stderr = 1;
 193                 rprintf(FERROR, "[%s] io timeout after %d seconds -- exiting\n",
 194                         who_am_i(), (int)(t-chk));
 195                 exit_cleanup(RERR_TIMEOUT);
 196         }
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------

Cherry on the cake: on line 812 we can see that even if write() wrote nothing (n == 0), "last_io_out" will still be updated on line 829, which is nonsense.
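A minimal standalone sketch of how that update could be guarded. This is not the actual rsync source or an official patch; record_write() is a hypothetical helper modelling the bookkeeping on lines 828-830, with the timestamp refreshed only when write() actually moved bytes:

```c
#include <sys/types.h>
#include <time.h>

/* Toy model of the perform_io() bookkeeping quoted above
 * (record_write() is hypothetical, not an rsync function). */
static time_t last_io_out;
static int io_timeout = 30;

static void record_write(ssize_t n)
{
    /* n == 0 is the EINTR/EWOULDBLOCK case collapsed by line 812:
     * nothing was written, so it must not count as I/O activity */
    if (io_timeout && n > 0)
        last_io_out = time(NULL);
}
```

With such a guard, spurious zero-byte "writes" would no longer push the timeout deadline forward.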

Using a systemtap script to watch the "last_io_out" / "last_io_in" values read in check_timeout() and the "last_io_out" updates in perform_io(), I observed the following pattern:

- check_timeout() runs every half-period (hence if timeout = 60, it will run every 30 seconds)
- last_io_out is updated just after check_timeout(), and is always updated

-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
# stap -v ./rsync_trace_check_timeout.stp
6967: (check_timeout) t: 1617016659, last_io_out: 1617016644, last_io_in: 1617016644
6967: (perform_io) setting last_io_out: 1617016644
6967: (check_timeout) t: 1617016674, last_io_out: 1617016659, last_io_in: 1617016659
6967: (perform_io) setting last_io_out: 1617016659
6967: (check_timeout) t: 1617016695, last_io_out: 1617016674, last_io_in: 1617016680
6967: (perform_io) setting last_io_out: 1617016674
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------

The systemtap script is attached for convenience.

From the trace above, we see that "last_io_out" is updated immediately after check_timeout() executes, hence the timeout code can never be reached under these conditions.

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Mount a NFS filesystem as target directory

  # mkdir /nfsmount
  # mount <server>:<share> /nfsmount

2. rsync a large file

  # truncate -s 10G /tmp/zero
  # rsync --timeout=30 --info=progress2 /tmp/zero /nfsmount/zero

3. From another terminal, drop the outgoing traffic to NFS server

  # iptables -A OUTPUT -d <serverip>/32 -p tcp -m tcp --dport 2049 -j DROP

Actual results:

rsync traffic stops but timeout is never reached.

Expected results:

The timeout is reached and rsync exits with an error.

Comment 1 Renaud Métrich 2021-03-29 11:24:16 UTC
Created attachment 1767314 [details]
systemtap script showing last_io_out updates and reads

Comment 7 Roland Kletzing 2022-09-11 08:15:36 UTC
Also see https://bugzilla.samba.org/show_bug.cgi?id=15163

Why is this being silently closed with WONTFIX?

That way, bugs never get resolved, especially when there is a qualified bug report and then no action to forward it to the upstream project.

Not good...

Comment 8 Michal Ruprich 2022-09-13 07:10:04 UTC
Hi Roland,

this was not silently closed, it was closed based on a comment which is private. The reasoning behind this is that this bug specifically mentions rsync over NFS. To rsync, an NFS mount looks like any other local directory and is completely transparent to the rsync process. If your mount is hard, it will wait indefinitely, and that forces the rsync receiver process to hang. My colleague found that if you mount your remote directory with the soft option, for example like this:

# mount -o soft,timeo=10,retrans=1 <remote-ip>:/shared /nfsmount

then rsync is able to identify that NFS is unresponsive and fails with an I/O error.

There are many issues with the --timeout option in rsync, and none of them seem to be making progress. Your bug seems to be a bit different from this one. If you feel this should be addressed in RHEL, feel free to file a new bug, but there is not much I can do without an upstream developer taking a look first.

Hope this helps.


Comment 9 Roland Kletzing 2022-09-13 10:17:23 UTC
thank you for the explanation and for clearing things up. it was not apparent to us that there was a private comment, so sorry for the noise.

you are right, if local rsync gets stuck in a kernel syscall, there is no way to interrupt it or stop the process.

>There are many issues with the --timeout option in rsync, none of those seem to be getting ahead.

oh, really? i did not see many in the rsync bugzilla. maybe you can point us to some of them?

>If you feel like this should be addressed in RHEL, feel free to file a new bug

no, i just wanted to make sure that non-upstream bugs don't get silently closed. thank you!

Comment 10 Michal Ruprich 2022-09-13 10:57:03 UTC
No problem, more of the bugs are on github here:


Not entirely sure which of them are actually related, though.

Comment 11 Roland Kletzing 2022-09-13 11:02:09 UTC
ouch, i wasn't aware that a github issue tracker is being used for rsync bugs, thanks for the pointer.

Comment 12 Red Hat Bugzilla 2023-09-15 01:33:29 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 365 days