Bug 1914536

Summary: ftp: Binary file upload should stop upon socket write error.
Product: Red Hat Enterprise Linux 7
Component: ftp
Version: 7.9
Hardware: Unspecified
OS: Unspecified
Status: CLOSED WONTFIX
Severity: unspecified
Priority: unspecified
Reporter: Tetsuo Handa <penguin-kernel>
Assignee: Michal Ruprich <mruprich>
QA Contact: rhel-cs-infra-services-qe <rhel-cs-infra-services-qe>
CC: penguin-kernel
Target Milestone: rc
Type: Bug
Last Closed: 2022-07-09 07:53:52 UTC

Attachments:
Patch to fix this problem. (flags: none)

Description Tetsuo Handa 2021-01-09 14:58:04 UTC
Description of problem:

Binary file upload unexpectedly continues the file read loop when a socket
write operation fails with a temporary error, due to the "while() for() break;"
trap: the break after a failed write() only exits the inner for() loop.

As a result, not only does the server side receive a file that is smaller than
the one on the client side, but it also becomes impossible to resume the binary
file upload based on how far the server side has received it.
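
For illustration, below is a self-contained sketch of the same loop shape. It is
not the ftp.c code itself; write_or_fail() and its one-shot ENOMEM failure are
made up for the demonstration. The point is that the break after a failed
write() leaves only the inner for() loop, so the outer while() reads the next
block and the unwritten bytes are silently dropped.

----------
/* Stand-alone illustration of the "while() for() break;" trap.
 * This is NOT the ftp.c code; write_or_fail() is a made-up stand-in
 * that fails exactly once with a temporary error (ENOMEM).
 */
#include <errno.h>
#include <stdio.h>
#include <unistd.h>

static int fail_once = 1;

static ssize_t write_or_fail(int fd, const void *p, size_t n)
{
	if (fail_once) {		/* emulate one TEMPORARY failure */
		fail_once = 0;
		errno = ENOMEM;
		return -1;
	}
	return write(fd, p, n);
}

int main(void)
{
	char buf[8];
	ssize_t c, d;
	long long in_bytes = 0, out_bytes = 0;

	/* stdin -> stdout copy, structured like the upload loop in ftp.c */
	while ((c = read(0, buf, sizeof(buf))) > 0) {
		const char *bufp;

		in_bytes += c;
		for (bufp = buf; c > 0; c -= d, bufp += d) {
			if ((d = write_or_fail(1, bufp, c)) <= 0)
				break;	/* leaves only the for() loop... */
			out_bytes += d;
		}
		/* ...so the while() keeps going: the chunk whose write()
		 * failed is silently skipped and the next read() proceeds
		 * as if nothing had happened. */
	}
	fprintf(stderr, "read %lld bytes, wrote %lld bytes\n",
		in_bytes, out_bytes);
	return 0;
}
----------

Piping more than 8 bytes through this program reports fewer bytes written than
read, with no error message, which is the same symptom as the truncated upload
described above.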



Version-Release number of selected component (if applicable):

ftp-0.17-67.el7.x86_64



Additional info:

I don't have a reproducer, but you can try building this ftp client with the
following error-injection patch applied.

----------
--- a/ftp/ftp.c
+++ b/ftp/ftp.c
@@ -722,9 +722,15 @@
 		errno = d = 0;
 		while ((c = read(fileno(fin), buf, sizeof (buf))) > 0) {
 			bytes += c;
-			for (bufp = buf; c > 0; c -= d, bufp += d)
+			for (bufp = buf; c > 0; c -= d, bufp += d) {
+				if (bytes == sizeof(buf) * 2) {
+					d = -1;
+					errno = ENOMEM;
+					break;
+				}
 				if ((d = write(fileno(dout), bufp, c)) <= 0)
 					break;
+			}
 			if (hash) {
 				while (bytes >= hashbytes) {        /* <-- 'long long' signed overflow is  */
 					(void) putchar('#');        /* possible. In this case, we can      */
----------

Comment 2 Tetsuo Handa 2021-01-09 15:00:38 UTC
Created attachment 1745845 [details]
Patch to fix this problem.

Comment 5 Michal Ruprich 2022-02-14 16:01:45 UTC
Hi Tetsuo,

I can't reproduce this even when trying to forcefully cut off the network on the server side. If I shut down an interface for a while, ftp can restore the connection afterwards and finish the operation. If I cut the connection with something like tcpkill, ftp ends with '426 Failure reading network stream.'. Do you remember which ftp server you used with this connection?

Regards,
Michal

Comment 6 Michal Ruprich 2022-02-24 11:06:54 UTC
Thank you for taking the time to report this issue to us. We appreciate the feedback and use reports such as this one to guide our efforts at improving our products. That being said, this bug tracking system is not a mechanism for requesting support, and we are not able to guarantee the timeliness or suitability of a resolution.
 
If this issue is critical or in any way time sensitive, please raise a ticket through the regular Red Hat support channels to ensure it receives the proper attention and prioritization to assure a timely resolution.
 
For information on how to contact the Red Hat production support team, please visit:
    https://access.redhat.com/support

Comment 7 Tetsuo Handa 2022-02-24 11:48:09 UTC
I reported this problem because I had heard that a RHEL user was experiencing
partial loss of file data, without any error messages, when uploading via ftp,
though I couldn't confirm whether that user was actually hitting this problem.

I don't think that cutting off the network on the server side is an appropriate
fault injection for testing this problem, because the ftp server is irrelevant here.

I don't think that killing the TCP connection is an appropriate fault injection
for testing this problem either, because killing the TCP connection results in a
PERMANENT error.

What I am reporting is that when this ftp client gets a TEMPORARY write()
failure, it fails to abort the upload (which causes partial loss of file data
if a subsequent write() succeeds). You can reproduce this problem using the
fault-injection patch in "Additional info:", which emulates a TEMPORARY
write() failure.

Then, please see the attached patch in comment 2. You will find that
"while ((c = read(fileno(fin), buf, sizeof (buf))) > 0) { ... }" fails to
abort when a write() inside this loop gets a TEMPORARY failure.
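
For illustration only (this is a simplified sketch, not the attached patch;
copy_all() is a made-up helper, not a function in ftp.c), the essential change
is to let the failure propagate out of the inner for() so that the outer
while() stops issuing further read()/write() calls:

----------
/* Simplified sketch only, NOT the attached patch: a copy loop that stops
 * reading as soon as a write() fails, instead of silently skipping the
 * failed chunk. copy_all() is a made-up helper, not part of ftp.c.
 */
#include <unistd.h>

int copy_all(int in, int out)
{
	char buf[4096];
	ssize_t c, d = 0;

	while ((c = read(in, buf, sizeof(buf))) > 0) {
		const char *bufp;

		for (bufp = buf; c > 0; c -= d, bufp += d)
			if ((d = write(out, bufp, c)) <= 0)
				break;
		if (d <= 0)		/* write() failed, temporarily or not: */
			return -1;	/* stop the transfer so the caller can report errno */
	}
	return c < 0 ? -1 : 0;		/* -1 on read error, 0 on success */
}
----------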

Comment 9 RHEL Program Management 2022-07-09 07:53:52 UTC
After evaluating this issue, we have no plans to address it further or fix it in an upcoming release. Therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, the bug can be reopened.