Created attachment 644607 [details]
kernel log from 3.6 kernel ("CIFS VFS: SMB response too short")
Description of problem:
Can't read files from samba shares mounted with the 'cache=none' option.
Version-Release number of selected component (if applicable):
Fedora 17. All 3.5.x - 3.6.x vanilla kernels.
Steps to Reproduce:
1. Create an smb share (see additional info for details)
2. Create a file on this share, for example using dd:
dd if=/dev/urandom of=/smb_dir_path/dd_file bs=1024 count=1024
3. Mount this share with the 'cache=none' option from Fedora 17, or from any system running kernel 3.5.0 or later:
mount -t cifs //host/share /mount_point/ -o cache=none
4. Try to read the file as shown below (note the block size value):
dd if=/mount_point/dd_file of=/dev/null bs=131013 count=1

Actual results:
dd never finishes, and dmesg is constantly spammed with "CIFS VFS: SMB response too short" messages.

Expected results:
The read should succeed.
Additional info:
* smb shares served by Windows are not affected (they can be read successfully when mounted with cache=none); shares served by CentOS 5 with samba 3.0.33 also appear to be unaffected.
The problem is 100% reproducible with samba shares served by CentOS 6 (samba-3.5.4-68, kernel 2.6.32), the latest Fedora (16, 17), and the latest Ubuntu, so it may also depend on the samba version running on the server.
Mounting with 'cache=loose' works around the issue.
The problem does not exist with 3.4 and earlier kernels.
The problem was introduced by 'cifs: convert cifs_iovec_read to use async reads' patch (http://www.spinics.net/lists/linux-cifs/msg05802.html).
I've reverted this patch and everything is working fine again.
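For context on why this particular block size is interesting: 131013 bytes is larger than a typical CIFS maximum read size, so the async read path has to split the request into several SMB READs, the last of which is a short, odd-sized tail. A minimal sketch of that split, assuming an rsize of 61440 bytes (a common CIFS default; the actual value is negotiated with the server, so this number is an illustration, not taken from the logs):

```python
# Illustrative sketch: how a single 131013-byte read request is split
# into multiple SMB READ requests of at most rsize bytes each.
# The rsize value (61440) is an assumed common default, not measured.

def split_read(total_bytes, rsize=61440):
    """Return the byte counts of the individual SMB reads."""
    chunks = []
    remaining = total_bytes
    while remaining > 0:
        n = min(remaining, rsize)
        chunks.append(n)
        remaining -= n
    return chunks

print(split_read(131013))   # -> [61440, 61440, 8133]
```

The odd-sized 8133-byte tail request is the kind of read a buggy server could answer with a response the client's new async path rejects as "too short", whereas the old synchronous path (and 3.4 kernels) tolerated it.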
Attached are kernel logs, captured with cifsFYI enabled, from an unsuccessful read with a 3.6.2 kernel (messages.bad) and from the same, successful, operation with a 3.4.5 kernel (messages.good).
Created attachment 644608 [details]
successful read of the same file using 3.4 kernel (share mounted with directio)
Almost certainly the same issue as reported here:
Which turned out to be a server bug: