+++ This bug was initially created as a clone of Bug #1062584 +++

Al Viro discovered the following problem in cifs.ko while overhauling some of the VFS layer write code:

> Take a look at cifs_iovec_writev(). We have decided that
> the next chunk to send should cover nr_pages worth of data, allocated
> the pages and
>
>         save_len = cur_len;
>         for (i = 0; i < nr_pages; i++) {
>                 copied = min_t(const size_t, cur_len, PAGE_SIZE);
>                 copied = iov_iter_copy_from_user(wdata->pages[i], &it,
>                                                  0, copied);
>                 cur_len -= copied;
>                 iov_iter_advance(&it, copied);
>         }
>         cur_len = save_len - cur_len;
>
> tried to copy from iovec into said pages. What happens if iovec spans
> an munmapped area? It'll copy as much as it can and from that point on
> iov_iter_copy_from_user() will be returning 0. And iov_iter_advance(&it, 0)
> will, of course, do nothing.
>
> So we'll end up with cur_len well less than nr_pages * PAGE_SIZE. Then we
> start to populate a structure that will be passed to (async) write request:
>
>         wdata->sync_mode = WB_SYNC_ALL;
>         wdata->nr_pages = nr_pages;
>         wdata->offset = (__u64)offset;
>         wdata->cfile = cifsFileInfo_get(open_file);
>         wdata->pid = pid;
>         wdata->bytes = cur_len;
>         wdata->pagesz = PAGE_SIZE;
>         wdata->tailsz = cur_len - ((nr_pages - 1) * PAGE_SIZE);
>         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>         rc = cifs_uncached_retry_writev(wdata);
>
> and we have a negative number in wdata->tailsz. I don't have a setup
> ready for testing, so this is strictly from RTFS, but AFAICS we'll send
> nr_pages-1 full pages out (some with valid data, some with whatever
> page_alloc() has left there) and then hit the last one. We kmap it and
> set an iovec with base at the address we'd mapped it on and length
> a bit under 4GB. Then we call kernel_sendmsg() on that. Note that
> verify_iovec() isn't called - the iovec is already kernel-side anyway and
> kernel_sendmsg() assumes it to be valid.
> Again, I hadn't tested it, but it looks like a user-triggerable oops at
> the very least. All that is needed is a writable file on a cifs volume
> mounted with cache=strict...

He's quite correct (though it's even easier to reproduce if you mount with cache=none). I have a reproducer that can trivially crash the kernel when run as an unprivileged user, and a patch that fixes the problem. I'll attach both shortly.

Since fixing this is holding up some important work in other areas, I'm going to propose that we disclose this in one week (February 14th).

--- Additional comment from Jeff Layton on 2014-02-07 06:22:36 EST ---

This is the reproducer for the bug. Simply mount a cifs share with 'cache=none', then run this with the path to a scratch file on the mount as the first argument.

--- Additional comment from Jeff Layton on 2014-02-07 06:29:23 EST ---

This patch fixes the bug. It should apply fairly cleanly to most recent kernels (including RHEL6/7).
Fixed in git.
kernel-3.12.11-201.fc19 has been submitted as an update for Fedora 19. https://admin.fedoraproject.org/updates/kernel-3.12.11-201.fc19
Package kernel-3.12.11-201.fc19:
* should fix your issue,
* was pushed to the Fedora 19 testing repository,
* should be available at your local mirror within two days.
Update it with:
# su -c 'yum update --enablerepo=updates-testing kernel-3.12.11-201.fc19'
as soon as you are able to, then reboot.
Please go to the following url:
https://admin.fedoraproject.org/updates/FEDORA-2014-2606/kernel-3.12.11-201.fc19
then log in and leave karma (feedback).
kernel-3.12.11-201.fc19 has been pushed to the Fedora 19 stable repository. If problems still persist, please make note of it in this bug report.