Red Hat Bugzilla – Bug 764306
FH resolution redirects I/O to the old file's blocks even after the file has been replaced by a new file with the same name
Last modified: 2011-05-25 04:48:21 EDT
A corner case has always been present in the file handle resolution code. It is causing problems for a customer who manipulates the backend directly in order to snapshot VM images and restore them from backups.
The situation arises when:
B is the shared backend.
S1 is the first NFS server; S2 is the second NFS server. Both use the same backend B.
C1 and C2 are NFS clients mounting S1 and S2 respectively.
1. C1 opens file F through S1, writes some data to it, and then does not access the file for a long time. This causes the fd for F to stay cached in S1.
2. C2 replaces F on the common backend with another file of the same name F. The gfids will change of course.
3. Then, C1 does an ls -l on the directory containing the new F. If NFS uses a READDIRP request, C1 receives the file handle for the new F.
4. C1 sends a read request using the file handle for the new F. Because the new F was never looked up through S1, S1 will start a hard fh resolution.
5. Because the old F was never removed from S1's itable, and because of the use of nfs_entry_loc_fill in a particular path in fh resolution, the new F gets resolved to the old F's inode. This happens because entry loc fill ends up telling NFS that the entry (dir fh, F) is already in the itable (the old F was never removed through S1). Since C1 had previously read the old F through S1, its fd was already cached in S1. So the read issued by C1, supposedly for the new F, actually returns the contents of the old F, which still exist on disk because the disk blocks were never freed due to the NFS server's fd caching.
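The stale-resolution sequence above can be sketched as a small simulation. All of the names here (Inode, Backend, NfsServer, hard_fh_resolve, etc.) are invented for illustration and do not correspond to the actual GlusterFS structures; the point is only to show how a name-keyed itable hit plus a cached fd redirects the read to the old blocks:

```python
import itertools

_gfid = itertools.count(1)

class Inode:
    def __init__(self, name, blocks):
        self.gfid = next(_gfid)       # new file => new gfid
        self.name = name
        self.blocks = blocks          # stands in for the on-disk data blocks

class Backend:
    """Shared backend directory: name -> inode."""
    def __init__(self):
        self.entries = {}
    def create(self, name, data):
        inode = Inode(name, data)
        self.entries[name] = inode    # replaces any old entry; the old blocks
        return inode                  # survive while an open fd pins them

class NfsServer:
    """Caches inodes by (dir, name) and keeps fds open on them."""
    def __init__(self, backend):
        self.backend = backend
        self.itable = {}              # name -> cached Inode (never invalidated)
        self.fd_cache = {}            # gfid -> Inode with a cached open fd
    def open_and_read(self, name):
        inode = self.backend.entries[name]
        self.itable[name] = inode
        self.fd_cache[inode.gfid] = inode
        return inode.blocks
    def hard_fh_resolve(self, name):
        # Buggy path: the nfs_entry_loc_fill-style lookup by (dir, name)
        # hits the stale itable entry instead of matching the gfid
        # embedded in the client's file handle.
        return self.itable[name]
    def read_by_fh(self, fh_gfid, name):
        inode = self.hard_fh_resolve(name)
        # The stale inode still has a cached fd, so the read is served
        # from the old file's blocks.
        return self.fd_cache[inode.gfid].blocks

backend = Backend()
s1 = NfsServer(backend)

backend.create("F", "old contents")
s1.open_and_read("F")                        # step 1: C1 reads F through S1
new = backend.create("F", "new contents")    # step 2: C2 replaces F
# steps 3-4: C1 got new F's handle via READDIRP and issues a read
data = s1.read_by_fh(new.gfid, "F")
print(data)                                  # prints "old contents"
```

The read issued with the new file's handle comes back with the old file's data, which is exactly the misbehaviour described in step 5.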
Returning the old file's contents is the wrong behaviour.
I've sent a patch to kris. If this is really a problem for customer deployments, we'll discuss that patch again and bring it in.