io-cache returns a zero-filled stat structure during reads. In this situation, the server-side io-cache sends a zero-filled stat structure in its read reply. On the client side, this stat structure is used to compare mtimes and check whether the file has changed. Since the mtime in the stat sent by the server during the read is zero, it differs from the mtime stored in the ioc_inode, and the cache is flushed on the client side. As a fix, io-cache in ioc_fault_cbk should update/compare the stat only if it is not completely zero-filled.
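A minimal sketch of the idea behind the fix, for illustration only. This is not the actual GlusterFS patch; the struct and function names below (ioc_inode_sketch_t, stat_is_zero_filled, ioc_read_cbk_sketch) are simplified stand-ins, assuming the cache keeps a recorded mtime per inode and flushes its pages when the server-reported mtime differs.

/* Sketch only: simplified stand-in for io-cache's per-inode bookkeeping. */
#include <string.h>
#include <time.h>
#include <sys/stat.h>

typedef struct {
        time_t cached_mtime;   /* mtime recorded when pages were cached */
        int    have_pages;     /* non-zero if cached pages exist        */
} ioc_inode_sketch_t;

/* Treat a stat as "zero filled" when every byte is zero, i.e. the
 * server sent no meaningful attributes with the read reply. */
static int
stat_is_zero_filled (const struct stat *buf)
{
        static const struct stat zero;
        return memcmp (buf, &zero, sizeof (zero)) == 0;
}

/* Core of the fix: validate (and possibly drop) the cache against the
 * stat returned in the read callback only when that stat is meaningful. */
static void
ioc_read_cbk_sketch (ioc_inode_sketch_t *inode, const struct stat *stbuf)
{
        if (stbuf == NULL || stat_is_zero_filled (stbuf))
                return;                      /* ignore zero-filled stat */

        if (stbuf->st_mtime != inode->cached_mtime) {
                inode->have_pages   = 0;     /* file really changed: flush */
                inode->cached_mtime = stbuf->st_mtime;
        }
}

Without the zero-filled check, the comparison above always sees a mismatch when the server replies with an all-zero stat, so the client flushes a perfectly valid cache on every read.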
Hi All! This is actually for gluster-3.0pre1, not the mainline. I'm excited to see the new io-cache working well with large caches. It's successfully caching a large 1 GB block of data now, which is great. I did notice an odd interaction bug between the server-side and client-side caches, though. It seems that when I turn on server-side caching, the client will then always read the data from the server instead of from its own cache.

Here are my spec files:

-------------------------------------------
server.vol:

volume posix
  type storage/posix
  option directory /tmp/gluster
end-volume

volume posix-locks
  type features/locks
  subvolumes posix
end-volume

volume io-cache
  type performance/io-cache
  option cache-size 4gb
  subvolumes posix-locks
end-volume

volume io-threads
  type performance/io-threads
  subvolumes io-cache
end-volume

volume server
  type protocol/server
  subvolumes io-threads
  option transport-type tcp
  option auth.addr.io-threads.allow *
end-volume

-------------------------------------------
client.vol:

volume client
  type protocol/client
  option transport-type tcp/client
  option remote-host server
  option remote-subvolume io-threads
end-volume

volume io-cache
  type performance/io-cache
  option cache-size 4gb
  subvolumes client
end-volume

-------------------------------------------

Watching the memory usage of the gluster process as I copy my 1 GB test file around, I can see the server's memory usage climb to 1 GB, which makes sense. I also see the memory usage on the client climb to 1 GB. This goes at gigabit link speed. However, when I copy the same file a minute later, the client's memory usage drops to 0 and then rises to 1 GB again. The copy also runs at link speed.

As a control, I also tried out this server.vol:

-------------------------------------------

volume posix
  type storage/posix
  option directory /tmp/gluster
end-volume

volume posix-locks
  type features/locks
  subvolumes posix
end-volume

volume io-threads
  type performance/io-threads
  subvolumes posix-locks
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.io-threads.allow *
  subvolumes io-threads
end-volume

-------------------------------------------

The first copy goes at link speed, and the second copy is read out of memory on the client, which is what I'd expect. So I suspect the server-side cache is telling the client side that the file is dirty, and the client therefore copies the data again. This isn't that horrible a bug, though, since the behavior is still correct, just slow, and the kernel does a pretty good job of keeping files in memory without io-cache enabled on the server. Still, it'd be a nice thing to have working :)
PATCH: http://patches.gluster.com/patch/2315 in master (performance/io-cache: don't use stat got in read_cbk if it is zero-filled.)
I just tested the master branch in git, and it properly reads the file from the client cache :) Thanks for the quick fix!
(In reply to comment #3)
> I just tested the master branch in git, and it properly reads the file from
> the client cache :) Thanks for the quick fix!

You are welcome :)
PATCH: http://patches.gluster.com/patch/2478 in release-2.0 (performance/io-cache: don't use stat got in read_cbk if it is zero-filled.)