Description of problem:
Currently, once we have a glfs object in hand for a given volume, we can still access a new volume through that same old glfs object after the volume has been deleted and recreated with the same name, which is wrong.

Version-Release number of selected component (if applicable):
mainline

How reproducible:
1. Write a gfapi program; once glfs_init() succeeds, try creating a file in the volume, and set a breakpoint there.
2. Delete the volume and recreate it with the same name.
3. Continue the program and, in the next lines, try creating another file in the volume using the same old glfs object.
4. Surprisingly, the create succeeds.

My use case was slightly different: calling glfs_get_volumeid() returned the old volume id rather than failing with an error saying the glfs object is no longer valid (or, in the worst case, returning the new volume id); in my case it returned the old uuid.

Refer to https://bugzilla.redhat.com/show_bug.cgi?id=1461808#c9 for some more interesting context and sample programs.

Actual results:
With the old glfs object we can still access the new volume.

Expected results:
The old glfs object should be reported as invalid (an error should be returned).
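For clarity, here is a minimal sketch of the kind of reproducer described above. The volume name, server, file paths, log path, and the pause-while-you-recreate-the-volume step are illustrative assumptions, not taken from the original report.

/* Sketch of the reproducer: create a file and fetch the volume UUID,
 * pause while the volume is deleted and recreated with the same name,
 * then retry both calls through the same (now stale) glfs object. */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <glusterfs/api/glfs.h>

static void create_file(glfs_t *fs, const char *path)
{
        glfs_fd_t *fd = glfs_creat(fs, path, O_CREAT | O_WRONLY, 0644);
        if (fd == NULL)
                perror("glfs_creat");
        else
                glfs_close(fd);
}

int main(void)
{
        char volid[64] = {0};
        glfs_t *fs = glfs_new("testvol");             /* placeholder volume name */

        if (fs == NULL)
                return EXIT_FAILURE;

        glfs_set_volfile_server(fs, "tcp", "localhost", 24007);
        glfs_set_logging(fs, "/tmp/repro.log", 7);
        if (glfs_init(fs) != 0) {
                perror("glfs_init");
                return EXIT_FAILURE;
        }

        create_file(fs, "/file-before");              /* works, as expected */
        glfs_get_volumeid(fs, volid, sizeof(volid));  /* old volume UUID */

        printf("Delete and recreate the volume now, then press Enter\n");
        getchar();

        /* Per the report, both calls below should fail against the stale
         * glfs object, but instead they succeed / return the old UUID. */
        create_file(fs, "/file-after");
        glfs_get_volumeid(fs, volid, sizeof(volid));

        return glfs_fini(fs);
}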
Created attachment 1491665 [details]
Small test program

I am unable to reproduce the problem. Could you explain what I am missing, or better yet, adapt the program?

Compile:
$ gcc -Wall -g -D_FILE_OFFSET_BITS=64 -D__USE_FILE_OFFSET64 -DUSE_POSIX_ACLS=1 -I/usr/include/uuid -lacl -lgfapi -lglusterfs -lgfrpc -lgfxdr -luuid test-volume-recreate.c -o test-volume-recreate

Run:
$ ./test-volume-recreate $HOSTNAME bz1463192 $PWD/bz1463192.log
volume create: bz1463192: success: please start the volume to access data
volume start: bz1463192: success
glfs_set_volfile_server : returned 0
glfs_set_logging : returned 0
glfs_init : returned 0
volume stop: bz1463192: success
volume delete: bz1463192: success
volume create: bz1463192: success: please start the volume to access data
volume start: bz1463192: success
glfs_h_anonymous_write : returned error -1 (Stale file handle)
glfs_fini(fs) returned 0
The above was tested on glusterfs-4.1.5.
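The attached program is not reproduced here, but the output above suggests a handle-based check along these lines; this is only a rough sketch under that assumption, with the volume create/delete steps (which the attachment appears to drive via the gluster CLI) left as a comment.

/* Look up an object handle before the volume is recreated, then write
 * through that handle afterwards; on 4.1.5 this fails with ESTALE. */
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <glusterfs/api/glfs.h>
#include <glusterfs/api/glfs-handles.h>

int main(int argc, char **argv)
{
        /* argv: <server> <volume> <logfile> */
        struct stat sb;
        char buf[] = "hello";
        glfs_t *fs = glfs_new(argv[2]);

        glfs_set_volfile_server(fs, "tcp", argv[1], 24007);
        glfs_set_logging(fs, argv[3], 7);
        glfs_init(fs);

        /* Obtain an object handle for the volume root while the original
         * volume still exists. */
        struct glfs_object *root = glfs_h_lookupat(fs, NULL, "/", &sb, 0);

        /* ... volume stopped, deleted, and recreated with the same name
         * out of band (e.g. via the gluster CLI) ... */

        /* Writing through the stale handle is expected to fail; the run
         * above shows it returning -1 (Stale file handle). */
        if (glfs_h_anonymous_write(fs, root, buf, strlen(buf), 0) < 0)
                perror("glfs_h_anonymous_write");

        return glfs_fini(fs);
}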
Prasanna, I would mark this as CLOSED Upstream, mainly to highlight that this is not critical on an OCS setup, and it also looks like the issue no longer occurs on upstream master.