Description of problem:
=======================
I have observed, inconsistently but fairly often, that even after I umount the fuse mount, even with the -l (forceful) option, the fuse mount process still exists.

For example, I unmounted /mnt/rep2, but ps -ef | grep rep2 shows the following:

root     32337     1  0 Dec02 ?        00:24:02 /usr/sbin/glusterfs --volfile-server=10.70.35.196 --volfile-id=rep2 /mnt/rep2

I did try checking the fuse log but didn't find anything helpful.

Version-Release number of selected component (if applicable):
=============================================================
3.8.4-6

This can lead to unwanted memory consumption:

[root@rhs-client45 rep2]# top -n 1 -b | grep 32337
32337 root      20   0 4775952 3.760g   4348 S  0.0 24.3  24:02.77 glusterfs
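For reference, a minimal sequence to reproduce the check described above might look like the following (server IP, volume name, and mount point are taken from the example in this report; adjust to your setup):

# mount the volume over fuse, then lazily unmount it
mount -t glusterfs 10.70.35.196:/rep2 /mnt/rep2
umount -l /mnt/rep2

# the mount point should no longer appear in /proc/mounts
grep /mnt/rep2 /proc/mounts

# but check whether the glusterfs client process is still around
pid=$(pgrep -f 'glusterfs.*rep2')
echo "stale client pid: ${pid:-none}"

# if it is, inspect its memory footprint
[ -n "$pid" ] && top -n 1 -b -p "$pid"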
Client statedump taken post-umount, with the stale process still present:

[qe@rhsqe-repo bug.1401473]$ pwd
/var/www/html/sosreports/nchilaka/bug.1401473
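In case it helps anyone reproducing this: a client statedump can typically be generated by sending SIGUSR1 to the glusterfs client process, with the dump written under /var/run/gluster/ by default (the exact path depends on configuration). Using the PID from the description:

# ask the client process to dump its state
kill -USR1 32337

# the newest file under the statedump directory should be the dump
ls -lt /var/run/gluster/ | head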
Nag, the -l option is to perform a lazy umount. There is no force option for fuse mounts. Do you see the "Unmounting <fuse mount point>" message in the mount log (I'm not sure how to access the logs in comment #2)? The umount will happen only after all programs accessing the mount stop and you `cd` out of the mount point.
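To illustrate the point about programs still accessing the mount: before unmounting, something like the following (using the mount point from this report) lists the processes that would keep a lazily unmounted fuse mount, and hence the glusterfs client process, alive:

# list processes still using the mounted filesystem
fuser -vm /mnt/rep2

# or, with lsof, show open files on that filesystem
lsof +f -- /mnt/rep2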
We are deferring this to 3.2.0_beyond as we don't see any visible impact other than the resource consumption by the stale process.
(In reply to nchilaka from comment #4)
> We are deferring this to 3.2.0_beyond as we don't see any visible impact
> other than the resource consumption by the stale process.

That seems reasonable, Nag. Nevertheless, please attach the mount log and the statedump to the BZ as requested in comment #3.
(In reply to Ravishankar N from comment #3)
> Nag, the -l option is to perform a lazy umount. There is no force option
> for fuse mounts. Do you see the "Unmounting <fuse mount point>" message in
> the mount log (I'm not sure how to access the logs in comment #2)? The
> umount will happen only after all programs accessing the mount stop and
> you `cd` out of the mount point.

I did cd out of the mount point in all terminals, and I still saw the issue.
Can this be tested with later versions and verified? We have not hit this issue in a long time now.