Description of Problem:
GConf cannot handle stale locks if the user's home directory is on an NFS volume.
Steps to Reproduce:
1. Configure your home directory to be on an NFS server (via autofs or /etc/fstab).
2. From the foot menu, select "Reboot the system".
After logging back in, a GConf error dialog appears, and ~/.gconfd/ contains some .nfs* files.
Running "rm -rf ~/.gconfd" and logging in again clears the problem.
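The failure mode above can be sketched; a minimal, non-destructive demo, assuming a scratch directory stands in for ~/.gconfd:

```shell
#!/bin/sh
# Sketch of the stale-lock situation (assumption: a temporary directory
# plays the role of ~/.gconfd so the demo is self-contained).
demo=$(mktemp -d)

# NFS "silly rename": when an open file is unlinked on NFS, the client
# renames it to .nfs* instead of removing it. A gconfd that never exited
# cleanly leaves these behind, and the next gconfd sees a held lock.
touch "$demo/.nfs0000000123"

stale=$(find "$demo" -name '.nfs*')
if [ -n "$stale" ]; then
    echo "stale lock residue found"
    rm -rf "$demo"    # the workaround from the report (rm -rf ~/.gconfd)
fi
```

The real directory only accumulates .nfs* files because the process that held them open died without closing them; on a local filesystem the unlink would simply succeed.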
In general, "handling stale locks" is an impossibility: if the lock is a real lock, it has to lock other things out, and if other things could "handle" it, they wouldn't be locked out. If the lock is stale, it appears to be held but isn't, and if it appears to be held there's no way to tell that it isn't. If you could tell, it wouldn't be stale; it would just not be a lock at all. ;-) Anyway. ;-)
Check a couple of things:
- is this a dup of bug #59245?
- are you running the nfslock service on both client and server?
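The second check can be scripted; a rough diagnostic sketch, assuming portmapper-based rpcinfo is available (the helper name check_lock_services is mine), to be run on both client and server:

```shell
#!/bin/sh
# Hypothetical diagnostic: the nfslock service starts statd (lockd lives in
# the kernel); both the "nlockmgr" and "status" RPC programs must be
# registered with the portmapper for lock recovery to work after a reboot.
check_lock_services() {
    if ! command -v rpcinfo >/dev/null 2>&1; then
        echo "rpcinfo not available"
        return 0
    fi
    # Column 5 of 'rpcinfo -p' is the service name.
    rpcinfo -p 2>/dev/null | awk '
        $5 == "nlockmgr" { l = 1 }
        $5 == "status"   { s = 1 }
        END { if (l && s) print "lock services registered";
              else        print "lock services missing" }'
}
check_lock_services
```

If "status" is missing on either side, the reboot notification described below never happens and old locks are never dropped.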
Well, it does look like a dup indeed.
"the clients tell the server that they have entered a clean new boot: the
server should drop the client's locks at that point." but something fails and
server does not drop the locks. I tried solaris 2.6 and red hat 7.2 as nfs
servers with identical results.
lockd is running of course.
You just rebooted normally though, no kernel crash or power buttons involved?
No crash or power button. gconfd is still running when shutdown is initiated and
does not exit properly; that is why the locks are left behind. Just follow my
scenario and you get the same result every time.
We are seeing the same thing here.
Question: Why isn't gconfd shut down properly on reboot?
gconfd should in theory get killall'd -TERM just like everything else on reboot.
I have a feeling that NFS starts its shutdown process before gconfd is killed
by killall -TERM. There is apparently a (failing) attempt by the NFS shutdown
code to kill gconfd, since a running gconfd prevents a clean NFS umount.
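The ordering that should happen can be sketched; a minimal stand-in, assuming a background 'sleep' plays the role of gconfd:

```shell
#!/bin/sh
# Sketch of what a clean reboot should do to gconfd (assumption: 'sleep'
# stands in for the daemon). If the TERM arrives while the NFS home is
# still mounted, gconfd can close its lock files and no .nfs* residue
# is left behind; if NFS shuts down first, the TERM comes too late.
sleep 60 &
pid=$!

kill -TERM "$pid"          # what 'killall -TERM gconfd' delivers on shutdown
wait "$pid" 2>/dev/null    # reap it -- this step must come BEFORE the
                           # NFS umount, which is exactly what seems to
                           # be violated here

if ! kill -0 "$pid" 2>/dev/null; then
    echo "daemon exited; locks would be released"
fi
```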
We now pop up a dialog asking if you want to remove stale locks.