I can't reproduce this using upstream 12.2.1 (single mds). Did your test have multiple mds? (I suspect the session was opened during metadata migration; see Server::prepare_force_open_sessions.)
External Bug ID: Ceph Project Bug Tracker 21746 PR: https://github.com/ceph/ceph/pull/18215 The fix does not solve the issue completely. When mds_session_blacklist_on_evict is false, the "session evict" command never works reliably, because the mds may forcibly open a client session during subtree migration (the importer mds opens sessions that are open on the exporter mds).
https://github.com/ceph/ceph/pull/18299
(In reply to Patrick Donnelly from comment #6) > https://github.com/ceph/ceph/pull/18299 @Patrick, it looks like there is a fix available for this bug. The target release is set to 3.1. Will this be fixed in 3.0 or 3.1?
This one is tricky to test. Please:
1. setup 2 active mds
2. ceph-fuse mount
3. mkdir dir0 dir1
4. setfattr -n ceph.dir.pin -v 0 dir0; setfattr -n ceph.dir.pin -v 1 dir1
5. ls dir1
6. mkdir dir0/dir; cd dir0/dir
7. evict client session from mds0
8. setfattr -n ceph.dir.pin -v 1 dir0  # export dir0 to mds.1
9. check client metadata on mds.1
Did you run step 8 on the client that was evicted? If so, please run step 8 on another client.
Sorry, I made a mistake. Corrected steps:
1. setup 2 active mds; set config "mds session blacklist on evict" to false on both mds
2. create two ceph-fuse mounts
3. client.0: mkdir dir0 dir1
4. client.0: setfattr -n ceph.dir.pin -v 0 dir0; setfattr -n ceph.dir.pin -v 1 dir1; sleep 10
5. client.0: touch dir1/file
6. client.0: mkdir dir0/dir; cd dir0/dir
7. evict client.0's session from mds0
8. client.1: setfattr -n ceph.dir.pin -v 0 dir1  # export dir1 to mds.0
9. check client sessions on mds.0  # a session for client.0 should exist, but with no client_metadata
10. ceph --admin-daemon client.0.asok kick_stale_sessions
11. client.0: run the ls command
12. check client metadata on mds.0  # client.0's session should have client_metadata by this time
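Steps 9 and 12 above amount to inspecting the JSON printed by `ceph daemon mds.<rank> session ls` and checking whether each session carries `client_metadata`. A minimal helper sketch for that check; the helper itself is hypothetical (not part of Ceph), and the sample record shape is an assumption modeled on typical `session ls` output (a JSON array of objects with `id`, `state`, and `client_metadata` fields):

```python
import json

def sessions_missing_metadata(session_ls_json):
    """Return the ids of sessions whose client_metadata is empty or absent.

    `session_ls_json` is the JSON text printed by
    `ceph daemon mds.<rank> session ls` (assumed to be a JSON array).
    """
    sessions = json.loads(session_ls_json)
    return [s.get("id") for s in sessions if not s.get("client_metadata")]

# Sample output: a forcibly opened session (step 9) typically has empty
# client_metadata, while a normally opened one (step 12) carries it.
sample = json.dumps([
    {"id": 4123, "state": "open", "client_metadata": {}},
    {"id": 4456, "state": "open",
     "client_metadata": {"entity_id": "client.1", "hostname": "node1"}},
])
print(sessions_missing_metadata(sample))  # → [4123]
```

In the repro above, the id flagged after step 9 should disappear from this list once steps 10 and 11 re-establish client.0's session.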
yes
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:1259