Bug 1499806

Summary: [CephFS]:- Client_metadata not populated in session info after client eviction and reconnect
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: shylesh <shmohan>
Component: CephFS
Assignee: Yan, Zheng <zyan>
Status: CLOSED ERRATA
QA Contact: Parikshith <pbyregow>
Severity: low
Priority: low
Version: 3.0
CC: ceph-eng-bugs, ceph-qe-bugs, hnallurv, john.spray, kdreyer, pdonnell, zyan
Target Milestone: z2
Target Release: 3.0
Hardware: x86_64
OS: Linux
Fixed In Version: RHEL: ceph-12.2.4-1.el7cp; Ubuntu: ceph_12.2.4-2redhat1
Doc Type: No Doc Update
Last Closed: 2018-04-26 17:38:39 UTC
Type: Bug
Bug Depends On: 1548067    

Comment 3 Yan, Zheng 2017-10-10 02:36:36 UTC
I can't reproduce this using upstream 12.2.1 (single mds). Did your test have multiple mds? (I suspect the session was opened during metadata migration, Server::prepare_force_open_sessions.)
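
For reference, a minimal sketch of enabling multiple active mds (assumptions, not from this report: a Luminous-era cluster with a filesystem named "cephfs"; on some releases allow_multimds is already on by default):

    ceph fs set cephfs allow_multimds true   # may be a no-op if already enabled
    ceph fs set cephfs max_mds 2             # ask for two active mds ranks
    ceph mds stat                            # confirm ranks 0 and 1 are active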

Comment 5 Yan, Zheng 2017-10-10 10:03:03 UTC
External Bug ID: Ceph Project Bug Tracker 21746
PR https://github.com/ceph/ceph/pull/18215

The fix does not solve the issue completely.

When mds_session_blacklist_on_evict is false, the "session evict" command never works reliably, because the mds may forcibly open a client session during subtree migration (the importer mds opens the sessions that are open on the exporter mds).
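
A minimal sketch of the setting this refers to (assumption: it is set in the [mds] section of ceph.conf on both mds hosts; injecting it into running daemons with "ceph tell mds.* injectargs" would also work):

    [mds]
    # With blacklisting disabled, an evicted client is not blacklisted in the
    # OSD map, so a later subtree migration can silently re-open its session
    # on the importer mds (Server::prepare_force_open_sessions).
    mds session blacklist on evict = false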

Comment 6 Patrick Donnelly 2017-10-14 00:36:27 UTC
https://github.com/ceph/ceph/pull/18299

Comment 7 Harish NV Rao 2017-10-18 10:04:15 UTC
(In reply to Patrick Donnelly from comment #6)
> https://github.com/ceph/ceph/pull/18299

@Patrick, it looks like there is a fix available for this bug. The target release is set to 3.1.

Will this be fixed in 3.0 or 3.1?

Comment 14 Yan, Zheng 2018-03-23 02:04:36 UTC
This one is tricky to test; please test as follows (a command sketch for steps 7 and 9 follows the list):


1. set up 2 active mds
2. ceph-fuse mount
3. mkdir dir0 dir1
4. setfattr -n ceph.dir.pin -v 0 dir0; setfattr -n ceph.dir.pin -v 1 dir1
5. ls dir1
6. mkdir dir0/dir; cd dir0/dir
7. evict client session from mds0
8. setfattr -n ceph.dir.pin -v 1 dir0 # export dir0 to mds.1
9. check client metadata on mds.1
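
A sketch of steps 7 and 9 as concrete commands (assumptions, not from this report: the "session ls"/"session evict" tell commands are available on this release, the admin socket equivalents via "ceph daemon mds.<name>" work as well, and <client-id> is read from the session listing):

    ceph tell mds.0 session ls                      # find the id of the ceph-fuse client
    ceph tell mds.0 session evict id=<client-id>    # step 7: evict the session from mds.0
    ceph tell mds.1 session ls                      # step 9: inspect client_metadata on mds.1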

Comment 16 Yan, Zheng 2018-03-28 11:07:47 UTC
Did you run step 8 on the client that was evicted? If you did, please run step 8 on another client.

Comment 18 Yan, Zheng 2018-03-28 13:40:12 UTC
Sorry, I made a mistake. Here are the corrected steps (a scripted sketch follows the list):

1. set up 2 active mds; set the config "mds session blacklist on evict" to false on both mds
2. create two ceph-fuse mounts. 
3. client.0: mkdir dir0 dir1
4. client.0: setfattr -n ceph.dir.pin -v 0 dir0; setfattr -n ceph.dir.pin -v 1 dir1; sleep 10
5. client.0: touch dir1/file
6. client.0: mkdir dir0/dir; cd dir0/dir
7. evict client.0's session from mds0
8. client.1: setfattr -n ceph.dir.pin -v 0 dir1 # export dir1 to mds.0
9. check client sessions on mds.0 # session for client.0 should exist, but have no client_metadata
10. ceph --admin-daemon client.0.asok kick_stale_sessions
11. client.0: run command ls
12. check client metadata on mds.0 # client.0's session should have client_metadata at this time
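
A scripted sketch of the corrected steps (assumptions, not part of the original report: the two ceph-fuse mounts are at /mnt/client0 and /mnt/client1, the "session ls"/"session evict" tell commands are available on this release, <client0-id> is read from the session listing, and client.0.asok stands for client.0's actual admin socket path):

    # steps 3-6, run on the client.0 mount
    cd /mnt/client0
    mkdir dir0 dir1
    setfattr -n ceph.dir.pin -v 0 dir0        # pin dir0 to mds.0
    setfattr -n ceph.dir.pin -v 1 dir1        # pin dir1 to mds.1
    sleep 10
    touch dir1/file
    mkdir dir0/dir; cd dir0/dir

    # step 7: evict client.0's session from mds.0
    ceph tell mds.0 session ls                       # note client.0's id
    ceph tell mds.0 session evict id=<client0-id>

    # step 8, run from client.1: export dir1 to mds.0
    setfattr -n ceph.dir.pin -v 0 /mnt/client1/dir1

    # step 9: client.0's session exists on mds.0 but has no client_metadata
    ceph tell mds.0 session ls

    # steps 10-12: reconnect client.0 and recheck
    ceph --admin-daemon client.0.asok kick_stale_sessions
    ls                                               # step 11, on client.0
    ceph tell mds.0 session ls                       # client_metadata should now be populated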

Comment 20 Yan, Zheng 2018-03-29 13:46:47 UTC
yes

Comment 25 errata-xmlrpc 2018-04-26 17:38:39 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:1259