Created attachment 1420491 [details]
Screenshot of kernel client's console
Description of problem:
A kernel client (kclient) with an fstab entry for its mount point failed to come up after a reboot. The clients are VMs. This happened while IOs were running on the other clients (2 FUSE and 1 kernel).
Version-Release number of selected component (if applicable):
Ceph: ceph version 12.2.4-6.el7cp (78f60b924802e34d44f7078029a40dbe6c0c922f) luminous (stable)
OS: RHEL 7.5
Steps to Reproduce:
1. Set up a Ceph cluster with 4 clients (2 FUSE and 2 kernel)
2. Add an fstab entry on each client and make sure the mount point exists (a sample entry is shown after this list)
3. Run IOs in parallel on 3 clients and reboot the fourth client; after the reboot, IOs should start again on the rebooted client
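For reference, a minimal sketch of the kind of kernel-client fstab entry used here (monitor addresses, mount point, user name, and secretfile path are placeholders, not the exact values from this setup):
  mon1:6789,mon2:6789,mon3:6789:/  /mnt/cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev  0  0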
Actual results:
The kernel client failed to come up after reboot, while the FUSE clients came up successfully.
Expected results:
Kernel clients should come up after reboot and run IOs.
Additional info:
A screenshot of the client's console is attached.
Yes, this should be a QE setup issue. Does the fstab entry include _netdev?
Please attach the kernel dmesg.
The mount happened before the network was online. I don't know why the _netdev option did not work. Try putting _netdev at the beginning of the option list?
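For example, the same placeholder entry as above with only the option order changed:
  mon1:6789,mon2:6789,mon3:6789:/  /mnt/cephfs  ceph  _netdev,name=admin,secretfile=/etc/ceph/admin.secret,noatime  0  0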
I think this is a mount(8) issue. Shreekar, could you check whether 'mount -a -O no_netdev' works as expected?
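Per mount(8), the 'no' prefix negates a test option, so this command should mount everything in fstab except entries carrying _netdev; if the CephFS entry still gets attempted (or hangs), mount is not honoring the option:
  mount -a -O no_netdev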
Have you tried the fstab entry with secret= instead of secretfile=?
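That is, something like the following sketch (the key is a placeholder, and client.admin is just an example user name):
  # retrieve the key for the mount user
  ceph auth get-key client.admin
  # fstab entry using secret= with that key instead of secretfile=
  mon1:6789,mon2:6789,mon3:6789:/  /mnt/cephfs  ceph  name=admin,secret=<key-from-above>,_netdev  0  0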
FYI: this could be http://tracker.ceph.com/issues/24202
please ignore my previous comment
I still can't reproduce this locally. Could you set up a test environment for me?
Yes, I have mailed the setup info
I think this issue happens only under heavy IO. I checked the MDS log; the MDS did receive the session open request from the client, but it took several minutes to flush the session open log event (note the gap between the two log lines below). The client mount timed out before the log event got flushed.
2018-06-20 10:38:03.216253 7f3eae373700 5 mds.1.log _submit_thread 3851465531~256 : ESession client.825001 172.16.115.94:0/1151192669 open cmapv 179381
2018-06-20 10:40:42.718710 7f3eaf375700 10 mds.1.server _session_logged client.825001 172.16.115.94:0/1151192669 state_seq 1 open 179381
I think this is a cluster config issue. The CephFS data pool and metadata pool are on the same set of OSDs, so heavy data IO significantly slows down metadata IO.
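One way to check whether the two pools really map to the same OSDs (the pool names cephfs_data/cephfs_metadata are assumptions; take the actual names from 'ceph fs ls'):
  ceph fs ls                                    # pools backing the filesystem
  ceph osd pool get cephfs_metadata crush_rule
  ceph osd pool get cephfs_data crush_rule
  ceph osd crush rule dump                      # which devices/hosts each rule maps to
  ceph osd perf                                 # per-OSD commit/apply latency under load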
Zheng, I think we could detect this situation by monitoring the average RTT of object writes for the journal. Then we would be able to give a cluster health warning. What do you think?
yes, it makes sense to me
Please retest this BZ with the fix from bz1607590. If the problem is related to the MDS suffering from slow writes to the OSDs, then this issue should be closed. (It's a cluster configuration issue and not a real bug.)
The client can't connect to the monitor; this looks more like a network issue.
If the network is not reachable, what else can CephFS do?
The network disconnect happens only when there is a kernel fstab entry; a normal reboot of the kernel client without an fstab entry does not cause any issues like network disconnection.
Systemd tried to mount CephFS before the network was ready. The mount got stuck, so systemd had no chance to start the network.
I still think this is a mount(8) issue. Try putting _netdev before the other options.
The content is already published on the Portal: