Bug 1559749 - [CephFS]: IO is hanging while doing rsync
Status: CLOSED ERRATA
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: CephFS
Version: 3.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: z3
Target Release: 3.0
Assigned To: Patrick Donnelly
QA Contact: Shreekar
Docs Contact: Aron Gunn
Duplicates: 1594760 (view as bug list)
Depends On:
Blocks: 1557269
Reported: 2018-03-23 04:20 EDT by Shreekar
Modified: 2018-06-29 19:58 EDT
CC List: 14 users

See Also:
Fixed In Version: RHEL: ceph-12.2.4-10.el7cp Ubuntu: ceph_12.2.4-14redhat1xenial
Doc Type: Bug Fix
Doc Text:
.Reducing the number of active MDS daemons on CephFS no longer causes the kernel client's I/O to hang
Previously, reducing the number of active Metadata Server (MDS) daemons on a Ceph File System (CephFS) could cause a kernel client's I/O to hang, because kernel clients were unable to connect to MDS ranks greater than or equal to `max_mds`. This issue has been fixed in this release.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-05-15 14:20:31 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments


External Trackers
Tracker ID Priority Status Summary Last Updated
Ceph Project Bug Tracker 22969 None None None 2018-04-25 13:13 EDT
Ceph Project Bug Tracker 23154 None None None 2018-04-25 13:03 EDT
Red Hat Product Errata RHBA-2018:1563 None None None 2018-05-15 14:21 EDT

Description Shreekar 2018-03-23 04:20:03 EDT
Description of problem:
While running automated scripts to test the rsync module, I/Os hung for many hours. Several I/O tools were used (dd, fio, Crefi, touch).

Version-Release number of selected component (if applicable):
ceph version 12.2.4-4.el7cp (bfc2b497ab362f2b3afa7bd1f9d0053f74b60d66) luminous (stable)

How reproducible:
Always

Steps to Reproduce:
1. Set up a Ceph cluster and mount CephFS with both ceph-fuse and kernel clients on the same mount point.
2. Run I/Os on the mount point, and use rsync to sync data from a local directory to the mount directory and from the mount directory back to a local directory.
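The steps above might look roughly like the following. This is an illustrative sketch, not the actual test script: the filesystem name, monitor host, mount points, and local paths are all stand-ins.

```shell
# Mount CephFS with the kernel client (monitor host and paths are illustrative)
mount -t ceph mon-host:6789:/ /mnt/cephfs-kernel \
    -o name=admin,secretfile=/etc/ceph/admin.secret

# Mount the same filesystem with ceph-fuse on another client
ceph-fuse --name client.admin /mnt/cephfs-fuse

# Generate I/O on the mount (dd shown; fio, Crefi, and touch were also used)
dd if=/dev/zero of=/mnt/cephfs-kernel/testfile bs=1M count=1024

# Sync data from a local directory into the mount, and back out again
rsync -av /home/user/data/ /mnt/cephfs-kernel/data/
rsync -av /mnt/cephfs-kernel/data/ /home/user/data-copy/
```

These commands require a running Ceph cluster and client keyrings, so they are shown only to make the reproduction steps concrete.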

Actual results:
I/Os hang and never complete.

Expected results:

I/Os should complete successfully and the sync should finish.

Additional info:
Logs from the clients and the two active MDSs are attached.
Comment 16 Yan, Zheng 2018-03-28 21:21:29 EDT
looks like ceph_mdsc_open_export_target_session(mdsc, target) return error. the function only return -ENOMEM (unlikely in this case) and -EINVAL. It return -EINVAL when "target >= mdsmap->m_max_mds". did you change max_mds from 2 to 1 during the test?
Comment 17 Shreekar 2018-03-28 23:58:49 EDT
(In reply to Yan, Zheng from comment #16)
> looks like ceph_mdsc_open_export_target_session(mdsc, target) return error.
> the function only return -ENOMEM (unlikely in this case) and -EINVAL. It
> return -EINVAL when "target >= mdsmap->m_max_mds". did you change max_mds
> from 2 to 1 during the test?

Yes, as cleanup for the previous test.
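For reference, on a Luminous (12.2.x) cluster reducing the active MDS count took two steps, since lowering `max_mds` alone did not stop the extra rank. The filesystem name `cephfs` below is illustrative:

```shell
# Lower the cap on active MDS daemons
ceph fs set cephfs max_mds 1

# On Luminous, the now-extra rank must then be deactivated explicitly
ceph mds deactivate cephfs:1
```

Both commands operate on a live cluster, so they are shown only to make the "max_mds from 2 to 1" step concrete.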
Comment 39 errata-xmlrpc 2018-05-15 14:20:31 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:1563
Comment 40 Patrick Donnelly 2018-06-29 19:58:06 EDT
*** Bug 1594760 has been marked as a duplicate of this bug. ***
