Bug 2273836
Summary: | fix slow ops and last_leader logic | ||
---|---|---|---|
Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Aviv Caro <acaro> |
Component: | NVMeOF | Assignee: | Alexander Indenbaum <aindenba> |
Status: | CLOSED ERRATA | QA Contact: | Manohar Murthy <mmurthy> |
Severity: | high | Docs Contact: | ceph-doc-bot <ceph-doc-bugzilla> |
Priority: | unspecified | ||
Version: | 7.1 | CC: | akraj, cephqe-warriors, hchebrol, mmurthy, sunnagar, tserlin, vereddy |
Target Milestone: | --- | ||
Target Release: | 7.1 | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | | |
Fixed In Version: | ceph-18.2.1-129.el9cp | Doc Type: | No Doc Update |
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2024-06-13 14:31:24 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | | |
Description
Aviv Caro
2024-04-07 06:54:14 UTC
Fixed in ceph-18.2.1-129 by commit 9a9a02fac84496a5c2c6c8bed5b0a5b83560cfca.

Tested with build:
nvmeof_image=registry-proxy.engineering.redhat.com/rh-osbs/ceph-nvmeof:1.2.4-1
nvmeof_cli_image=registry-proxy.engineering.redhat.com/rh-osbs/ceph-nvmeof-cli:1.2.4-1

With the latest build we are no longer seeing the slow ops issue; it is working fine:
http://magna002.ceph.redhat.com/cephci-jenkins/test-runs/openstack/RH/7.1/rhel-9/Regression/18.2.1-150/nvmeotcp/103/tier-2_nvmeof_functional_Regression/
Hence, closing this.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Critical: Red Hat Ceph Storage 7.1 security, enhancements, and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:3925