Bug 2018110
| Summary: | Ceph monitor crash after upgrade from ceph | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Venky Shankar <vshankar> |
| Component: | CephFS | Assignee: | Patrick Donnelly <pdonnell> |
| Status: | CLOSED ERRATA | QA Contact: | Amarnath <amk> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 5.0 | CC: | ceph-eng-bugs, tserlin, vereddy |
| Target Milestone: | --- | | |
| Target Release: | 5.1 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | ceph-16.2.6-20.el8cp | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2022-04-04 10:22:22 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Venky Shankar
2021-10-28 09:35:06 UTC
Upgraded from 14.2.11-208.el8cp to 16.2.0-146.el8cp. Did not observe any clone or abort operations in the mon logs; attached the mon logs from all the nodes. IOs were running during the upgrade as well.

Setup details before upgrade:

```
[root@ceph-bz-verify-jgwibw-node7 ~]# ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]

[root@ceph-bz-verify-jgwibw-node7 ~]# ceph df
RAW STORAGE:
    CLASS     SIZE        AVAIL       USED       RAW USED     %RAW USED
    hdd       180 GiB     168 GiB     83 MiB     12 GiB       6.71
    TOTAL     180 GiB     168 GiB     83 MiB     12 GiB       6.71

POOLS:
    POOL                    ID     STORED      OBJECTS     USED        %USED     MAX AVAIL
    cephfs_data             1      132 B       1           192 KiB     0         53 GiB
    cephfs_metadata         2      42 KiB      22          1.5 MiB     0         53 GiB
    .rgw.root               3      1.3 KiB     4           768 KiB     0         53 GiB
    default.rgw.control     4      0 B         8           0 B         0         53 GiB
    default.rgw.meta        5      374 B       2           384 KiB     0         53 GiB
    default.rgw.log         6      3.5 KiB     49          6.2 MiB     0         53 GiB
    rbd                     7      0 B         0           0 B         0         53 GiB

[root@ceph-bz-verify-jgwibw-node7 ~]# ceph -s
  cluster:
    id:     37dc81e0-e59c-4aa0-b819-abeb2eee717b
    health: HEALTH_WARN
            1 pools have too few placement groups
            mons are allowing insecure global_id reclaim

  services:
    mon:     3 daemons, quorum ceph-bz-verify-jgwibw-node2,ceph-bz-verify-jgwibw-node3,ceph-bz-verify-jgwibw-node1-installer (age 35h)
    mgr:     ceph-bz-verify-jgwibw-node1-installer(active, since 35h), standbys: ceph-bz-verify-jgwibw-node2
    mds:     cephfs:1 {0=ceph-bz-verify-jgwibw-node5=up:active} 2 up:standby
    osd:     12 osds: 12 up (since 35h), 12 in (since 35h)
    rgw-nfs: 2 daemons active (ceph-bz-verify-jgwibw-node4, ceph-bz-verify-jgwibw-node6)

  data:
    pools:   7 pools, 208 pgs
    objects: 86 objects, 47 KiB
    usage:   12 GiB used, 168 GiB / 180 GiB avail
    pgs:     208 active+clean

[root@ceph-bz-verify-jgwibw-node7 ~]# ceph fs set cephfs allow_standby_replay false

[root@ceph-bz-verify-jgwibw-node7 ~]# ceph version
ceph version 14.2.11-208.el8cp (6738ba96f296a41c24357c12e8d594fbde457abc) nautilus (stable)
```

After upgrade:

```
[root@ceph-bz-verify-jgwibw-node7 cephfs_fuse]# ceph -s
  cluster:
    id:     37dc81e0-e59c-4aa0-b819-abeb2eee717b
    health: HEALTH_WARN
            mons are allowing insecure global_id reclaim
            1 pools have too few placement groups

  services:
    mon:     3 daemons, quorum ceph-bz-verify-jgwibw-node2,ceph-bz-verify-jgwibw-node3,ceph-bz-verify-jgwibw-node1-installer (age 29m)
    mgr:     ceph-bz-verify-jgwibw-node1-installer(active, since 18m), standbys: ceph-bz-verify-jgwibw-node2
    mds:     1/1 daemons up, 2 standby
    osd:     12 osds: 12 up (since 24m), 12 in (since 37h)
    rgw-nfs: 2 daemons active (2 hosts, 1 zones)

  data:
    volumes: 1/1 healthy
    pools:   8 pools, 209 pgs
    objects: 220 objects, 474 MiB
    usage:   3.3 GiB used, 177 GiB / 180 GiB avail
    pgs:     209 active+clean

  io:
    client:  2.7 KiB/s rd, 21 MiB/s wr, 1 op/s rd, 33 op/s wr

[root@ceph-bz-verify-jgwibw-node7 cephfs_fuse]# ceph version
ceph version 16.2.0-146.el8cp (56f5e9cfe88a08b6899327eca5166ca1c4a392aa) pacific (stable)
```

Setup details:

```
ceph-BZ_Verify-JGWIBW-node7             10.0.208.121
ceph-BZ_Verify-JGWIBW-node6             10.0.211.48
ceph-BZ_Verify-JGWIBW-node5             10.0.209.32
ceph-BZ_Verify-JGWIBW-node4             10.0.210.249
ceph-BZ_Verify-JGWIBW-node3             10.0.209.157
ceph-BZ_Verify-JGWIBW-node2             10.0.209.20
ceph-BZ_Verify-JGWIBW-node1-installer   10.0.211.212
```

@vshankar, can you please confirm if this is sufficient?

> @vshankar ,
> Can you please confirm if this is sufficient?
Looks good.
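The verification above boils down to comparing cluster health before and after the upgrade. As a minimal sketch, the go/no-go decision could be scripted around the first token of `ceph health` output; the `preupgrade_check` helper below is hypothetical and illustrative only, not part of Ceph or of the procedure used in this bug.

```shell
# Hypothetical gate: decide whether an upgrade should proceed based on the
# cluster health token (the first word of `ceph health` output).
# Illustrative sketch only -- not part of Ceph.
preupgrade_check() {
    case "$1" in
        HEALTH_OK)
            return 0 ;;                       # healthy: safe to proceed
        HEALTH_WARN)
            echo "warnings present; review before upgrading" >&2
            return 1 ;;                       # e.g. insecure global_id reclaim
        *)
            echo "cluster unhealthy; aborting upgrade" >&2
            return 2 ;;                       # HEALTH_ERR or unknown state
    esac
}

# Typical usage on a live cluster (requires the ceph CLI):
#   preupgrade_check "$(ceph health | awk '{print $1}')" || exit $?
```

In this report the cluster was HEALTH_WARN both before and after the upgrade (insecure global_id reclaim, too few placement groups), so a gate like this would have surfaced the warnings for review rather than blocking outright.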
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Red Hat Ceph Storage 5.1 Security, Enhancement, and Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:1174