
Bug 2342033

Summary: [GSS][CephFS] Adding mpath osds causing corruption
Product: [Red Hat Storage] Red Hat Ceph Storage
Component: CephFS
Version: 6.1
Target Release: 8.1
Hardware: All
OS: All
Status: CLOSED NOTABUG
Severity: urgent
Priority: unspecified
Reporter: kelwhite
Assignee: Venky Shankar <vshankar>
QA Contact: sumr
CC: aglotov, bkunal, ceph-eng-bugs, cephqe-warriors, gfarnum, knakai, lithomas, mashetty, mduasope, mmanjuna, ngangadh, pdhange, pdonnell, rsachere, smitra, sshome, tpetr, vshankar
Flags: pdhange: needinfo-, sshome: needinfo-, gfarnum: needinfo-
Target Milestone: ---
Doc Type: If docs needed, set a value
Story Points: ---
Last Closed: 2025-04-30 08:12:52 UTC
Type: Bug
Regression: ---
Mount Type: ---
Bug Depends On: 2342728, 2343968, 2343973

Description kelwhite 2025-01-24 22:59:38 UTC

Comment 25 kelwhite 2025-01-28 01:09:40 UTC
We still have damaged metadata:

sh-5.1$ ceph -s
  cluster:
    id:     82d0f530-de8d-400d-a41f-10b92a738e80
    health: HEALTH_ERR
            1 MDSs report damaged metadata
            insufficient standby MDS daemons available
            noscrub,nodeep-scrub flag(s) set
            11 pgs not deep-scrubbed in time
            25 daemons have recently crashed
 
  services:
    mon: 5 daemons, quorum m,n,o,p,q (age 39m)
    mgr: b(active, since 12h), standbys: a
    mds: 1/1 daemons up
    osd: 24 osds: 18 up (since 17h), 18 in (since 17h)
         flags noscrub,nodeep-scrub
    rgw: 2 daemons active (2 hosts, 1 zones)
 
  data:
    volumes: 1/1 healthy
    pools:   12 pools, 473 pgs
    objects: 35.53M objects, 16 TiB
    usage:   66 TiB used, 78 TiB / 144 TiB avail
    pgs:     473 active+clean
 
  io:
    client:   5.7 KiB/s wr, 0 op/s rd, 0 op/s wr
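
The damaged-metadata and recent-crash warnings above are usually inspected with the standard Ceph CLI; the commands below are a minimal sketch, assuming a single CephFS filesystem named "cephfs" with its active MDS at rank 0 (the actual filesystem name and rank on this cluster may differ):

sh-5.1$ ceph tell mds.cephfs:0 damage ls    # dump the damage table recorded by the active MDS
sh-5.1$ ceph crash ls                       # list the recently reported daemon crashes
sh-5.1$ ceph crash info <crash-id>          # show details for a single crash entry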

Comment 63 kelwhite 2025-02-03 16:40:12 UTC
Setting needinfo on Greg for comment 62.

Comment 70 Red Hat Bugzilla 2025-08-29 04:25:07 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days