Bug 1416923

Summary: RHSC - pgs stuck unclean after attempting to add already existing pool
Product: [Red Hat Storage] Red Hat Storage Console
Reporter: Vikhyat Umrao <vumrao>
Component: Ceph
Assignee: Shubhendu Tripathi <shtripat>
Ceph sub component: events
QA Contact: sds-qe-bugs
Status: CLOSED DUPLICATE
Docs Contact:
Severity: high
Priority: high
CC: nthomas, tcole
Version: 2
Target Milestone: ---
Target Release: 3
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-02-08 22:55:01 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Vikhyat Umrao 2017-01-26 19:38:25 UTC
Description of problem:
RHSC - pgs stuck unclean after attempting to add already existing pool 

Adding a pool that already exists from the Console caused all placement groups of this pool to go unclean - remapped and undersized.

All of these PGs reference a nonexistent OSD: 2147483647.


Version-Release number of selected component (if applicable):
Red Hat Storage Console 

Logs have been requested. 

This happened from RHSC; the CLI works fine. It looks like an issue with the Console APIs, possibly with the return codes: pool creation failed, but the APIs had already done part of the work on this pool's PGs, which caused all PGs to go unclean and get mapped to the nonexistent OSD id 2147483647.
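
For reference, the stuck PGs and the placeholder OSD id can be inspected from any admin/monitor node with standard Ceph commands. This is only a diagnostic sketch; <pgid> is a placeholder for a real placement group id:

    # List PGs stuck in the unclean state
    ceph pg dump_stuck unclean

    # Show cluster health detail, including undersized/remapped PG counts
    ceph health detail

    # Show the up/acting OSD sets for one PG; 2147483647 (0x7fffffff) is the
    # value Ceph reports when CRUSH cannot map the PG to any OSD
    ceph pg map <pgid>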

Additional info:

For now, we have fixed this issue as given below (a command sketch follows the list), because this cluster was a brand new cluster.

1. Created a new pool with a different name
2. Ran 'cppool' to copy the data into this new pool
3. Deleted the old pool which had the issue
4. Renamed the new pool to the old pool name
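
Roughly, the steps above map to the following commands (pool names and the pg count are placeholders; the exact invocations were not captured here):

    # 1. Create a new pool with a different name
    ceph osd pool create <newpool> 128

    # 2. Copy all objects from the broken pool into the new pool
    rados cppool <oldpool> <newpool>

    # 3. Delete the old pool which had the issue
    ceph osd pool delete <oldpool> <oldpool> --yes-i-really-really-mean-it

    # 4. Rename the new pool back to the original name
    ceph osd pool rename <newpool> <oldpool>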

- The Ceph cluster and clients are working fine.