Bug 1416923 - RHSC - pgs stuck unclean after attempting to add already existing pool
Summary: RHSC - pgs stuck unclean after attempting to add already existing pool
Keywords:
Status: CLOSED DUPLICATE of bug 1401906
Alias: None
Product: Red Hat Storage Console
Classification: Red Hat Storage
Component: Ceph
Version: 2
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: 3
Assignee: Shubhendu Tripathi
QA Contact: sds-qe-bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-01-26 19:38 UTC by Vikhyat Umrao
Modified: 2020-03-11 15:39 UTC (History)
CC List: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-02-08 22:55:01 UTC
Embargoed:


Attachments

Description Vikhyat Umrao 2017-01-26 19:38:25 UTC
Description of problem:
RHSC - pgs stuck unclean after attempting to add already existing pool 

Adding a pool that already exists from the Console caused all placement groups of that pool to go unclean - remapped and undersized.

All of these placement groups reference a nonexistent OSD ID: 2147483647.
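
For reference, the stuck PGs and their mappings can be checked with the standard Ceph CLI (the PG ID below is a placeholder, and the commands are generic rather than taken from this cluster). OSD ID 2147483647 is the placeholder value CRUSH reports when it cannot map a PG to any OSD:

# List PGs stuck unclean together with their up/acting OSD sets
$ ceph pg dump_stuck unclean
$ ceph health detail

# Inspect the mapping of a single affected PG
$ ceph pg map <pgid>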


Version-Release number of selected component (if applicable):
Red Hat Storage Console 

Logs have been requested. 

This happened from RHSC; the CLI works fine. The issue looks like it is in the Console APIs - possibly in how return codes are handled - because pool creation failed, yet the APIs had already done part of the work on the PGs of this pool, which caused all PGs to go unclean and get mapped to the stale OSD ID 2147483647.
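
For comparison, creating the pool from the CLI (the path that behaves correctly here) would look roughly like this; the pool name and PG counts below are placeholders:

# Check whether a pool with this name already exists
$ ceph osd lspools

# Create the pool from the CLI
$ ceph osd pool create testpool 128 128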

Additional info:

For now, we have worked around this issue as given below (a rough command sketch follows the steps), because this cluster was a brand new cluster.

1. Created a new pool with a different name
2. Copied the data into the new pool with 'cppool'
3. Deleted the old pool which had the issue
4. Renamed the new pool to the old pool name

- The Ceph cluster and clients are working fine.
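
Roughly, the workaround above corresponds to the following standard Ceph commands (pool names and PG count are placeholders; note that 'rados cppool' has limitations, e.g. around snapshots, which is one reason this was only acceptable on a brand new cluster):

# 1. Create a new pool with a different name
$ ceph osd pool create newpool 128 128

# 2. Copy all objects from the broken pool into the new one
$ rados cppool oldpool newpool

# 3. Delete the old pool that had the issue
$ ceph osd pool delete oldpool oldpool --yes-i-really-really-mean-it

# 4. Rename the new pool back to the original name
$ ceph osd pool rename newpool oldpool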

