Bug 1642026 - purge and redeploy has no active rgw daemons
Summary: purge and redeploy has no active rgw daemons
Keywords:
Status: CLOSED DUPLICATE of bug 1633563
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Ansible
Version: 3.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: 3.*
Assignee: Sébastien Han
QA Contact: ceph-qe-bugs
URL:
Whiteboard:
Depends On:
Blocks: 1641792
 
Reported: 2018-10-23 12:11 UTC by John Harrigan
Modified: 2022-02-21 18:08 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-10-24 15:49:51 UTC
Embargoed:



Description John Harrigan 2018-10-23 12:11:57 UTC
Description of problem:
  Purge an existing RHCS 3.1 cluster that has RGW daemons, then redeploy: the
  redeployed cluster has no RGW daemons. If /var/lib/ceph is manually deleted
  after the purge, the redeployed cluster does include RGW daemons.

Version-Release number of selected component (if applicable):
  * ceph-ansible.noarch                   3.1.5-1.el7cp
  * ceph version 12.2.5-42.el7cp (82d52d7efa6edec70f6a0fc306f40b89265535fb) luminous (stable)

How reproducible:
  experienced multiple times

Steps to Reproduce:
1. Existing RHCS 3.1 ceph cluster with RGW daemons
2. Purge the cluster
   # ansible-playbook purge-cluster.yml
3. Deploy the cluster (note there are no active rgw daemons)
   # ansible-playbook site.yml
4. Repeat the procedure, this time manually deleting the /var/lib/ceph directory
   Workaround: manually remove /var/lib/ceph after the purge (see the sketch after this list)
   # ansible all -m file -a "name=/var/lib/ceph state=absent"
5. After this deploy there are active rgw daemons
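
As one sequence, the reproduce-with-workaround flow looks roughly like the
following sketch. It assumes the commands are run from the ceph-ansible
directory with the inventory already configured for the deployment; in some
ceph-ansible versions purge-cluster.yml must first be copied out of
infrastructure-playbooks/.
   # ansible-playbook purge-cluster.yml
   # ansible all -m file -a "name=/var/lib/ceph state=absent"
   # ansible-playbook site.yml
With the /var/lib/ceph removal in between, the redeployed cluster comes up
with active rgw daemons.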

Actual results:
  # ceph -s
    health: HEALTH_OK
    osd: 312 osds: 312 up, 312 in
    usage:   42288 MB used, 539 TB / 539 TB avail
  BUT NO RGWs
[root@c07-h01-6048r ~]# ll /var/lib/ceph ← not totally purged!
total 4
drwxr-xr-x.  2 ceph ceph    6 Aug 30 23:41 bootstrap-mds
drwxr-x---.  2 ceph ceph    6 Aug 30 23:41 bootstrap-mgr
drwxr-xr-x.  2 ceph ceph   26 Oct 18 20:18 bootstrap-osd
drwxr-xr-x.  2 ceph ceph    6 Aug 30 23:41 bootstrap-rbd
drwxr-xr-x.  2 ceph ceph   26 Oct 18 20:25 bootstrap-rgw
drwxr-xr-x.  2 ceph ceph    6 Oct  8 17:10 mds
drwxr-xr-x.  2 ceph ceph    6 Oct  8 17:10 mon
drwxr-xr-x. 28 ceph ceph 4096 Oct 18 20:24 osd
drwxr-xr-x.  3 ceph ceph   36 Aug 30 23:41 radosgw
drwxr-xr-x.  2 ceph ceph   37 Aug 30 23:41 tmp
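
A quick way to check whether the purge left /var/lib/ceph behind on every node
(a sketch, assuming the same Ansible inventory used for the deployment):
   # ansible all -m command -a "ls -la /var/lib/ceph"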

Expected results:
  # ceph -s
  cluster:
    id:     3681dd84-628c-4fa7-8bd5-578b4b06cf5c
    health: HEALTH_OK 
  services:
    mon: 3 daemons, quorum c05-h33-6018r,c06-h29-6018r,c07-h29-6018r
    mgr: c07-h30-6018r(active)
    osd: 312 osds: 312 up, 312 in
    rgw: 12 daemons active 
  data:
    pools:   4 pools, 32 pgs
    objects: 199 objects, 9014 bytes
    usage:   42594 MB used, 539 TB / 539 TB avail
    pgs:     32 active+clean


Additional info:
Using osd_scenario=lvm
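For reference, a sketch of how the configured scenario can be confirmed on the
admin node, assuming the variables live in the usual group_vars directory next
to the playbook (the path is an assumption):
   # grep -R "osd_scenario" group_vars/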

Comment 3 Sébastien Han 2018-10-24 15:49:51 UTC
Fixed in 3.1z1, upstream is https://github.com/ceph/ceph-ansible/releases/tag/v3.1.8

*** This bug has been marked as a duplicate of bug 1633563 ***

