Bug 1304961

Summary: Documentation for purgedata is incorrect
Product: Red Hat Ceph Storage Reporter: Brad Hubbard <bhubbard>
Component: Documentation    Assignee: John Wilkins <jowilkin>
Status: CLOSED CURRENTRELEASE QA Contact: Vasishta <vashastr>
Severity: medium Docs Contact:
Priority: medium    
Version: 1.3.1    CC: hnallurv, jowilkin, kdreyer, ngoswami, vashastr
Target Milestone: rc   
Target Release: 1.3.3   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2016-09-30 17:19:45 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:

Description Brad Hubbard 2016-02-05 05:28:50 UTC
Description of problem:

In https://access.redhat.com/documentation/en/red-hat-ceph-storage/version-1.3/red-hat-ceph-storage-13-installation-guide-for-rhel-x86-64/#create-cluster we state the following.

"If at any point you run into trouble and you want to start over, execute the following to purge the configuration: 

ceph-deploy purgedata <ceph-node> [<ceph-node>]
ceph-deploy forgetkeys

To purge the Ceph packages too, you may also execute:

ceph-deploy purge <ceph-node> [<ceph-node>]

If you execute purge, you must re-install Ceph."

In fact, "purgedata" refuses to run until the Ceph packages have been uninstalled, as the following code and a quick test show.

def purgedata(args):
    LOG.debug(
        'Purging data from cluster %s hosts %s',
        args.cluster,
        ' '.join(args.host),
        )

    installed_hosts = []
    for hostname in args.host:
        distro = hosts.get(hostname, username=args.username)
        ceph_is_installed = distro.conn.remote_module.which('ceph')
        if ceph_is_installed:
            installed_hosts.append(hostname)
        distro.conn.exit()

    if installed_hosts:
        LOG.error("Ceph is still installed on: %s", installed_hosts)
        raise RuntimeError("refusing to purge data while Ceph is still installed")

$ ceph-deploy purgedata dell-per320-07
...
[ceph_deploy.install][ERROR ] Ceph is still installed on: ['dell-per320-07']
[ceph_deploy][ERROR ] RuntimeError: refusing to purge data while Ceph is still installed
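The guard in the code above can be sketched as a stand-alone function. This is a simplified replica for illustration, not the real ceph-deploy code; `ceph_installed` is a hypothetical stand-in for the remote `which ceph` lookup that ceph-deploy performs on each host:

```python
def purgedata_guard(hosts, ceph_installed):
    """Refuse to purge data while the 'ceph' binary is present on any host.

    hosts          -- list of hostnames to check
    ceph_installed -- callable(host) -> bool, standing in for the remote
                      'which ceph' lookup done by the real ceph-deploy code
    """
    installed = [h for h in hosts if ceph_installed(h)]
    if installed:
        raise RuntimeError(
            "refusing to purge data while Ceph is still installed: %s" % installed)
```

So a node only passes the guard once 'ceph' is no longer on its PATH, which is exactly the behavior the quoted documentation contradicts.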

"purgedata" is actually used to clean up /var/lib/ceph and /etc/ceph after ceph is uninstalled.

I believe the confusion comes from the upstream documentation for hammer here.

  http://docs.ceph.com/docs/hammer/rados/deployment/ceph-deploy-purge/

It states:

"Purge Data

To remove all data from /var/lib/ceph (but leave Ceph packages intact), execute the purgedata command.

    ceph-deploy purgedata {hostname} [{hostname} ...]"

I'll create a tracker for the upstream docs and link it here.

Version-Release number of selected component (if applicable):
1.3

Comment 1 Brad Hubbard 2016-02-05 05:33:21 UTC
Turns out there are two trackers for this upstream already.

http://tracker.ceph.com/issues/14636
http://tracker.ceph.com/issues/12481

Comment 3 Vasishta 2016-09-09 13:42:24 UTC
Hi John,
The GitLab link is inaccessible.
Can you please provide the actual doc links so I can verify the fix?