Bug 1846461

Summary: RHCS 4 CephFS Documentation Bugs
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Patrick Donnelly <pdonnell>
Component: Documentation
Assignee: Aron Gunn <agunn>
Status: CLOSED CURRENTRELEASE
QA Contact: Hemanth Kumar <hyelloji>
Severity: high
Priority: unspecified
Version: 4.0
CC: agunn, hyelloji, kdreyer, sostapov, tchandra
Target Milestone: z2
Target Release: 4.1
Hardware: All
OS: All
Doc Type: If docs needed, set a value
Type: Bug
Last Closed: 2021-01-12 20:45:01 UTC
Bug Blocks: 1859104    

Description Patrick Donnelly 2020-06-11 15:50:12 UTC
Meta comment: this is a BZ tracking several issues with the RHCS 4 documentation that I found during a review. Scott Ostapovicz / docs asked me to create a 4.1z1 BZ to address this.

Description of problem:

* Section 2.4/2.5: these configurations are obsolete. Remove.

* Section 2.7: Procedure 2 (mds deactivate) is unnecessary. The Monitors now do this automatically. Instead, poll `ceph fs status` for changes until the cluster stabilizes at the new max_mds.
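
  For example, something like (the file system name "cephfs" is a placeholder):

      ceph fs set cephfs max_mds 1     # Monitors deactivate the surplus ranks automatically
      ceph fs status cephfs            # poll until the active ranks match the new max_mds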

* Section 3.3: we should recommend using the new `ceph fs authorize` command: https://docs.ceph.com/docs/nautilus/cephfs/client-auth/
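
  A sketch of what the section could show (file system, client name, and paths are placeholders):

      # read-only on the root, read/write on /bar
      ceph fs authorize cephfs client.foo / r /bar rw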

* Section 3.3: add a discussion of the snapshot/layout caps: https://docs.ceph.com/docs/nautilus/cephfs/client-auth/#layout-and-quota-restriction-the-p-flag
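
  Roughly, the extra flags look like this (client names and paths are placeholders):

      # 'p' lets the client modify layouts and quotas under /bar
      ceph fs authorize cephfs client.foo / rw /bar rwp

      # 's' lets the client create and delete snapshots under /bar
      ceph fs authorize cephfs client.bar / rw /bar rws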

* Section 3.4.2: The mount.ceph command now reads the local /etc/ceph/ceph.conf to learn the secret (and other options), so it should only be necessary to specify the "name=" option. (It may still be worth showing how to do this manually, but the easy way should be the recommendation.)
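
  A minimal sketch of the simplified mount, assuming the client keyring and ceph.conf are already under /etc/ceph (monitor host, mount point, and client name are placeholders):

      mount -t ceph mon-host:/ /mnt/cephfs -o name=client1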

* Section 3.4.3: likewise, the fstab entry should no longer need to specify the secretfile, for the same reason as in Section 3.4.2.
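
  The corresponding fstab entry would reduce to something like (host and client name are placeholders):

      mon-host:/    /mnt/cephfs    ceph    name=client1,_netdev    0 0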

* Section 3.6: we should _not_ show creating a file system whose default data pool is EC. (We already have an "Important" marker saying: "Red Hat recommends to use the replicated pool as the default data pool.") The section should instead use the `fs add_data_pool` command and show setting a directory layout so that the new EC pool is used. That corresponds to point "6." in this section, but it does not yet show setting the directory layout (see Section 4.4.2).
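
  A sketch of that flow (pool, file system, and directory names are placeholders):

      # create an EC pool and allow overwrites so CephFS can use it
      ceph osd pool create cephfs-data-ec 64 erasure
      ceph osd pool set cephfs-data-ec allow_ec_overwrites true

      # add it as a secondary data pool; the replicated pool stays the default
      ceph fs add_data_pool cephfs cephfs-data-ec

      # point a new, empty directory at the EC pool via a file layout (see Section 4.4.2)
      setfattr -n ceph.dir.layout.pool -v cephfs-data-ec /mnt/cephfs/ec-data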

* New Section 3.X: Document cluster down: https://docs.ceph.com/docs/nautilus/cephfs/administration/#taking-the-cluster-down
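
  The gist, per the linked page (the file system name is a placeholder):

      ceph fs set cephfs down true     # flushes journals and stops the MDS daemons cleanly
      ceph fs set cephfs down false    # bring the file system back up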

* New Section 3.X: Document taking cluster down rapidly: https://docs.ceph.com/docs/nautilus/cephfs/administration/#taking-the-cluster-down-rapidly-for-deletion-or-disaster-recovery
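
  The rapid variant, per the linked page (the file system name is a placeholder):

      ceph fs fail cephfs    # fails the MDS daemons without flushing journals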

* New Section 3.X: Document standby replay: https://docs.ceph.com/docs/nautilus/cephfs/standby/#configuring-standby-replay
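
  For example (the file system name is a placeholder):

      ceph fs set cephfs allow_standby_replay true    # allocate a standby-replay daemon per active MDS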

* New Section 3.X: Document standby_count_wanted: https://docs.ceph.com/docs/nautilus/cephfs/standby/#managing-failover
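
  For example (the file system name is a placeholder):

      ceph fs set cephfs standby_count_wanted 2    # warn via cluster health if fewer standbys are available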

* New Section 3.X: Document client eviction: https://docs.ceph.com/docs/nautilus/cephfs/eviction/
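
  The basic commands from the linked page (the rank and client id shown are placeholders):

      ceph tell mds.0 client ls              # list clients attached to rank 0
      ceph tell mds.0 client evict id=4305   # evict a client by the id from the listing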

* Section 3.7: I suggest this should be moved to a new section titled "Programmatic Volume Management" which documents this new "fs volume" and "fs subvolume" interface. (As I noted in another BZ, it's weird to only document cloning subvolumes from snapshots.) See: https://docs.ceph.com/docs/nautilus/cephfs/fs-volumes/
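
  A sketch of what such a section could cover (volume, subvolume, snapshot, and clone names are placeholders):

      ceph fs volume create vol1                                  # create a volume (file system plus its pools)
      ceph fs subvolume create vol1 subvol1                       # create a subvolume
      ceph fs subvolume getpath vol1 subvol1                      # path to mount or export
      ceph fs subvolume snapshot create vol1 subvol1 snap1        # snapshot the subvolume
      ceph fs subvolume snapshot clone vol1 subvol1 snap1 clone1  # clone the snapshot into a new subvolume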

* Section 4.2: To set pins, the +p cap is required (see note on Section 3.3)
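
  For example (client name, file system, and path are placeholders):

      ceph fs authorize cephfs client.foo / rwp             # 'p' is needed to set ceph.dir.pin
      setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/pinned-dir  # pin the subtree to MDS rank 1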

* Section 4.7: Procedures 1-3 (`ceph fs set name cluster_down true`...) condense to a single command: `ceph fs fail name`.
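
  Roughly (file system and daemon names are placeholders):

      # old multi-step procedure
      ceph fs set cephfs cluster_down true
      ceph mds fail cephfs-a                 # repeated for each MDS daemon

      # new single command
      ceph fs fail cephfs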

Comment 3 Hemanth Kumar 2020-09-16 09:39:34 UTC
All the changes made to the doc look good to me. Moving to verified.