Bug 1687660 - Remove mentions of gdeploy
Summary: Remove mentions of gdeploy
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: doc-Deploying_RHHI
Version: rhhiv-1.6
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHHI-V 1.6
Assignee: Laura Bailey
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On:
Blocks: RHHI-V-1-6-Documentation-InFlight-BZs
 
Reported: 2019-03-12 05:07 UTC by Laura Bailey
Modified: 2019-05-20 04:19 UTC
CC List: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-05-20 04:19:15 UTC
Embargoed:


Attachments
cache_inventory.yml (1.40 KB, text/plain), 2019-04-02 10:30 UTC, Gobinda Das
lvm_cache.yml (161 bytes, text/plain), 2019-04-02 10:30 UTC, Gobinda Das
replace_node_inventory.yml (2.80 KB, text/plain), 2019-04-02 10:31 UTC, Gobinda Das
replace_node.yml (191 bytes, text/plain), 2019-04-02 10:31 UTC, Gobinda Das
arbitrated_replicated_inventory.yml (5.01 KB, text/plain), 2019-04-02 10:32 UTC, Gobinda Das
arbitrated_replicated.yml (268 bytes, text/plain), 2019-04-02 10:33 UTC, Gobinda Das
normal_replicated_inventory.yml (4.28 KB, text/plain), 2019-04-02 10:33 UTC, Gobinda Das
normal_replicated.yml (264 bytes, text/plain), 2019-04-02 10:34 UTC, Gobinda Das

Comment 18 Sahina Bose 2019-03-29 11:13:30 UTC
(In reply to Laura Bailey from comment #15)
> Based on Sas in gChat:
> @Laura Bailey RHGS SSL/TLS encryption is not available with gluster-ansible.
> So users that need such encryption should follow the manual steps to enable
> it, as per the RHGS Admin Guide instructions.
> We can't fall back to or retain gdeploy steps for this. Remember, we have
> completely removed gdeploy from the RHVH platform; we can only use
> gluster-ansible.
> 
> - Scaling is handled, because adding new nodes and a new volume seems to
> just need a screenshot replaced:
> https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure_for_virtualization/1.5/html-single/maintaining_red_hat_hyperconverged_infrastructure_for_virtualization/#task-cockpit-gluster_mgmt-expand_cluster
> 

This captures scaling by adding new nodes and creating new volumes.
Now that volume expansion is supported, it's also possible to scale by adding new nodes and expanding existing volumes. I think you've captured this in the expand volume section?

> 
> But I need to know the correct ways to do the following in RHHI-V 1.6.
> 
> 
> ====================================
> SSL/TLS CONFIGURATION: HELP
> 
> https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure_for_virtualization/1.5/html-single/maintaining_red_hat_hyperconverged_infrastructure_for_virtualization/#configure-encryption
> 
> QUESTION: What exactly did the set_up_encryption.conf gdeploy conf file do?
> Was it just the following, with self-signed certs?
>

+Sachi to answer this.

> 
> ---
>  Configure Certificate Authority signed encryption
> Important
> 
> Ensure that you have appropriate certificates signed by a Certificate
> Authority before proceeding. Obtaining certificates is outside the scope of
> this document.
> 
>     Place certificates in the following locations on all nodes.
> 
>     /etc/ssl/glusterfs.key
>         The node’s private key. 
>     /etc/ssl/glusterfs.pem
>         The certificate signed by the Certificate Authority, which becomes
> the node’s certificate. 
>     /etc/ssl/glusterfs.ca
>         The Certificate Authority’s certificate. 
> 
>     Stop all volumes
> 
>     # gluster volume stop all
> 
>     Restart glusterd on all nodes
> 
>     # systemctl restart glusterd
> 
>     Enable TLS/SSL encryption on all volumes
> 
>     # gluster volume set <volname> client.ssl on
>     # gluster volume set <volname> server.ssl on
> 
>     Specify access permissions on all hosts
> 
>     # gluster volume set <volname> auth.ssl-allow "host1,host2,host3"
> 
>     Start all volumes
> 
>     # gluster volume start all
> 
> 
> ================================
> 
> LVCACHE SETUP:
> - Needs to be replaced with a UI or manual approach:
> https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure_for_virtualization/1.5/html-single/maintaining_red_hat_hyperconverged_infrastructure_for_virtualization/#config-lvmcache


+Sachi - I think there's an equivalent gluster-ansible role to do this?

> - https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure_for_virtualization/1.5/html-single/managing_red_hat_gluster_storage_using_rhv_administration_portal/?lb_target=preview#rhv-gluster-brick-mgmt-create
> lets you assign a cache volume but doesn't mention how to create one.

This does create an lvmcache volume on the specified device, so no manual steps are needed from the user here.
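
For reference, a minimal sketch of what that setup could look like with the gluster.infra role's cache variables. The variable names are assumed from the gluster-ansible-infra role; the device names, sizes, and LV names below are placeholders, and the cache_inventory.yml / lvm_cache.yml files later attached to this bug are the authoritative examples.

<snip>
# lvm_cache.yml - playbook (sketch)
---
- hosts: cache_host
  remote_user: root
  roles:
    - gluster.infra
</snip>

<snip>
# cache_inventory.yml - inventory (sketch; all values are placeholders)
cache_host:
  hosts:
    host1.example.com:
  vars:
    gluster_infra_cache_vars:
      - vgname: gluster_vg_sdb                     # existing VG holding the thinpool
        cachedisk: /dev/sdc                        # fast (SSD/NVMe) device to attach as cache
        cachelvname: cachelv_gluster_thinpool_sdb
        cachethinpoolname: gluster_thinpool_sdb
        cachelvsize: 20G
        cachemode: writethrough
</snip>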

> 
> ================================
> 
> REPLACING HOSTS:
> The current host and brick replacement processes use gdeploy:
> https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure_for_virtualization/1.5/html/maintaining_red_hat_hyperconverged_infrastructure_for_virtualization/replacing_gluster_storage_host
> 
> However, it looks like the gdeploy part is just "prepare the host" - can I
> replace that with a manual "prepare a hyperconverged host" step and keep the
> existing processes?
> The config file used for the host prep step is
> https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure_for_virtualization/1.5/html/maintaining_red_hat_hyperconverged_infrastructure_for_virtualization/example_gdeploy_configuration_files#ref-replace-host-gdeploy-conf

I think we should be able to come up with a similar playbook using gluster-ansible to replace this section.
Sachi?
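
Not definitive, but to seed discussion: the replacement could be as small as a single play applying the gluster-ansible maintenance role, with hostnames supplied as variables. The role and variable names below are assumptions, not verified; the replace_node.yml / replace_node_inventory.yml files later attached to this bug are the authoritative examples.

<snip>
# replace_node.yml - hypothetical sketch
---
- hosts: cluster_node                   # a healthy peer in the cluster (assumed group name)
  remote_user: root
  gather_facts: false
  vars:
    gluster_maintenance_old_node: old-host.example.com   # assumed variable names
    gluster_maintenance_new_node: new-host.example.com
  roles:
    - gluster.maintenance
</snip>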

> 
> ================================
> 
> CHANGE SHARD SIZE:
> https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure_for_virtualization/1.5/html/maintaining_red_hat_hyperconverged_infrastructure_for_virtualization/perf?lb_target=preview#task-perf-changing-shard-size
> 
> Uses various gdeploy files, but I think all the individual steps can be
> performed in the UI.

Again, the gdeploy conf samples provided help to create bricks and volumes. For the brick creation part, I think we have the equivalent steps documented in the expanding volume section (https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure_for_virtualization/1.6/html-single/maintaining_red_hat_hyperconverged_infrastructure_for_virtualization/?lb_target=preview#creating-bricks-using-ansible). We can provide a playbook that allows for volume creation as well; see the sketch below.
Sachi, it should be possible to add creating a volume to this section?
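
As a starting point, a volume-creation play via the gluster.cluster role might look roughly like this. This is a sketch only: the variable names are assumed from the gluster-ansible cluster role, and the hostnames, volume name, and brick path are placeholders.

<snip>
# create_volume.yml - hypothetical sketch
---
- hosts: host1.example.com              # any one node in the trusted pool
  remote_user: root
  gather_facts: false
  vars:
    gluster_cluster_hosts:
      - host1.example.com
      - host2.example.com
      - host3.example.com
    gluster_cluster_volume: data_new    # new volume, created with the new shard size
    gluster_cluster_bricks: /gluster_bricks/data_new/data_new
    gluster_cluster_replica_count: 3
  roles:
    - gluster.cluster
</snip>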

> Also, shard size changed between 1.0 and 1.1 - are we safe to remove this
> section now or do we actually expect any customers to upgrade from 1.0 to
> 1.6?

That's right - it's a change from 1.0 to 1.1. But there could be customers who want to change the shard size from the older value to the newer supported 64MB shard size. I think this section is helpful in such cases.

Comment 25 Gobinda Das 2019-04-02 10:30:04 UTC
Created attachment 1550942 [details]
cache_inventory.yml

Comment 26 Gobinda Das 2019-04-02 10:30:51 UTC
Created attachment 1550943 [details]
lvm_cache.yml

Comment 27 Gobinda Das 2019-04-02 10:31:33 UTC
Created attachment 1550944 [details]
replace_node_inventory.yml

Comment 28 Gobinda Das 2019-04-02 10:31:57 UTC
Created attachment 1550945 [details]
replace_node.yml

Comment 29 Gobinda Das 2019-04-02 10:32:32 UTC
Created attachment 1550946 [details]
arbitrated_replicated_inventory.yml

Comment 30 Gobinda Das 2019-04-02 10:33:01 UTC
Created attachment 1550947 [details]
arbitrated_replicated.yml

Comment 31 Gobinda Das 2019-04-02 10:33:42 UTC
Created attachment 1550948 [details]
normal_replicated_inventory.yml

Comment 32 Gobinda Das 2019-04-02 10:34:07 UTC
Created attachment 1550949 [details]
normal_replicated.yml

Comment 36 Mugdha Soni 2019-04-05 04:03:05 UTC
(In reply to Laura Bailey from comment #34)

Laura, I am not able to reach any of the documentation. A 404 error is encountered every time. Even after the workaround mentioned below, I am not able to access the documentation in either Firefox or Google Chrome.

 1. Open a new browser instance.
 2. Go to access.redhat.com and log in.
 3. Copy any of the following URLs into place.
 4. If you get a 404 error, try disabling caching in your browser. Developer Tools > Network tab > Disable cache.
 5. If you click on any links, check that "?lb_target=stage" still appears in the URL. If it doesn't, you're probably not seeing new content. You can copy it into place manually after the title of the book (as seen in the URLs below). 

Could you please provide the PDF for all documentation required in this bug?

Comment 42 SATHEESARAN 2019-04-09 08:08:52 UTC
Thanks, Laura, for catching the missing parts and taking proactive action
to correct the documentation.

Verified the contents of:
- Installation guide (single node, RHEL based, RHV based)
- Maintenance guide (single node, RHEL based, RHV based)

All mentions of gdeploy were replaced by gluster-ansible.


The replace-host workflow in RHHI-V 1.5 chapters 12 and 13 needs to be retained.
There is no procedure to directly replace the host.
The only change here is that node preparation for host replacement will be
automated and done via gluster-ansible.

The following changes are required:
1. Restore chapters 12 & 13 from the RHHI-V 1.5 maintenance docs (both RHEL & RHV)
2. Replace occurrences of replace_host.yml or replace_node.yml with node_prep.yml
3. Update node_prep.yml by adding the following content to the end of node_prep.yml (the inventory file):
<snip>
    cluster_nodes:
      - <new-host.example.com>
    gluster_features_hci_cluster: '{{ cluster_nodes }}'
</snip>
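
For clarity, the tail of the node_prep.yml inventory would then look something like this (a sketch: the surrounding keys and group name are illustrative, and only the last two entries are the required addition):

<snip>
hc_nodes:
  hosts:
    new-host.example.com:
  vars:
    # ... existing gluster_infra_* variables ...
    cluster_nodes:
      - new-host.example.com
    gluster_features_hci_cluster: '{{ cluster_nodes }}'
</snip>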

Comment 47 SATHEESARAN 2019-04-10 07:50:08 UTC
Verified with the doc link that all occurrences of gdeploy and gdeploy examples
are properly replaced by gluster-ansible.

